Virtual Reality

Virtual reality is the projection of artificial stimuli onto the five senses. It can be used to create illusions (imaginary worlds) simulated on a computer. Various technologies that utilize VR, especially computer games, will incrementally reshape how we view society.

Interaction of virtual and real worlds
"The ultimate dream is to merge the real world and the virtual world into a totally seamless experience" -- PhotoSynth project

Development of simulation, rendering, VR input and output technologies will have wider implications than just better virtual environments. We can define three main modes of combining reality with virtual reality:


 * See simulated world and "be" in that simulated world - "vanilla" virtual reality
 * Stay in real world, but see simulated objects - augmented reality
 * See information about the real world, presented via computer - location based services, GIS. This information can also be used by robots for navigation in the real world.

The Mirror World video (30Mb, Quicktime) provides an interesting glimpse into the future connections between the virtual and real worlds. Another demo, from Toshiba, shows how our selves can be transferred into VR as avatars.

Scale and evolution
Virtual reality started with single-player worlds simulated on a local machine (the first 3D games, and research projects before that) - that is, computer games. The next step (late 1990s) was multiplayer worlds, where several participants could interact with limited realism. Another important development was persistent worlds - various MMORPGs and "general-purpose" virtual worlds such as ActiveWorlds, Second Life, and There. These persistent worlds run on clusters of servers (sometimes distributed) and usually allow creation of custom content and programming by users. More than ten million people play MMORPGs as of 2005, and about 100 thousand "play" in general-purpose worlds. Overall, more than 100 million people play 3D computer and video games online (45 million in 2002).

The next step (2010-2015) is going to be development of more open systems, where content can be moved across platforms and where separate worlds can be linked (for example a room in a virtual building can be simulated on a private server using different simulation software, but would still be accessible for the people walking in the virtual city). Open source may play a role there. Eventually virtual reality worlds will integrate into a global Metaverse running on a distributed grid.

The step after that will be the integration of these worlds with input/output technologies, such as VR goggles and brain-computer interfaces. By then most of the people will spend a significant part of their lives in virtual reality (playing, communicating, working, having sex). Eventually, uploading will make feasible a full migration into virtual reality, while robotic bodies will make the reverse possible.

Content acquisition


To bridge the gap between reality and virtual reality we need methods to quickly (not slowly and manually) convert objects from physical reality into digital models and back. This will have much wider implications than just more realistic games: it is going to gradually change what we consider reality.

Some technologies already exist: laser scanners and 3D printers for small objects. Some crude methods already exist to quickly generate 3D models of larger real-world scenes (using image processing and LIDAR) - urban landscapes and indoor environments.

It is already feasible and cost-effective to acquire photographic data for Yellow Pages using the "drive and shoot" model. Hi-resolution satellite images of urban areas are being incorporated into MSN Virtual Earth. Google is quietly doing similar work, and may do much more in the future.

Using a combination of these approaches, 3D models of cities will soon (est. 2007-2009) be built cheaply and quickly. To create a realistic virtual environment one would only need to clean up the raw data a bit, combine the aerial photos for rooftops and large buildings with ground-level images for details, and add virtual pedestrians and traffic on the streets.

Photosynth is an upcoming technology from Microsoft (videos, live demo, etc.) to recreate 3D environments from unstructured collections of photographs. In essence, it can take hundreds of photos of the Eiffel Tower from Flickr and automatically create a detailed 3D model.
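Once such a system has matched features across photos and estimated where each camera stood, recovering a 3D point comes down to intersecting the viewing rays from two cameras. A minimal sketch of that geometric step (the camera positions and ray directions here are made-up illustrative values, and the hard part - estimating the cameras from the photos themselves - is omitted):

```python
# Toy illustration of the core geometric step in photo-based 3D
# reconstruction: given two cameras' positions and their viewing rays
# toward the same feature, recover the 3D point by intersecting the rays.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def at(p, d, t): return tuple(x + t * y for x, y in zip(p, d))

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest points between rays p_i + t_i * d_i."""
    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # near zero => rays nearly parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1, q2 = at(p1, d1, t1), at(p2, d2, t2)
    return tuple((x + y) / 2 for x, y in zip(q1, q2))

# Two cameras at different positions, both sighting the point (0, 0, 10):
point = triangulate((-5, 0, 0), (5, 0, 10), (5, 0, 0), (-5, 0, 10))
print(point)   # -> (0.0, 0.0, 10.0)
```

In noisy real data the two rays never quite intersect, which is why the midpoint between them (rather than an exact intersection) is used.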

Current realism of computer games


As of 2007 we are on the threshold of realism in computer games. It is finally possible to simulate certain aspects of reality in real time, and with sufficient precision to call the result an accurate simulation overall.

For example, the Forza Motorsport racing simulation for Xbox is physically realistic. It is mostly on par with reality, even though it is not indistinguishable yet. To achieve this, programmers from Microsoft Game Studios take into account between 3,000 and 10,000 variables and simulate all aspects of driving, running the simulation at 240 ticks per second. For its Race Against Reality feature, Popular Science asked a veteran gamer and a professional race driver to extensively test-drive both real cars and their virtual counterparts. The conclusion was that the game's simulation is accurate.
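The 240-ticks-per-second figure reflects a standard technique: physics runs on a small fixed timestep, decoupled from the variable rendered frame rate. A minimal sketch of such a loop (the one-dimensional thrust-minus-drag car model and its constants are purely illustrative, not Forza's actual code):

```python
# Minimal fixed-timestep physics loop, as used by simulations that step
# physics at a higher rate (e.g. 240 Hz) than the rendered frame rate.

PHYSICS_HZ = 240
DT = 1.0 / PHYSICS_HZ

def physics_tick(state, dt):
    """Advance a toy 1D car model (thrust minus quadratic drag) one step."""
    thrust, drag_coeff, mass = 4000.0, 0.9, 1200.0   # illustrative constants
    accel = (thrust - drag_coeff * state["v"] ** 2) / mass
    state["v"] += accel * dt
    state["x"] += state["v"] * dt

def simulate(frame_times):
    """Consume variable-length frames, stepping physics in fixed DT chunks."""
    state = {"x": 0.0, "v": 0.0}
    accumulator = 0.0
    ticks = 0
    for frame_dt in frame_times:       # e.g. 1/60 s per rendered frame
        accumulator += frame_dt
        while accumulator >= DT:       # several physics ticks per frame
            physics_tick(state, DT)
            accumulator -= DT
            ticks += 1
    return state, ticks

state, ticks = simulate([1.0 / 60] * 60)   # one second of 60 fps frames
print(ticks)   # 240 physics ticks for one second of simulated wall time
```

Decoupling the rates keeps the physics deterministic and stable no matter how fast or slow frames are drawn.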

A similar level of realism is available in flight simulators, again from Microsoft. Some simulators are so realistic that pilots are allowed to log virtual hours just like real ones.

However, these simulations are not completely realistic yet. Several things still need to improve before we have perfect VR:

 * Graphics. Lighting and shadowing remain among the bigger problems, and realistic materials require technologies such as RealReflect to be developed.
 * Sound. There is still no good programmatic sound generation; it's mostly samples.
 * Global physics. It's possible to simulate several objects (cars, planes) very accurately, but an all-encompassing simulation is still too complex for the technology we have.
 * Simulation of acceleration, tactile contact and everything else related to physically "being there".
 * AI to make the world come alive.
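Programmatic sound, as opposed to playing back recorded samples, means computing the waveform from a model at runtime. The simplest possible sketch of the idea - a decaying sine wave, the crudest model of a plucked string (frequency, decay rate and sample rate here are just illustrative choices):

```python
import math

# Programmatic sound generation in its simplest form: compute the
# waveform from a model instead of playing back a recorded sample.
# Here, a 440 Hz sine with exponential decay -- a crude "plucked string".

SAMPLE_RATE = 44100   # CD-quality samples per second

def synthesize(freq_hz, duration_s, decay=3.0):
    """Return duration_s seconds of a decaying sine as floats in [-1, 1]."""
    n = int(SAMPLE_RATE * duration_s)
    return [
        math.exp(-decay * i / SAMPLE_RATE)
        * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        for i in range(n)
    ]

tone = synthesize(440.0, 1.0)
print(len(tone))   # 44100 samples for one second of audio
```

A real physically modeled engine note or tire squeal is vastly more complex, but it is built from the same primitive: a function from time (and game state) to sample values.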

The shader model (introduced in 2002-2005) made it possible to move graphics a step up from polygonal textured environments to much more realistic worlds. Games introduced in 2005 realistically simulate such superfluous details as raindrop splashes and smoke clouds (Call of Duty 2). Water shaders and 3D textures further enhance the realism.

'''Still, we have already entered the realm of virtual reality. In some aspects, although not in all, virtual environments are already as good as real ones.'''

The Interface


As of 2009, vision is the only sense that can be reproduced completely realistically, in the sense of being indistinguishable from real life, and only on a limited scale.


 * 1) Currently, external stimulation is possible. Large VR gaming stations are being developed. Alternatively, a user can wear glasses, headphones, and virtual reality gloves. Ultimately this should lead to high-quality retinal projectors (for vision).
 * 2) This video demonstrates how a Tera-scale computer could analyze images from multiple cameras in the home to capture body motions in 3D without any controller, special clothing, or blue screen in the background. The virtual character that mirrors the person's movements has been rendered using ray tracing techniques to display the scene more realistically - note the multiple reflections and shadows in the background. The ray tracing engine calculates the paths of individual light rays realistically using the laws of physics. This scene took many hours to generate on a powerful server, but future servers and PCs with Tera-scale processors will be able to do this in real time.
 * 3) Progress is being made on direct neural connections. The work is being done mostly in cochlear and retinal implants. Other senses, such as the vestibular system, can be controlled too.
 * 4) Ideally the interface would be a direct brain-computer link. At first it will be a connection to the cortex, allowing the computer to "read thoughts" and send information directly to the mind. Eventually the whole brain will become randomly accessible, with nanodevices able to control each and every neuron.
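The ray tracing mentioned in point 2 works by following individual light rays from the eye into the scene and testing each against the geometry. Its core operation, ray-sphere intersection, is small enough to sketch (a full renderer fires one such ray per pixel and then recurses for reflections and shadows):

```python
import math

# Core operation of a ray tracer: does a ray starting at `origin` with a
# normalized `direction` hit a given sphere, and at what distance?

def ray_sphere(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None if the ray misses."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c               # quadratic discriminant (a == 1)
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0     # nearer of the two roots
    return t if t > 0 else None          # hits behind the origin don't count

# Camera at the origin looking down +z at a unit sphere centered at z=5:
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))   # -> 4.0
print(ray_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0))   # -> None (miss)
```

The physical realism noted in the demo comes from applying exactly this kind of test, recursively, to millions of rays per frame - which is why real-time ray tracing needs so much compute.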

Nanomedical Virtual Reality
From Nanotech.biz:

''Question 5: Ray Kurzweil has proposed having billions of nanorobots positioned in our brains, in order to create full-immersion virtual reality. Do you think that such a scenario will ever be feasible?''


 * Yes of course. I first described the foundational concepts necessary for this in Nanomedicine, Vol. I (1999), including noninvasive neuroelectric monitoring (i.e., nanorobots monitoring neuroelectric signal traffic without being resident inside the neuron cell body, using >5 different methods), neural macrosensing (i.e., nanorobots eavesdropping on the body’s sensory traffic, including auditory and optic nerve taps), modification of natural cellular message traffic by nanorobots stationed nearby (including signal amplification, suppression, replacement, and linkage of previously disparate neural signal sources), inmessaging from neurons (nanorobots receiving signals from the neural traffic), outmessaging to neurons (nanorobots inserting signals into the neural traffic), direct stimulation of somesthetic, kinesthetic, auditory, gustatory, and ocular sensory nerves (including ganglionic stimulation and direct photoreceptor stimulation) by nanorobots, and the many neuron biocompatibility issues related to nanorobots in the brain, with special attention to the blood-brain barrier.


 * The key issue for enabling full-immersion reality is obtaining the necessary bandwidth inside the body, which should be available using the in vivo fiber network I first proposed in Nanomedicine, Vol. I (1999). Such a network can handle 10^18 bits/sec of data traffic, capacious enough for real-time brain-state monitoring.  The fiber network has a 30 cm³ volume and generates 4-6 watts waste heat, both small enough for safe installation in a 1400 cm³, 25-watt human brain.  Signals travel at most a few meters at nearly the speed of light, so transit time from signal origination at neuron sites inside the brain to the external computer system mediating the upload is ~0.00001 millisec, which is considerably less than the minimum ~5 millisec neuron discharge cycle time.  Neuron-monitoring chemical sensors located on average ~2 microns apart can capture relevant chemical events occurring within a ~5 millisec time window, since this is the approximate diffusion time for, say, a small neuropeptide across a 2-micron distance.  Thus human brain state monitoring can probably be “instantaneous”, at least on the timescale of human neural response, in the sense of “nothing of significance was missed.”


 * I believe Ray was relying upon these earlier analyses, among others, when making his proposals.
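The latency claim in the quote - that fiber transit time is negligible next to the neuron discharge cycle - is easy to check with back-of-the-envelope arithmetic, using only the constants the quote itself supplies:

```python
# Back-of-the-envelope check of the latency argument in the quote above:
# a signal crossing "at most a few meters" of fiber at roughly light
# speed, versus the ~5 millisec minimum neuron discharge cycle.

C = 3.0e8                        # speed of light, m/s
path_m = 3.0                     # a few meters of in vivo fiber
transit_ms = path_m / C * 1000   # one-way transit time in milliseconds

neuron_cycle_ms = 5.0            # minimum neuron discharge cycle (per the quote)

print(transit_ms)                # ~1e-05 ms, matching the "~0.00001 millisec"
print(neuron_cycle_ms / transit_ms)   # the neural timescale is ~500,000x slower
```

So even with generous overhead, signal transport is not the bottleneck the argument needs to worry about; the neural timescale dominates by five orders of magnitude.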

Completeness and complexity of simulation
Currently most games (or professional simulations) take into account only a few aspects of reality. A car racing game has a detailed simulation of the engine, tires, traction, drag, etc., but "pedestrians" are glued to the ground and all other objects, e.g. planes, move along predetermined paths. A real-time strategy or tycoon game simulates social dynamics and resource processing to some extent, but ignores the physics of individual characters moving around.

But the big trend is that the engines used by all games are becoming more and more similar. Nowadays a strategy game and a shooter can use the same graphics engine and the same physics engine (such as Havok 2) and look and feel rather similar (compare that with Dune 2 vs. Doom 2). John Carmack believes that universal engines will emerge around 2010-2015 and that he will probably program only two more generations of custom game engines.

Of course, as long as content creation and programming are expensive, games will avoid simulating aspects unnecessary for the core gameplay. But the inevitable emergence of a common engine base will make it possible to integrate different games into one world, and eventually it will be done. A crude example of this is Second Life, where the complexity is not limited, at least in principle. There are also more and more games that use completeness as a selling point, such as the GTA series and the upcoming Spore from Will Wright.

This increased completeness will eventually make the virtual world real. In that virtual reality, a "player" will be able to race, shoot, socialise, control armies, play with "physically real" objects, and do a very large subset of what is possible in reality.

Uses of virtual reality

 * tourism
 * entertainment, emerging from FPS games on one end and interactive attractions at Disneyland on the other.
 * social interaction, emerging from MMORPG and from first feeble attempts at online virtual conferences.

Current adoption


Reuters, Sun, IBM, Toyota, Sony BMG, CNet, Adidas, American Apparel and Starwood Hotels are among the companies that (as of 2006) operate in the Second Life virtual reality.

Social issues
Computer and video games are relatively non-controversial (bar some violent games). Virtual reality hardware, while clumsy and awkward, is accepted too. While full-scale VR a la Matrix, if implemented today, would be scary to most people, gradual development will probably be accepted easily. For example, Sony has discussed future neural interfaces several times.


 * A World of Warcraft World - a good description of the MMORPG-related trends for VR. Of course, it suffers horribly from the Single factor problem.

In the long run, VR may lead to Marriage obsolescence.

Technological development

 * 2010-2015: video-realistic graphics based on general-purpose stable rendering systems.
 * 2015-2020: integrated persistent worlds.
 * 2015-2020: global physics with unlimited world complexity and simulation of most physical aspects.
 * 2015-2020: sufficiently good non-human and domain specific human AI.
 * 2015-2025: programmatic sound. Most aspects of reality can be simulated sufficiently well.
 * 2015-2025: realistic simulations of all senses (through brain-computer interface).
 * 2030+ : good human-level artificial intelligence.
 * 2045+ : uploading and life in virtual reality.

Japanese NISTEP forecast, 2001
The NISTEP report lists the following predictions related to virtual reality:
 * 2010: Widespread use of electronic travel pamphlets and product catalogs that use virtual reality.
 * 2012: Emergence of electronic media that stimulate the pleasure center in the brain, causing a social problem similar to narcotic drugs. (this isn't VR per se, but similar technologies will be used)
 * 2015: Widespread use of multimedia-based virtual leisure, leading to a decline in the development of ecosystem-threatening resorts.
 * 2015: Sales from on-line shopping through a digital network (shopping through virtual malls) account for more than 50% of total sales by retail shops.

Description
The use of VR in the period covered by the NISTEP forecast (2010-2015) will probably include video-realistic virtual worlds with limited physical and AI realism, delivered through light, comfortable, high-quality VR goggles and possibly some primitive neural interfaces (maybe for motor control or mood enhancement). Incidentally, this will be the PlayStation 5 era, provided that a Gaming Depression doesn't occur within the next five or ten years.

By 2025-2030 both simulation and interface technologies will likely advance to a stage sufficient for a perfect Matrix-like simulation indistinguishable from reality, but the virtual avatar will still be controlled by the actual human brain. While the Matrix scenario of naked immobile humans floating in a nutrient medium, permanently immersed in a virtual reality, is possible, it is likely that most people will still spend much of their time in physical reality.