Virtual Reality Computer Science The Technology of the Future

Virtual reality (VR) is the use of computer modeling and simulation to let a person interact with an artificial three-dimensional (3-D) visual or other sensory environment. Through interactive equipment such as goggles, headsets, gloves, or bodysuits that send and receive information, VR applications immerse the user in a computer-generated environment that mimics reality. In a traditional VR setup, the user wears headgear whose stereoscopic screens display dynamic images of a virtual environment.

Motion sensors track the user’s movements and update the display in real time, the instant the user moves. This creates the sensation of “presence,” or telepresence. As a result, a user can walk through a simulated suite of rooms, changing perspective and viewpoint by turning their head and taking steps. Wearing data gloves equipped with force-feedback devices that provide the sensation of touch, the user can even pick up and handle objects seen in the virtual environment.
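The loop described above, sample the head pose and redraw the scene, can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the yaw/pitch inputs and the rendering stub are invented for the example and do not correspond to any particular headset’s SDK.

```python
import math

def view_direction(yaw_deg, pitch_deg):
    """Convert a head orientation (yaw/pitch in degrees, as a motion
    sensor might report it) into a unit gaze vector that the renderer
    uses to redraw the scene for the current frame."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (
        math.cos(pitch) * math.sin(yaw),   # x: left/right
        math.sin(pitch),                   # y: up/down
        math.cos(pitch) * math.cos(yaw),   # z: forward
    )

def render_frame(pose):
    """Stand-in for the per-frame redraw: the display changes the
    instant the tracked pose changes."""
    gaze = view_direction(*pose)
    return f"drawing scene along gaze {tuple(round(c, 2) for c in gaze)}"

# Looking straight ahead, then turning the head 90 degrees to the right:
print(render_frame((0, 0)))
print(render_frame((90, 0)))
```

Running the loop once per display refresh, with the pose re-sampled each iteration, is what produces the sensation of presence the text describes.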

Early Work

Techniques for building imaginary worlds, setting storylines in fictional places, and tricking the senses have long captivated artists, performers, and entertainers. Creative and entertainment media offered numerous precedents for virtual reality’s suspension of disbelief in an artificial environment. Since antiquity, paintings and vistas have been used to create illusory environments in homes and public spaces, culminating in the massive panoramas of the 18th and 19th centuries.

By blurring the visual boundaries between the two-dimensional images depicting the principal scenes and the three-dimensional spaces from which they were viewed, panoramas gave spectators the impression of being present in the action shown. Over the course of the twentieth century, this pictorial heritage inspired a variety of media designed to produce comparable effects, from futuristic theatre designs, stereopticons, and 3-D movies to IMAX theatres. For example, Fred Waller’s research into vision and depth perception led to the Cinerama widescreen film format, initially called Vitarama when Waller and Ralph Walker devised it for the 1939 New York World’s Fair.

Education and Training

Training for real-life activities has always been a popular application of virtual reality systems. The appeal of simulation is that it can provide training comparable or nearly identical to real-world practice, at lower cost and with greater safety. This is especially true of military training, and pilot instruction during World War II was the first notable application of commercial simulators. Flight simulators use visual and motion feedback to simulate flight while the trainee sits in an enclosed mechanical system on the ground.

The Link Company, founded by former piano manufacturer Edwin Link, began constructing the first prototype Link Trainers in the late 1920s, eventually settling on the “blue box” design adopted by the Army Air Corps in 1934. The early systems included motion feedback to build familiarity with the flight controls: pilots learned to fly in a simulated cockpit that moved hydraulically in response to their actions.

Inspired by the Link flight trainer’s controls, Ivan Sutherland proposed that such displays include multiple sensory outputs, force-feedback joysticks, muscle sensors, and eye trackers; a user would be fully immersed in the displayed environment and fly through “concepts that never before had any visual representation.” Advances in flight simulators, human-computer interfaces, and augmented reality systems pointed toward immersive, real-time control systems for study and training as well as enhanced performance.

The key notion of this research was that human pilots’ ability to deal with spatial data depended on the data being “represented in a way that takes advantage of the human’s innate perceptual capabilities.” Thomas Furness used the head-mounted display (HMD) to create a system that projected data, including computer-generated 3-D maps, forward-looking infrared and radar imagery, and avionics data, into an immersive 3-D virtual space that the pilot could see and hear in real time.

Telepresence, the use of robotic devices controlled remotely through mediated sensory feedback to complete a task, brought virtual reality to surgery. The growth of microsurgery and other less invasive surgical techniques in the 1970s and 1980s laid the groundwork for virtual surgery. By the late 1980s, micro-cameras attached to endoscopic devices were relaying images to teams of surgeons watching one or more monitors, often in different locations.

In the early 1990s, DARPA financed research to develop telepresence workstations for surgical procedures. This was Sutherland’s “portal into a virtual world,” complete with sensory feedback that could match a surgeon’s fine motor control and hand-eye coordination. SRI International produced the first telesurgery equipment in 1993, and the first robotic surgery was performed at the Broussais Hospital in Paris in 1998.

As virtual worlds became more detailed and immersive, people began to spend time in them for amusement, aesthetic inspiration, and socializing. Virtual locations designed as fantasy spaces, focused on the participant’s behavior rather than on replicating a real environment, proved especially well suited to entertainment. In 1969 Myron Krueger of the University of Wisconsin began a series of studies into the nature of human creativity in virtual environments, which he eventually dubbed artificial reality.

Much of Krueger’s work, particularly his VIDEOPLACE system, dealt with how a participant’s digitized image interacted with computer-generated graphical objects. VIDEOPLACE could analyze the user’s actions in the real world and translate them into preprogrammed interactions with the system’s virtual objects. The aesthetic dimension of the system shows in interactions with titles like “finger painting” and “digital sketching.” VIDEOPLACE differed from training and research simulations in key ways.

In particular, the system shifted the emphasis from the user viewing a computer-generated world to the computer recognizing the user’s actions and turning them into compositions of objects and space in the virtual world. Krueger found that as the focus shifted to responsiveness and engagement, fidelity of representation mattered less than the interactions available to participants and the speed with which the system responded with visuals or other forms of sensory information.

The ability to manipulate virtual objects, not merely see them, is central to compelling virtual worlds, which is why the data glove became synonymous with the rise of virtual reality in business and popular culture. Data gloves transmit a user’s hand and finger movements to a virtual reality system, which converts the wearer’s gestures into operations on virtual objects. The Sayre Glove, developed in 1977 at the University of Illinois for a project funded by the National Endowment for the Arts, was named after one of the team members.
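The pipeline just described, sensor readings in, virtual-object operations out, can be illustrated with a short Python sketch. The finger ordering, flexion thresholds, and gesture vocabulary here are invented for the example; real data gloves report far richer sensor data and use their own calibration.

```python
def classify_gesture(flexion):
    """Map per-finger flexion readings (0.0 = straight, 1.0 = fully
    bent; order: thumb, index, middle, ring, pinky) to a coarse
    gesture. The 0.6 threshold is illustrative only."""
    bent = [f > 0.6 for f in flexion]
    if all(bent):
        return "grab"       # closed fist: take hold of an object
    if not any(bent):
        return "release"    # open hand: let go
    if not bent[1] and all(bent[2:]):
        return "point"      # index extended, other fingers curled
    return "idle"

def apply_to_object(gesture, held):
    """Translate the recognized gesture into a virtual-object
    operation: is the user now holding the object?"""
    if gesture == "grab":
        return True
    if gesture == "release":
        return False
    return held

held = False
held = apply_to_object(classify_gesture([0.9, 0.8, 0.9, 0.85, 0.9]), held)
print("holding object:", held)   # closed fist picks the object up
held = apply_to_object(classify_gesture([0.1, 0.0, 0.1, 0.05, 0.1]), held)
print("holding object:", held)   # open hand releases it
```

The two-stage split, classify the gesture first, then apply it to the world, mirrors the text’s description of the glove transmitting movements to the VR system, which then converts them into object operations.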

Virtual Worlds Exist

VPL had shuttered its doors by the beginning of 1993, and analysts were predicting virtual reality’s collapse. Despite the failure of efforts to market VR workstations in the configuration stabilized at VPL and NASA, virtual-world, augmented reality, and telepresence technologies were successfully launched throughout the 1990s and into the twenty-first century as platforms for creative work, research, games, training environments, and social spaces. Military and medical needs continued to drive new technology through the decade, frequently in collaboration with academic institutions or the entertainment industry.

The purpose of NASA’s Visual Environment Display workstation was to “place viewers inside a picture,” which meant literally placing them inside an assembly of input and output devices in order to figuratively put them inside a computer. By the early 1990s, Mark Weiser of Xerox PARC had begun to describe a research program that aimed instead to bring computers into the human world, rather than the other way around.

Weiser presented the concept of ubiquitous computing in a 1991 Scientific American essay, “The Computer for the 21st Century.” Arguing that “the most profound technologies are those that disappear” by weaving “themselves into the fabric of everyday life until they are indistinguishable from it,” he proposed that future computing devices would outnumber people, embedded in physical environments, worn on the body, and communicating with one another through personal virtual agents.