The idea that perceptions can be altered is nothing new. Over the centuries, artists, inventors, and magicians have produced intriguing illusions that trick the human eye—and the mind. The first extended reality most likely appeared in the form of cave drawings and petroglyphs, a fact that Howard Rheingold pointed out in his seminal 1991 book, Virtual Reality: The Revolutionary Technology of Computer-Generated Artificial Worlds—And How It Promises to Transform Society. Essentially, someone sketched an image of a bison, saber-toothed cat, or person on a rock wall.
Centuries later, graphic artists began to experiment with optical illusions. For example, in 1870, German physiologist Ludimar Hermann drew a white grid on a black background. As an individual’s eyes scan across the illustration, the intersection points—essentially dots—appear to change back and forth from white to gray (see figure 1.1). In the 1920s, M. C. Escher, a graphic artist from the Netherlands, began drawing pictures that depicted physical impossibilities, such as water traveling uphill. His art is still popular today.
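Part of what makes the Hermann grid striking is how simple the stimulus is. As a minimal sketch, it can be reproduced in a few lines using the Pillow imaging library (the sizes here are arbitrary choices):

```python
# Reproduce a Hermann grid: black squares separated by white "grid" lines.
# Stare at one intersection and the surrounding intersections seem to flicker gray.
from PIL import Image, ImageDraw

cell, gap, n = 60, 12, 6                        # square size, line width, grid size
side = n * cell + (n + 1) * gap
img = Image.new("RGB", (side, side), "white")   # white background forms the grid lines
draw = ImageDraw.Draw(img)
for row in range(n):
    for col in range(n):
        x = gap + col * (cell + gap)
        y = gap + row * (cell + gap)
        draw.rectangle([x, y, x + cell, y + cell], fill="black")
img.save("hermann_grid.png")
```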
What Hermann, Escher, and many others understood is that the brain can be tricked into believing things—or seeing things—that aren’t there or don’t necessarily make logical sense. The right stimulus and sensory input can seem just as convincing as reality—or alter the way we see reality. As early as the 1830s, inventors began tinkering with stereoscopes that used optics and mirrors along with a pair of lenses to produce a 3D view of objects. View-Master commercialized the concept when it introduced a handheld stereoscopic viewer in 1939 (fittingly, the brand has since ventured into the VR space with special goggles and software). The system presented 3D images of things and places around the world—from the Grand Canyon to Paris, France.1 It relied on cardboard disks with pairs of embedded photographic color images to produce a more realistic feeling of “being there.”
Figure 1.1 The Hermann grid was created by physiologist Ludimar Hermann. When a viewer looks directly at any given intersection, it appears white; intersections seen in peripheral vision appear to shift between white and gray. Source: Wikimedia Commons.
Yet, it is only since the introduction of digital computing that the concept of extended reality (XR) has emerged in the way we think about it today. Computing systems—comprising a variety of digital components and software—deliver convincing images, sound, feel, and other sensory elements that alter the way we experience existing physical things (augmented reality, or AR) or create entirely imaginary but realistic-seeming worlds (virtual reality, or VR). Either way, extended reality technologies allow us to step beyond, dare we say, the limitations of the physical world and explore places only imagination could go in the past.
Today, AR and VR, along with mixed reality (MR), which simultaneously blends elements of the physical world with virtual or augmented features, are appearing in all sorts of places and situations. They’re in movies, on gaming consoles, on smartphones, in automobiles, and on glasses and head-mounted displays (HMDs). They are transforming the world around us one click, tap, or glance at a time. A convergence of digital technologies, along with remarkable advances in computer power and artificial intelligence (AI), is delivering AR and VR into new and often uncharted territory.
Smartphone apps use a camera and AR to recognize physical things and display names, labels, and other relevant information on the screen. They tackle real-time language translation. They show what makeup or clothing looks like on a person. It’s even possible to see wine labels come alive with an animated display2 or show what a room will look like with a particular piece of furniture or a different color scheme.3
At the same time, VR is appearing in games, research labs, and industrial settings that use headsets, audio inputs, haptic gloves, and other sensory tools to generate ultrarealistic sensations. Over the coming decade and beyond, these systems will change countless tasks, processes, and industries. They will also dramatically alter interactions between people through the use of telepresence and telexistence. The former term refers to systems that allow people to feel “present” when they are physically separated. The latter revolves around the concept that a person can effectively be in a place separate from his or her physical body.4
Virtual reality creates the illusion that a person is in a different place or time. Perhaps it’s a meeting with business colleagues scattered all over the globe. Or perhaps it’s parachuting from an airplane, riding a roller coaster, or rafting the rapids of a raging river. But VR is more than the illusion of being somewhere else. These worlds can incorporate virtual artifacts (VA) that mimic objects from the physical world or things identifiable only to computers, including digital tokens that trigger certain events. For example, a person might purchase something or get paid using a digital currency such as Bitcoin when a specific automated action takes place, or a virtual object is used in a certain way.
At this point, let’s define the key terms you will encounter throughout this book. Merriam-Webster’s dictionary describes augmented reality as “an enhanced version of reality created by the use of technology to overlay digital information on an image of something being viewed through a device (such as a smartphone camera).”5 It’s important to note that this process takes place in real time. Merriam-Webster’s defines virtual reality as “an artificial environment which is experienced through sensory stimuli (such as sights and sounds) provided by a computer and in which one’s actions partially determine what happens in the environment.”6 Mixed reality, by comparison, juxtaposes real-world objects and virtual objects within a virtual space, or on AR glasses. This might mean projecting a virtual dog into a real living room, complete with actual furniture, that’s viewable through a smartphone or glasses. Or it might mean projecting a real dog into a virtual world, filled with both real and imaginary objects. Augmented and mixed realities are similar—and sometimes the same. In a basic sense, it’s best to think of augmented reality as supplemental and mixed reality as a mashup of both real and virtual things.
XR technologies come in many shapes and forms. Virtual reality can incorporate nonimmersive spaces such as surrounding LCD panels where only some of a user’s senses are stimulated; semi-immersive spaces, like a flight simulator, that combine physical and virtual elements in a room; and fully immersive simulations that block out the physical world. Not surprisingly, the latter produces a far more realistic and engaging experience—but it also requires sophisticated hardware and software to produce a high-resolution sensory experience. Immersive VR typically includes a head-mounted display and other input and output devices, such as haptic gloves.
One way to think about extended reality is to consider VR an immersive experience and AR a complementary experience. When elements of VR and AR overlap with the physical world, the result is MR. Apps such as Snapchat and Facebook—and gaming platforms like the PlayStation and Xbox—already use virtual, augmented, and mixed reality. In the coming years, XR will expand digital interaction further. These technologies will transform static 2D environments displayed on screens into realistic and sometimes lifelike 3D representations. Moreover, XR technologies will intersect with other digital tools to produce entirely new features, capabilities, and environments.
Extended reality will profoundly change the way we connect to others in the world around us. Films such as Tron, The Lawnmower Man, Minority Report, The Matrix, and Iron Man showcased what’s possible—or at least imaginable. In the future, various forms of extended reality will deliver us to places that no 2D screen, however sophisticated, can display. As Marc Carrel-Billiard, global senior managing director of Accenture Labs, puts it: “The human brain ... is wired to grasp events in 3D. Extended reality further bridges the gap between humans and computers.”
The path to modern AR and VR has taken plenty of twists and turns. In the 1780s, Irish-born painter Robert Barker began experimenting with the idea of creating a more immersive experience. The Leicester Square panorama, the first grand display of the concept, opened in London in 1793. It featured a 10,000-square-foot panorama and a smaller 2,700-square-foot panorama within a large viewing structure.7 Reviewers hailed it as a completely new and revolutionary idea. By the early part of the nineteenth century, artists began creating elaborate 360-degree panoramas that produced a more realistic “virtual” sensory experience for a variety of scenes, including battles, landscapes, and famous landmarks.
In 1822, two French artists, Louis Daguerre and Charles Marie Bouton, introduced a new wrinkle: the diorama.8 Their original creations used material painted on both sides of a screen or a backdrop. When illumination changed from front to back or side, the scene would appear differently. For example, a daytime scene would become nighttime, or a train would appear on a track and look as though it had crashed. Dioramas are still used today in museums to depict natural scenes. For example, a backdrop might depict the Serengeti in Tanzania with a lion and a zebra along with realistic-looking plants in the foreground. The environment blends together to create a 3D illusion of actually being there.
Figure 1.2 Cross-section of the Rotunda in Leicester Square, one of the first and most elaborate panoramas. Source: Robert Mitchell, via Wikimedia Commons.
In 1932, Aldous Huxley, in the novel Brave New World, introduced the idea of movies augmented with physical sensation, known as “feelies.” By 1935, science fiction author Stanley G. Weinbaum framed the concept in a more tangible and modern way. In a story titled “Pygmalion’s Spectacles,”9 he served up the notion of a person wearing goggles that could depict a fictional world. The lead character in the story, Dan Burke, encountered an inventor named Albert Ludwig, who had engineered “magic spectacles” capable of producing a realistic virtual experience, including images, smell, taste, and touch.
“Pygmalion’s Spectacles” may have been the world’s first digital love story. It opens with Burke uttering the line: “But what is reality?” Ludwig replies: “All is dream, all is illusion; I am your vision as you are mine.” Ludwig then offers Burke a movie experience that transcends sight and sound. “Suppose now I add taste, smell, even touch, if your interest is taken by the story. Suppose I make it so that you are in the story, you speak to the shadows, and the shadows reply, and instead of being on a screen, the story is all about you, and you are in it. Would that be to make real a dream?”
These imaginary spectacles produced a complete sensory experience, including sight, sound, smell, and taste. With a person fully engaged, the “mind supplies” the sensation of touch, Ludwig explained. Weinbaum described the spectacles as “a device vaguely reminiscent of a gas mask. There were goggles and a rubber mouthpiece.” Later in the story, Burke winds up in an imaginary world filled with a forest and a beautiful woman named Galatea. However, the eerie and elusive world disappears like a dream and leaves him longing for the imaginary woman. “He saw finally the implication of the name Galatea ... given life by Venus in the ancient Grecian myth. But his Galatea, warm and lovely and vital, must remain forever without the gift of life, since he was neither Pygmalion nor God.”
While authors like Weinbaum conjured up wild ideas about how a virtual world might look and feel, inventors were tinkering with electronic components that would serve as the humble origins of today’s AR and VR. In 1929, Edward Link introduced the Link Trainer, an early flight simulator.10 In 1945, Thelma McCollum patented the first stereoscopic television.11 Then, on August 28, 1962, Morton Heilig, a philosopher, filmmaker, and inventor, introduced a device called the Sensorama Simulator, which he described as an “experience theater.” It transformed the concept of a virtual world into something that could be produced by an actual machine (see figure 1.3).12
Figure 1.3 The Sensorama represented the first attempt to create a multisensory virtual-reality environment. Source: Wikimedia Commons.
Heilig had actually invented the Sensorama Motion Picture Projector and the 3D Motion Picture Camera in 1957.13 But in 1960 he added a component that would transform disparate technologies into a working system. The Telesphere Mask, a head-mounted display, produced stereoscopic images, wide vision, and stereophonic sound (see figure 1.4). Four years later, Heilig submitted a series of drawings and notes to the US Patent Office. Essentially, a person would sit on a chair with his or her head extending into a surrounding apparatus that produced the virtual environment. In addition to presenting 3D visual images projected onto the “hood,” the Sensorama would deliver moving air currents, various smells, binaural sound, and different types of vibrations, jolts, and other movements. Heilig created five short films for the Sensorama, which appeared as moving 3D images.
Figure 1.4 An early head-mounted display invented by Morton Heilig. The patent for the Telesphere Mask was filed in 1957. Source: Wikimedia Commons.
The Sensorama was a bold but unproven idea. Heilig succeeded in stepping beyond previous efforts to use multiple projectors and systems that filled only about 30 to 40 percent of a person’s visual field. “The present invention [aims to] stimulate the senses of an individual to simulate an actual experience realistically,” Heilig wrote in the original patent filing. “There are increasing demands for ways and means to teach and train individuals without actually subjecting individuals to possible hazards of particular situations.” The overall goal, he noted, was to “provide an apparatus to stimulate a desired experience by developing sensations in a plurality of senses.”
Then, in 1961, the first motion-tracking head-mounted display (HMD) appeared. Philco Corporation, a manufacturer of electronics and televisions, began exploring the idea of a helmet that would use a remote-controlled closed-circuit video system to display lifelike images. The system, called Philco Headsight,14 tracked the wearer’s head movements and adjusted the display accordingly. Soon, other companies, including Bell Helicopter Company, began exploring the use of a head-mounted display along with an infrared camera to produce night vision. The goal of this AR tool was to help military pilots land aircraft in challenging conditions.
During the same period, the field of computer graphics was beginning to take shape. In 1962, Ivan Edward Sutherland, while at MIT, developed a software program called Sketchpad, also referred to as the Robot Draftsman.15 It introduced the world’s first graphical user interface (GUI), which ran on a CRT display and used a light pen and control board. The technology later appeared in personal computers—and spawned computer-aided design (CAD). Sketchpad’s object-centric technology and 3D computer modeling allowed designers and artists to render convincing representations of real things. Although it would be years before researchers would connect the dots between Sketchpad and XR, Sutherland’s invention was nothing less than groundbreaking.
His contributions didn’t stop there. In 1965, while an associate professor at Harvard University, Sutherland penned a seminal essay about augmented and virtual realities. It essentially served as the foundation for everything that would follow. He wrote:
We live in a physical world whose properties we have come to know well through long familiarity. We sense an involvement with this physical world which gives us the ability to predict its properties well. For example, we can predict where objects will fall, how well-known shapes look from other angles, and how much force is required to push objects against friction.
The ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming such a display could literally be the Wonderland into which Alice walked.16
Then, in 1968, extended reality took a giant leap forward. Sutherland, along with a student researcher, Bob Sproull, invented a device called the Sword of Damocles (figure 1.5).17 The head-mounted display connected to a device suspended from the ceiling to stream computer-generated graphics to special stereoscopic glasses. The system could track a user’s head and eye movements and apply specialized software to optimally adjust the images. Although the system did not provide a high level of integration between different components, it served as a proof point for future HMDs and goggles.
Morton Heilig was back at it again as the decade wound down. In 1969, he patented the Experience Theater,18 a more sophisticated version of the Sensorama Simulator. It consisted of a motion picture theater with a large semispherical screen displaying 3D motion pictures along with speakers embedded in each chair. The system included peripheral imagery, directional sound, aromas, wind, temperature variations, and seats that tilted. “The invention relates to an improved form of motion picture or television entertainment in which the spectator is enabled, through substantially all of his senses, to experience realistically the full effect or illusion of being a part of or physically responding to the environment depicted in the motion picture,” he wrote in United States Patent 3,469,837, granted on September 30, 1969.19
Figure 1.5 The Sword of Damocles, invented by Ivan Sutherland, advanced the concept of a head-mounted display. Source: Wikimedia Commons.
Various researchers continued to explore and advance the technology. In 1981, Steve Mann, then a high-school student, placed an 8-bit 6502 microprocessor—the same chip used in the Apple II personal computer—into a backpack and added photographic gear, including a head-mounted camera. This created a wearable computer that not only captured images of the physical environment but also superimposed computer-generated images onto the scene. A device called the EyeTap allowed a user to keep one eye on the physical environment while the other viewed the virtual environment. Mann later became a key member of the Wearable Computing Group at the MIT Media Lab.
Over the next decade, the technologies surrounding XR advanced considerably. Researchers continued to develop more advanced digital technology that led to more sophisticated AR and VR systems and subsystems. Headsets began shrinking and morphing into goggles and eyeglasses, and designers and engineers began integrating an array of components into AR and VR systems. These included buttons, touchpads, speech recognition, gesture recognition, and other controls, including eye-tracking and brain–computer interfaces. In 1990, Boeing researcher Tom Caudell coined the term “augmented reality” to describe a specialized display that blends virtual graphics and physical reality.
By the end of the decade, augmented reality had made its debut on television. In 1998, a Sportvision broadcast of a National Football League game included a virtual yellow first-down marker known as 1st and Ten. Two years later, Hirokazu Kato, a researcher at the Nara Institute of Science and Technology in Japan, introduced ARToolKit, which uses video tracking to overlay 3D computer graphics onto live video from a camera. This open-source system is still used today, including in web browsers. By the next decade, AR also began to appear in vehicles: sports cars and higher-end luxury vehicles included systems that projected the vehicle’s speed onto the windshield, so the driver didn’t have to look away from the road. Today, AR technology appears in numerous products, from toys to cameras and smartphones to industrial machinery.
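The marker-tracking loop that ARToolKit popularized—find a known fiducial in each video frame, estimate the camera’s pose relative to it, then render graphics locked to that pose—can be sketched in a few lines. The example below is not ARToolKit’s own C API; it uses OpenCV’s analogous ArUco module, the camera intrinsics are placeholder values, and the exact aruco calls vary somewhat across OpenCV versions:

```python
# Sketch of marker-based AR in the spirit of ARToolKit, using OpenCV's ArUco
# module (an analogous tool, not ARToolKit itself). A real app would obtain
# the camera intrinsics from calibration rather than these placeholders.
import cv2
import numpy as np

camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist_coeffs = np.zeros(5)
marker_len = 0.05  # marker side length, in meters

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
# 3D corner positions of a square marker centered at the origin (TL, TR, BR, BL).
obj_pts = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float32) * (marker_len / 2)

cap = cv2.VideoCapture(0)
while True:
    grabbed, frame = cap.read()
    if not grabbed:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None:
        for c in corners:
            # Estimate the marker's pose relative to the camera...
            found, rvec, tvec = cv2.solvePnP(obj_pts, c.reshape(4, 2),
                                             camera_matrix, dist_coeffs)
            # ...then draw virtual axes "attached" to the physical marker.
            if found:
                cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs,
                                  rvec, tvec, marker_len)
    cv2.imshow("AR overlay", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```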
The term “virtual reality” also entered the common vernacular around this time. Jaron Lanier, a computer scientist who had worked at the gaming company Atari, began promoting the concept in 1987. His startup firm, VPL Research, produced VR components that together represented the first commercially available VR product line. It included gloves, audio systems, head-mounted displays, and real-time 3D rendering. Lanier also created a visual programming language used to control the various components and combine them into a more complete VR experience. About the same time, computer artist Myron Krueger began experimenting with systems that combined video and audio projection in a personal space, and the input devices and interfaces developed earlier by Douglas Engelbart, best known as the inventor of the computer mouse, served as a starting point for many of today’s AR and VR systems.
The first truly immersive virtual space was introduced by a team of researchers at the Electronic Visualization Laboratory (EVL)—a cross-functional research lab at the University of Illinois at Chicago. In 1992, Carolina Cruz-Neira, Daniel J. Sandin, and Thomas A. DeFanti demonstrated the Cave Automatic Virtual Environment (CAVE).20 It delivered a far more realistic VR experience in a holodeck-like room that allowed individuals to view their own bodies within the virtual scene. The system combined rear-projection screens for the walls with a downward projection onto the floor to create the illusion of reality within the physical space. A person standing in the CAVE wearing 3D glasses could view objects floating and moving through the room.
CAVE aimed to solve a basic challenge: early HMDs were bulky and presented significant practical limitations, particularly for scientific and engineering applications. The first generation of the technology incorporated electromagnetic sensors to track motion; later versions tapped infrared technology. Motion-capture software pulls data from sensors embedded in the glasses and the room to track movements, and the video continually adjusts and adapts to the motion. The CAVE projection system keeps the glasses synchronized so that the person views the correct image in each eye. The room is also equipped with 3D sound emanating from dozens of speakers. The result is an immersive virtual space that allows a user to manipulate and manage objects in 3D.
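Keeping the imagery correct as the viewer moves comes down to a well-known piece of math: each wall is rendered with an off-axis (asymmetric) projection recomputed every frame from the tracked eye position. Here is a minimal sketch following Robert Kooima’s generalized perspective projection; the function name and coordinate conventions are our own:

```python
# Head-tracked, off-axis projection for one CAVE wall. Assumes a tracker
# supplies the eye position in the same coordinates as the screen corners.
import numpy as np

def cave_wall_projection(pa, pb, pc, pe, near, far):
    """pa, pb, pc: lower-left, lower-right, upper-left corners of the wall.
    pe: tracked eye position. Returns a 4x4 OpenGL-style projection matrix."""
    vr = pb - pa; vr /= np.linalg.norm(vr)           # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)           # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal, toward viewer

    va, vb, vc = pa - pe, pb - pe, pc - pe           # corners relative to the eye
    d = -va.dot(vn)                                  # eye-to-screen distance

    # Frustum extents on the near plane, skewed by the eye's off-center position.
    l = vr.dot(va) * near / d
    r = vr.dot(vb) * near / d
    b = vu.dot(va) * near / d
    t = vu.dot(vc) * near / d

    P = np.array([
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])
    # Rotate the world into screen-aligned space, then move the eye to the origin.
    M = np.eye(4); M[:3, :3] = np.stack([vr, vu, vn])
    T = np.eye(4); T[:3, 3] = -pe
    return P @ M @ T

# One stereo frame means calling this twice per wall, with eye positions offset
# by half the interpupillary distance on either side of the tracked head.
```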
Early versions of CAVE were significant because they moved XR far closer to today’s portable and mobile systems. In fact, the CAVE concept caught on quickly. In 1994, the National Center for Supercomputing Applications (NCSA) developed a second-generation CAVE system so that researchers could explore the use of virtual reality in various fields, including architecture, education, engineering, gaming, mathematics, and information visualization. Using CAVE, an automotive designer could study the interior of a prototype vehicle and gain insight into how and where to position controls. An engineer might view the interior of a high-rise building before it is built, and a scientist might peer inside molecules or biological systems.
Today, many universities and private companies—from design and engineering firms to pharmaceutical makers—operate CAVE systems. These spaces are equipped with high-definition projection systems that use state-of-the-art graphics to create lifelike effects. They also incorporate 5.1 surround sound, tracking sensors in the walls, and haptic interaction to deliver instant feedback. Because CAVE tracks head, eye, and body movements, a user can wave a wand to control virtual objects and move them around at will. It also means that a surgeon learning a new procedure will know instantly if he or she makes an incorrect incision. Over the years, CAVE has evolved into an entire platform with different cubes and configurations to suit different needs and purposes. A commercial offshoot of the project, Visbox, Inc., offers 12 basic configurations, as well as the ability to completely customize the design of the space.21
One of the drivers of virtual reality and augmented reality was the US military. The goal of enhancing weapons and improving training is as old as war itself. As the 1960s unfolded, Bell Helicopter (now Textron Inc.) began experimenting with head-mounted 3D displays. In 1982, Thomas A. Furness III, who designed cockpits and instrumentation for the US Air Force, turned his attention to designing virtual-reality and augmented-reality interfaces that could be used in training systems. The Visually Coupled Airborne Systems Simulator overlaid data, graphics, maps, infrared, and radar imagery in a virtual space within a head-mounted display that later became known as the Darth Vader helmet. The device included voice-actuated controls along with sensors that allowed the pilot to control the aircraft through voice commands, gestures, and eye movements.
Later, Furness designed a “super cockpit” that generated higher-resolution graphics. In the UK, researchers embarked on similar projects. By 2012, the US Army had introduced the world’s first immersive virtual training simulator, the Dismounted Soldier Training System (DSTS).22 The goal wasn’t only to improve training. DSTS generated cost savings by reducing the need to shuffle troops around for training exercises. “[The Dismounted Soldier Training System] puts Soldiers in that environment,” stated Col. Jay Peterson, assistant commandant of the Infantry School for the US Army, in an August 2012 news release.23 “They look into it and all of a sudden, they’re in a village. There [are] civilians moving on the battlefield, and there [are] IEDs and vehicles moving. ... If utilized right, you can put a squad in that environment every day and give them one more twist,” he stated.
Over the last two decades, branches of the US military—along with the Defense Advanced Research Projects Agency (DARPA)—have devoted considerable attention to the research and development of AR and VR technologies. This has included head-mounted devices, more advanced control systems, and wearable components. During the 1940s and 1950s, considerable effort went into creating realistic flight simulators to train both military and commercial pilots. In 1954, the modern era of simulators arrived when United Airlines spent $3 million for four systems. Today, the technology is mainstream, allowing pilots to practice and perfect procedures without risking a multimillion-dollar aircraft. More advanced simulators also train astronauts for space missions. Virtual reality has allowed the experience to become increasingly realistic.
Yet the technology has steadily evolved beyond flight and training simulators to become a core component in ships, armored vehicles, and other systems. For instance, the Battlefield Augmented Reality System (BARS), funded by the US Office of Naval Research, recognizes landmarks and aids in identifying out-of-sight team members so that troops can remain coordinated and avoid unintentional shootings.24 BARS ties into an information database in real time and can be updated on the fly.
It should come as no surprise that gaming drives many of the advances in computing and digital technology. What’s more, it monetizes concepts and propels them into the business world. Accordingly, computer and video games featuring XR raced forward during the 1990s and 2000s. In 1991, Sega developed the Sega VR headset. It consisted of two small LCD screens and stereo headphones built into a head-mounted display. The HMD tracked eye and head movements. It could be used with games involving battle, racing, and flight simulation, among others.25 However, the device was never released commercially because, according to then-CEO Tom Kalinske, it caused users to suffer from motion sickness and severe headaches. There were also concerns about injuries and repetitive use problems.
The first networked multiplayer VR system, Virtuality, also appeared in 1991. The technology, designed for video arcades, cost upward of $70,000. It was remarkable because it also introduced the idea of real-time interaction: players could compete in the same space with near-zero latency. The project was the brainchild of Jonathon Waldern, who served as managing director for Virtuality. “It was a concept to us that was perfectly clear but to others we went to for financing it was crazy. They just couldn’t imagine it,” he stated in a later interview.26 Nevertheless, Virtuality enjoyed modest popularity, and companies such as British Telecom purchased systems in order to experiment with telepresence and virtual reality.
By the 1990s, Atari, Nintendo, Sega, and other gaming and entertainment companies had begun experimenting in earnest with virtual reality. The film The Lawnmower Man introduced the concept of virtual reality to the masses. In the movie, a young Pierce Brosnan plays a scientist who uses virtual-reality therapy to treat a mentally disabled patient. The film was loosely based on a short story by Stephen King, and its scientist was reportedly inspired by VR pioneer Jaron Lanier. As the century drew to a close, another landmark film hit movie theaters: The Matrix. It featured people living in a dystopian virtual world. The film was a blockbuster success and imprinted the idea of virtual worlds on society.
Gaming consoles featuring virtual reality also began to appear—and often quickly disappear. Nintendo’s Virtual Boy was released in Japan in July 1995 and the following month in the United States. The video-game console was the first to deliver stereoscopic 3D graphics using an HMD. A year later, however, Nintendo pulled the plug on the project for a number of reasons, including high development costs and low ratings from users. The console did not reproduce a realistic color range—hues were mostly red and black—and those using the console had to endure unacceptable latency. In the end, fewer than two dozen games were produced for the platform. It sold only about 770,000 units worldwide.
Undeterred, engineers continued to focus on developing a viable VR gaming platform. Aided by increasingly powerful graphics chips that produced quantum leaps in video, consoles such as the PlayStation 2 and 3, Xbox, and Wii began using haptic interfaces, goggles, and new types of controllers. Yet it wasn’t until 2010 that modern VR began to take shape. The Oculus Rift, with a compact HMD, introduced more realistic organic light-emitting diode (OLED) stereoscopic images and a 90-degree field of view. Over the years, the Oculus platform has continued to advance. In 2014, Facebook purchased Oculus, founded by Palmer Luckey, for $2 billion. The company has since built Oculus into a major commercial VR platform and continues to introduce more advanced hardware, including the Quest, which it touts as “the world’s first all-in-one gaming system built for virtual reality.” At the same time, other companies have streamed into the VR marketplace, including Sony with its Project Morpheus, a.k.a. PlayStation VR.
Producing an ultrarealistic and ultrauseful XR experience requires more than hardware, software, and sensors. It demands more than incredible graphics and creative ideas. It’s essential to tie together disparate technologies and coordinate devices and data points. For augmented reality, this means managing real-time data streams through mobile devices and the cloud—and applying big-data analytics and other tools with minimal latency. For virtual reality, it’s essential to design and build practical and lightweight systems that can be worn on the body. The Oculus Rift changed the VR equation by showcasing a lightweight platform that was both practical and viable.
The evolution of XR to smaller and more compact—yet still powerful—systems marches on. Over the last few years, wearable components and backpack systems have begun to emerge. Today hundreds of companies are developing and selling VR systems in different shapes and forms. Not surprisingly, prices continue to drop. Consider: the Oculus Rift, released in 2016, introduced a complete VR platform for $600. By 2018, a far more advanced Oculus Go sold for $199. Meanwhile, computer manufacturer HP has released a backpack system, the HP Z, that can be used not only as a conventional computer but also as a mobile virtual-reality platform. It weighs about 10.25 pounds and comes with hot-swappable batteries and a mixed-reality head-mounted display.
Augmented reality is also speeding forward. In recent years, automobile manufacturers have begun using AR for heads-up displays that show how fast the vehicle is moving. Google Glass, introduced in 2013, projected information onto the lens of a pair of glasses and incorporated natural-language commands, a touchpad, and internet connectivity so that a user could access a web page or view a weather forecast. The light-emitting diode (LED) display relied on a technique known as s-polarization to reflect light onto the lens. Google Glass also featured a built-in camera that could record events in the user’s field of vision. In April 2013, Google released a version that cost $1,500. Although groups ranging from doctors to journalists began using the glasses, Google halted production of the consumer prototypes in 2015 and refocused its efforts on developing AR glasses for the business world.
Despite technical limitations and privacy concerns—including capturing audio and video in places many would deem “private” and inappropriate, such as a workplace or locker room—Google Glass took AR beyond the smartphone and into the form factor of lightweight glasses. Of course, the idea of projecting images and data on lenses isn’t limited to Google. Dozens of other companies, among them Microsoft, Epson, and Garmin, have developed eyewear that puts AR clearly into view. Moreover, AR is continuing to expand beyond conventional screens, lenses, and uses. For instance, Apple has developed an augmented-reality windshield that can display map directions and accommodate video chats in autonomous vehicles.27
Although AR systems may use holographic projections to generate images on an LCD or other glass screens, they’re steadily gaining other features, including audio, haptic, and laser interaction. What’s more, smartphone apps that incorporate AR are increasingly common. These apps have both business and consumer appeal. For instance, they allow technicians to view data and specs while repairing a machine. Consumers can see what a new sofa might look like in a family room or what a garden will look like with roses rather than hedges. AR apps can also ease travel and the stress of communicating in a foreign language. For example, Google Translate serves up real-time translations for signs, menus, and documents. Simply pointing the phone’s camera at the text creates a real-time overlay in the desired language.
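Under the hood, a camera-translation feature of this kind is an OCR-translate-redraw loop run on every frame. Here is a minimal sketch using the pytesseract OCR wrapper; the translate() helper and its toy lexicon are hypothetical stand-ins for whatever translation service an app would actually call:

```python
# Sketch of the OCR -> translate -> overlay loop behind camera translation
# features. pytesseract performs the OCR; translate() is a hypothetical
# placeholder for a real translation API.
import cv2
import pytesseract

LEXICON = {"hola": "hello", "mundo": "world"}  # toy dictionary for illustration

def translate(word):
    """Hypothetical stand-in: look the word up, or pass it through unchanged."""
    return LEXICON.get(word.lower(), word)

def overlay_translation(frame):
    data = pytesseract.image_to_data(frame, output_type=pytesseract.Output.DICT)
    for i, word in enumerate(data["text"]):
        if not word.strip():
            continue  # skip empty OCR boxes
        x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
        # Paint over the original word, then draw its translation in place.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), -1)
        cv2.putText(frame, translate(word), (x, y + h),
                    cv2.FONT_HERSHEY_SIMPLEX, h / 30.0, (0, 0, 0), 2)
    return frame
```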
Some industry observers have deemed AR the new personal assistant, in much the way Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and Google Assistant have redefined the way people interact with digital devices. At the center of this thinking is a basic but profound concept: the right combination of tools and technologies can result in dramatically fewer steps and far better results. They can also add new features and capabilities that weren’t possible in the past. The fact that upward of 2.6 billion smartphones exist worldwide makes the technology even more appealing. Suddenly, it’s possible to use AR anywhere, anytime.
In 2017, Apple raised the stakes by incorporating an AR development platform, ARKit, into its iPhone X (Google later introduced its own kit, ARCore, for Android phones). One interesting and novel feature that Apple introduced was animated characters known as Animojis. The iPhone X uses facial recognition and a camera technology called TrueDepth to capture a person’s image and embed it into the animated creature. By projecting 30,000 dots of infrared light to map the features of a face, the system can mimic the user’s expressions and motions in the Animoji, including smiles, smirks, frowns, laughs, raised eyebrows, and other gestures.
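Face-driven characters of this kind generally rely on blendshape animation: the tracker reports a weight for each expression (ARKit, for instance, exposes coefficients with names such as jawOpen and browInnerUp), and the renderer mixes precomputed per-expression mesh offsets by those weights. Apple’s exact pipeline isn’t public, so this is only a minimal sketch of the technique, with toy data:

```python
# Blendshape mixing: final mesh = neutral mesh + sum of weighted offsets.
import numpy as np

neutral = np.zeros((3, 3))  # toy mesh: 3 vertices, xyz (a real face has thousands)
deltas = {                  # per-expression vertex offsets from the neutral mesh
    "jawOpen":     np.array([[0, -1, 0], [0, 0, 0], [0, 0, 0]], float),
    "browInnerUp": np.array([[0, 0, 0], [0, 0.5, 0], [0, 0, 0]], float),
}

def blend(weights):
    """Mix expression offsets into the neutral mesh: v = n + sum(w_i * d_i)."""
    mesh = neutral.copy()
    for name, w in weights.items():
        mesh += w * deltas[name]
    return mesh

# Each camera frame, the face tracker would supply fresh weights in [0, 1]:
print(blend({"jawOpen": 0.8, "browInnerUp": 0.2}))
```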
The appeal of AR and VR is indisputable. Humans live in a 3D world that serves up sights, sounds, and other sensations on a 24 × 7 basis. It’s how we define and experience our world. Although 2D television screens and computer displays have advanced remarkably, they deliver an experience that is nothing like the physical world. They cannot duplicate or recreate the physiology of vision, movement, sound, and touch. Extended reality, on the other hand, creates a more complete and immersive sensory experience that allows us to extend our consciousness into new realms and explore new places and things.
Jason Welsh, managing director for Accenture’s Extended Reality Group in North America, believes that consumers and businesses are ready to embrace these new environments. “Over the next 10 years we will see an enormous shift in social behaviors,” he says. Gaming will take on new dimensions, and activities such as watching movies, experiencing sports and concerts, and traveling will become highly immersive. Virtual business meetings will assemble people from all over the world, and businesses will use sensor and feedback data from AR and VR to learn about customer behavior in deeper and broader ways. These technologies could even shape the future of space travel. NASA has introduced Mars 2030, a virtual-reality simulation in which participants enter a virtual space and build communities on the red planet. The agency plans to use the data to plan actual missions.28
Of course, additional technological advances are required to produce AR apps and VR systems that slide the dial to mass use. AR graphics continue to improve, but smartphone apps and goggles don’t always work as billed. There are other challenges, too. VR systems must become smaller and better integrated with the human body. Battery performance must improve. And persistent, ubiquitous network connections still aren’t available everywhere. Further advances in microchips, software, and network designs are necessary to take XR performance to the next level.
Nevertheless, extended reality is taking shape—and reshaping society. London College of Communication began offering a master of arts degree in VR in the 2018–2019 academic year.29 Another university, L’École de design Nantes-Atlantique in France, introduced a master’s degree in virtual reality in 2010. Jaron Lanier, in a November 2017 Wired magazine story, offered a compelling perspective.30 “Virtual reality is a future trajectory where people get better and better at communicating more and more things in more fantastic and aesthetic ways.” Although extended reality is often considered a magical thing and a pale imitation of the physical world, Lanier views things differently. “Virtual reality—which is sold as an illusion—is real for what it is.”