To get to the virtual world – to view it, to interact with it – we always require a portal to this parallel dimension. For Level 1 sensory stimulation (see Chapter 13), that portal can use display technology – screens, holographic glass, projections. In more advanced forms of MR and VR, these come as eyewear such as glasses or head-mounted displays (HMDs – devices worn on the forehead that look like big glasses, boxes harnessed to the head, or helmets). Smartphone screens have been used in simpler VR and AR applications. A next-level design we’ll see soon involves contact lenses we can use to discreetly view virtual elements within our environment.
Portals have evolved over the years to connect us with these realms of the imagination. Attempts to make us feel like we’ve been transported to another world date far into the past. The word ‘panorama’ was coined all the way back in 1792 by Irish painter Robert Barker, describing 360 degree mural paintings that would fill the viewer’s vision and make them feel like they were standing in urban locations – cities and later nature scenes, famous military battles and historical events. Stereoscopes turning 2D pictures into 3D images appeared as far back as 1838, when Charles Wheatstone exploited humans’ binocular vision to provide 3D perception. He showcased his stereoscope using two drawings of the same scene from slightly shifted perspectives (the daguerreotype, the first practical photographic process, was still a year away). In the 1930s, a science fiction story by Stanley G. Weinbaum called ‘Pygmalion’s Spectacles’ described a pair of goggles that gave the wearer an experience of a fictional world through holographics, touch, smell and taste. The mid-1950s saw filmmaker Morton Heilig create an arcade-style machine called the Sensorama that nearly engulfed the user for a multi-sensory experience. He further finessed this idea into a smaller design, inventing the first HMD, called the Telesphere Mask, patented in 1960. This didn’t have interactivity or motion tracking to allow the person to look around, but the following year the Headsight was developed by two engineers, Comeau and Bryan, featuring a video screen for each eye and magnetic motion tracking. This was a very early precursor to the devices worn for VR today.
A range of other devices and designs followed over the next few decades. Along this line of thinking, 1982 saw the release of Tron, a visually stunning film for its time, in which a computer programmer ends up trapped inside a digital software world, trying to survive and escape this neon, foreign dimension where programs appear as people. A boom of interest and investment in VR came about in the late 1980s. Jaron Lanier, founder of VPL Research, coined the term ‘virtual reality’ in 1987, and we started seeing large arcade devices made for public access in the early 1990s. The sci-fi thriller The Lawnmower Man, released in 1992 and in part based on Jaron Lanier, introduced VR – albeit through a pretty freaky story – to a general audience. Gaming brands including Sega and Nintendo soon released consumer VR headsets, but these completely flopped: the headsets were cumbersome and the graphics clunky.
I remember Dad letting me try a few early VR systems – headsets and 3D glasses. One of these, CrystalEyes by the company StereoGraphics, allowed you to view the computer screen in 3D. The company was founded by Lenny Lipton, who at the age of nineteen had written a poem that was adapted into the lyrics for ‘Puff the Magic Dragon’. I loved all these pieces of technology, despite many giving me headaches due to the jittery refresh rates. The time just wasn’t right. The technology died out while the world focused on the rise of the internet and the dotcom boom. Imaginations had been sparked nonetheless. The reality–virtuality continuum concept was first introduced by engineering professor Paul Milgram in 1994. And then The Matrix by the Wachowskis hit cinemas in 1999. What a visionary film it was too, bringing together ideas of AI rising up, battles between humans and machines, and the creation of a virtual world to imprison our minds – the Matrix. It took us into the brain-twisting idea that we could be living in a simulated reality and not even know it.
I definitely felt VR would one day return and was looking forward to it for a range of reasons. In senior high school (2001) I became instantly hooked when playing the game Halo: Combat Evolved on Microsoft’s Xbox console. Set in the twenty-sixth century, Halo is a first-person shooter in which your character is a cybernetically enhanced super-soldier called Master Chief. You are assisted by an AI called Cortana, and the relationship between you is a crucial part of the Halo story, as she effectively becomes your guide and assistant throughout the journey. Cortana would go on to become the inspiration for Microsoft’s intelligent personal assistant of the same name, first demonstrated in San Francisco in 2014.
While exploring and uncovering the secrets of the ring-shaped artificial world, you battle waves of various alien species. You also do a lot of running, ducking, dodging and jumping while wielding a wide range of weapons . . . well, that is, your character Master Chief does. In stark contrast, you as the player only really give your fingers a solid workout on the controller. This is what made me pretty obsessed with when VR would one day make a return and what might be possible in the next wave. I loved daydreaming of the time when we could go into a game, run around and be completely immersed, mentally and physically. In first year university, I started looking into how my engineering studies might help me, down the track, create devices like an ‘omnidirectional treadmill’, a term that only became known to me years later. The idea was a treadmill that allows you to run while immersed in a game – not just in one direction like a conventional treadmill, but in any direction! I thought I’d get to really experience all that exercise my Master Chief character was getting. He would run up and down hills, so I also wanted this treadmill to change in conjunction with the terrain. This would be a huge engineering challenge incorporating some robotics principles. I started planning for the prototype to form my undergraduate thesis. But then the diving accident happened and my trajectory changed for the rest of my degree.
At the time I started viewing other games and movies through a new lens. One of my favourite book sets is The Lord of the Rings by J.R.R. Tolkien and I loved the movies too. After watching the Battle of Helm’s Deep in the second film, The Two Towers, I wanted to be immersed in the battle. The film captures the feeling of the soldiers waiting to defend the fortress as Saruman’s army of Uruk-hai (big beasty orc creatures) closes in. I dreamt of fighting the enemy as Aragorn, feeling the weight of the sword in my hands and being immersed in the action and atmosphere.
I wondered if this would one day be possible.
To find yourself in the shoes of fictional characters.
To see the world through their eyes.
To be them.
Some years later, when I was working on my mind-controlled smart wheelchair, TIM, I was going through many frustrations with the two camera systems (stereoscopic and spherical vision) attached to the wheelchair. Whenever I connected them both to the tablet PC at the same time, it would crash and give me the Blue Screen of Death (the full-screen blue error message Windows displays when the system crashes). This was so frustrating and required persistence and problem-solving. One night, before heading to bed after a long day of failing to solve this problem, I was hit with a vision: a ball of cameras facing outwards, which, if it existed, would solve all my problems and actually make designing the vision system for the wheelchair much easier. It would be able to see a full 360 degrees in 3D. As I drew up my plans and researched the types of cameras I would build into this camera ball, a new revelation hit me: if these cameras were possible, then entire environments – like rooms in houses – would soon be able to be filmed, creating 3D representations for VR.
I instantly put together a pitch in 2008 and, not really thinking like an entrepreneur back then, tried to give the idea away to those who had time and funding and business acumen. I told them that this camera ball with stereo overlap in each direction could be created to produce 360 degree 3D vision. We could film many places around the world for the day VR would return. When asked when this would occur, my consistent answer was, ‘I don’t know. Sometime soon.’
As it turned out, the technology required to bring back VR came packaged in our smartphones. A huge set of innovations brought a range of capabilities to these devices in our pockets, particularly a high-resolution small screen, an incredibly powerful CPU (with millions of times more computing power than NASA’s Apollo 11 guidance computer – truly amazing for its time – used to put man on the moon in 1969) and an inertial measurement unit that tracks the angle and movement of the phone. With these features combined, you take a phone, stick it in a headset with a few lenses so your eyes can focus on the screen, and what you see is a virtual world, with the view shifted slightly between your two eyes for 3D perception. You start facing one direction and, as you turn your head, the device recognises its own movement and pans the images being displayed. This means you can look in any direction and it will show you what you’d be looking at if you were there. It becomes a neurological trick, providing an instant feeling of immersion. But this is the simplest form. Devices and features get much better from here.
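For the technically curious, the head-tracking trick can be sketched in a few lines of code. This is a minimal illustration only – the function names and the fixed 63 mm interpupillary distance are my own assumptions, not any particular headset’s software – showing how the IMU’s yaw and pitch angles steer a virtual camera’s view direction, and how two cameras offset either side of the head give each eye its slightly shifted image:

```python
import math

def view_direction(yaw_deg, pitch_deg):
    """Convert IMU yaw/pitch (degrees) into a 3D unit view vector.

    Yaw 0, pitch 0 looks straight ahead along +z; turning your head
    right increases yaw, tilting it up increases pitch.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def stereo_eye_positions(head_pos, yaw_deg, ipd=0.063):
    """Place two virtual cameras half an interpupillary distance
    (IPD, roughly 63 mm on average) to the left and right of the head,
    so each eye's rendered view is slightly shifted for 3D perception."""
    yaw = math.radians(yaw_deg)
    # The 'right' direction is the view direction rotated 90 degrees
    # about the vertical axis.
    rx, rz = math.cos(yaw), -math.sin(yaw)
    x, y, z = head_pos
    half = ipd / 2
    left = (x - rx * half, y, z - rz * half)
    right = (x + rx * half, y, z + rz * half)
    return left, right
```

Each video frame, the headset software reads the IMU, recomputes the view direction and eye positions, and renders the scene twice – once per eye – onto the two halves of the screen.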
VR resurfaced in 2012, heralding a new wave of extended realities. And this time around, it was here to stay. It began when a new HMD called the Rift was launched on Kickstarter (a crowdfunding website where potential buyers can back products and help get them to market). It was produced by the company Oculus, led by young entrepreneur Palmer Luckey, and the campaign raised nearly US$2.5 million. Seeing the potential in it all, Mark Zuckerberg had Facebook acquire Oculus for around US$2 billion in 2014.
Of course, it didn’t stop there. Further devices continued to be released, and these portals paved the way for the emergence of devices for AR. Even though the concept of creating holograms and interacting with them had been around for decades, the technology was finally catching up. Many researchers over the years had attempted to develop various forms of hologram technology, inspired by the famous scene in the 1977 film Star Wars (later subtitled A New Hope) where Princess Leia appears as a hologram, projected from R2-D2, with a message pleading for help from Obi-Wan Kenobi. But holograms proved too expensive and difficult to produce effectively, so a simpler idea eventually emerged: if we can perceive holograms in our environment, that achieves the same vision. This is the basic idea of AR; we just need a technological portal through which to visualise digital holographic objects in the real world.
The first time AR truly spread around the globe was through the smartphone game Pokémon GO. A virtual game land was overlaid onto the real world, turning real-world locations into places for catching Pokémon, obtaining game resources and even battling. The AR portal was your smartphone: the app would open the camera so that, when you viewed the real environment through the screen, the virtual world and its characters would appear in your vicinity. It changed human behaviour across the world. My brother Alex drove home from an evening shift at the hospital one night to find the roads to his house blocked by people – at 1.30 am! – because the park across the road featured three PokéStops in very close proximity. It was like a 24/7 rave, with people constantly there, picking up resources quickly without having to travel far. And when a lot of real people play the game in the one location, many more virtual Pokémon are drawn to the area – which in turn brings more people who just gotta catch ’em all!
With the release of various HMDs for interactive AR, and simpler glasses and even contact lenses on the way (let alone sensory-hacking levels beyond Level 1), the technology will increasingly infiltrate many areas of our lives. Cameras and other device sensors allow the real world to be mapped, intertwining the two realms and allowing each to effect change in the other. The ability to summon interactive holograms into the world around you is a powerful thing, and when the time comes (and the technology is simple and accessible enough), many will choose to use these devices instead of smartphones. They’ll be increasingly utilised in areas such as education, allowing teachers to bring up holographic material the class can see and interact with; hospitals and surgery, where surgeons will have charts, AI assistance and remote colleagues around them; collaborative projects and telepresence; entertainment and games; and general computing and productivity. They will also open up a range of virtual controls, connecting individuals to the smart devices in their homes and the wider world, even assisting people with disability to independently access computing and their connected environment. With many opportunities yet to be seen, devices that act as portals between real and virtual worlds are continuing to emerge . . . and are changing our reality forever.