For Valentine’s Day 2010, I (Joe) wanted to send cards to my two girls, Becca and Lizzie, who were both away at college. So I went down to the local Hallmark store to pick them out. There I found a little section of Disney cards and came across one highlighting the Disney princesses; I bought two copies, one to send to each of them.
The caption on the front of the card read, “You’re lovely like Aurora, you’re brave and kind like Belle, you’re Cinderella-sweet … you must be magical, as well!” And on the inside, it simply said, “Happy Valentine’s Day and Happy Ever After.” But here was the kicker: “Watch this card come to life on your computer!” the card exclaimed, explaining that all you had to do was go to www.hallmark.com/extra and follow its instructions.
I had to try it out before sending the cards off. After downloading and running a program particular to the card I had purchased, my own image came up on the computer screen via the webcam. I picked up the card and held its front toward the camera—and up popped a three-dimensional animation right on top of the image of the card on the screen! Cinderella herself began singing “Every Girl Can Be a Princess” as a scene from her eponymous movie played. I could still see my own image behind the card, and as I moved the card around, the scene changed perspective accordingly, so it could be viewed from the front, from the side, or even from the top. (Turning the card over or opening it up caused the program to lose the graphic image of the three princesses on the front, so the animation stopped, only to start over again when I faced the front of the card toward the cam once more.) I eventually discovered that if I rotated the card far enough, the Cinderella scene was replaced by one starring Belle of Beauty and the Beast, and rotating further still brought up Aurora, the star of Sleeping Beauty. I enjoyed seeing all it could do before signing the identical cards and sending them off to the two girls.
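The card’s trick rests on optical marker tracking: software finds the printed artwork in each webcam frame, estimates the card’s pose, and chooses which scene to render based on how far the card has rotated. Here is a toy sketch in Python of just that scene-selection step; the function name and angle thresholds are invented for illustration, and no claim is made about how Total Immersion’s software actually partitions the rotation.

```python
# Toy sketch of marker-driven scene selection, as on the Hallmark card.
# A real system would detect the printed marker in each webcam frame and
# estimate its 3D pose; here we assume the tracker already reports the
# card's rotation in degrees, or None when the marker is hidden (card
# turned over or opened).

def scene_for_card(rotation_degrees):
    """Map the card's detected rotation to the animation to overlay."""
    if rotation_degrees is None:
        return None  # marker lost: stop the animation
    angle = rotation_degrees % 360
    if angle < 120:
        return "Cinderella"
    if angle < 240:
        return "Belle"
    return "Aurora"

# Rotating the card far enough swaps in the next princess's scene;
# hiding the marker stops the show until the card reappears.
print(scene_for_card(30))    # Cinderella
print(scene_for_card(150))   # Belle
print(scene_for_card(None))  # None
```

The thresholds simply carve the circle into thirds, one per princess; a production tracker would also smooth the pose estimates between frames so the overlay does not jitter.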
You can try it out for yourself by going to a Hallmark Store to pick up a free sample, or printing one out at the site, and then following the instructions. When you do, you will notice that the Hallmark page welcomes you by saying, “Watch your cards come to life with Augmented Reality.” But it is not Augmented Reality. It is Augmented Virtuality.
Think about it. Recall our definition of Augmented Reality: using bits to augment our experience of the physical world, overlaying it with digital information. That is not what is going on here. My (and my daughters’) primary experience with the Disney princesses did not happen in Reality, but on the computer screen. The webcam-fueled production did not change my real-world experience in any way; rather, it was the card that changed my Virtuality experience. The physical card—a material piece of matter—altered what I experienced through the cam, on the monitor, in my mind.
That is the exact opposite of Augmented Reality! Rather than the digital enhancing the material, with Augmented Virtuality the material enhances the digital. As seen in Figure 7.1, places are formed in virtual space, events are enacted in autonomous time, and the augmentation is constructed of material substances. The essence of Augmented Virtuality is taking a Virtuality experience and flipping No-Matter to Matter, bits to atoms, by using some material substance to alter, enhance, control, or amend how we experience the virtual world.
You can see this even more clearly with other implementations from the French company Total Immersion, the leader in webcam-based Augmented Virtuality and creator of the Hallmark experience. To promote the movie Iron Man 2, it produced a digital marketing campaign in which people can again see themselves through the webcam, but this time with the helmet of Iron Man or another character, War Machine, on their head. As a person’s head moves, the helmet moves with it. People can also change the perspective to inside the helmet to view all the information presented to the character (which for that character was Augmented Reality while remaining Augmented Virtuality for the viewer, if you follow!). Each person’s head, in other words, becomes the physical trigger that augments the virtual experience.
Figure 7.1 Augmented Virtuality
Total Immersion also created the very first implementation we ever heard of: Topps baseball cards that audibly introduce each player (complete with cheers, as if at an actual game) and then let the collector manipulate the images to see the players hit, pitch, or field on the computer screen. Another developer, James Alliban of London, turned his own business card into an amazing promotional tool at his website, where placing the back of the card, with its graphic marker, in front of a webcam triggers a pixelated 3D image of his own head on top of the card, giving his bio and describing his capabilities. A video of it has been seen hundreds of thousands of times online—but of course it’s called “AR Business Card” and not “AV Business Card.”1
Words have meaning, and it is important to get this right. Much of the strength of the Multiverse lies in understanding the distinctions between realms. To avoid confusion, for example, Layar cofounder Maarten Lens-FitzGerald feels compelled to begin presentations by explaining the difference between proper Augmented Reality as Layar practices it and “webcam-based Augmented Reality” as some mistakenly call it. Discerning the distinctions catalyzes your explorations by giving you conceptual touchpoints to consider in creating customer value.
Other versions of Augmented Virtuality exist besides the webcam kind. In fact, the quintessential example of this realm remains the Wii from Nintendo.
It exploded onto the scene in 2006 (just in time for Christmas) and revolutionized gaming. Its controller, the Wii Remote, senses the direction it is pointed across all three spatial dimensions, along with how fast it is moving, so players can physically manipulate virtual objects. For the first time in any mass-market computer-based game, people could pry their fingertips from their controllers, get their butts off the couch, and put their bodies into the game. You not only manipulate an avatar to hit a baseball, tennis ball, or golf ball but physically are the one hitting the ball, so to speak; you may physically swing, but it’s still a virtual character that hits the virtual ball with a virtual bat. It remains a virtual experience, yet the Wii engages both your mind and your body as you physically maneuver the material remote. Nintendo’s system became even more realistic in 2009 when the company came out with the Wii MotionPlus, which added a gyroscope to detect the smallest gyrations of a player’s wrist, so a player can now, say, draw a golf shot or put topspin on a table tennis ball.
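Under the hood, a gyroscope like the MotionPlus’s reports angular velocity, not orientation; software must integrate successive samples over time to recover how far the wrist has actually turned. A minimal sketch of that integration step, with sample values invented for illustration:

```python
# Toy sketch of turning gyroscope readings into a wrist angle, the kind
# of processing a MotionPlus-style sensor requires. A gyroscope reports
# angular velocity (degrees per second); integrating those samples over
# time yields the accumulated rotation.

def integrate_gyro(samples, dt):
    """Accumulate angular-velocity samples (deg/s) taken dt seconds apart."""
    angle = 0.0
    for rate in samples:
        angle += rate * dt  # simple Euler integration
    return angle

# Ten samples at 100 Hz of a steady 90 deg/s wrist twist:
twist = integrate_gyro([90.0] * 10, dt=0.01)
print(round(twist, 6))  # 9.0 degrees of rotation over 0.1 s
```

Real motion controllers combine this gyroscope integration with accelerometer data to correct the drift that pure integration accumulates, but the core idea is the same.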
From the very beginning the Wii engaged not just hardcore teen gamers but the entire family, proving particularly popular among women and casual gamers. Even before the Wii, however, there was Guitar Hero (2005), a video game developed by Harmonix Music Systems of Cambridge, Massachusetts, in which players strum a guitar replica to the beat of popular tunes. Harmonix went on to develop more such instrument controllers, creating Rock Band with both lead and bass guitars plus drums and a microphone so groups of four can rock out, while Santa Monica-based Activision, which had acquired the Guitar Hero franchise, responded with DJ Hero, where players use facsimile turntables to spin their own music mixes. And even before Guitar Hero there was Dance Dance Revolution from Japan’s Konami, which started as an arcade game in 1998 and later moved in-home to console systems. In Dance Dance Revolution players step onto a dance platform or pad and time their dance steps to the music, hitting the spots indicated by patterns on the screen.
Most recently, Sony came out with a similar Augmented Virtuality offering: the Move, a PlayStation enhancement that incorporates a Wii Remote-like device topped with a glowing sphere that can easily be tracked by a camera called the PlayStation Eye, enabling much more precise control of PlayStation games than Nintendo’s Wii provides.
Augmenting a virtual experience with a physical object—the Matter of the [No-Time – No-Space – Matter] triumvirate that defines Augmented Virtuality—provides a greater bodily experience than Virtuality alone. The device extends the screen-based experience by drawing it back from the sensory extremities to involve the rest of the body. Further, the experience becomes so much more engaging and robust when the device becomes more sophisticated and intelligent than marked cards and papers placed in front of a webcam. And even though such a device uses digital technology, definitionally that’s OK because the experience remains fully one of bits-based Virtuality, with materiality added on top of it, just as Augmented Reality adds digitality atop atoms-based Reality.
Another great benefit ensues: the stimulation of physical activity within a virtual place. You can, in fact, work up quite a sweat playing with the Wii. That’s why Nintendo collaborates with the American Heart Association at activeplaynow.com “to promote physically active play as part of a healthy lifestyle.”2 And that’s why sports and fitness applications proliferate. In addition to such active games as boxing, Nintendo created the Wii Balance Board, which measures a person’s balance and body mass, to enable a number of different physical fitness pursuits through its Wii Fit and Wii Fit Plus titles, including yoga poses, fitness and aerobic exercises, and a number of balance-based activities.
Others have joined in on using virtual places to promote physical well-being. Respondesign’s Yourself!Fitness stars Maya, a virtual fitness coach who leads players through exercise routines tailored to the abilities and needs of each person. These routines, interestingly, use normal physical workout objects such as exercise balls, weights, and steps rather than an electronic controller like the Wii Remote—material substances enhancing the virtual experience all the same. And EA Sports, the Electronic Arts unit that is probably the premier producer of sports video games (including Madden NFL and Tiger Woods PGA Tour), collaborated with manufacturer Toy Island to produce a series of virtual coaching products, including Voice Command Pitching Machine, Voice Command Quarterback, and Sweet Spot Basketball, that “bring classic sports activities to life in a whole new way, utilizing infrared, motion and equilibrium sensors.”3
As the devices here become less digital and more material, the primary experience may slide from what happens virtually on the screen to what happens physically on the field. This would slip the experience along what we might call an “Augmentation Axis” from Augmented Virtuality to Augmented Reality. The very same technology could then shift commensurately from its primary value being its physicality enhancing the virtual experience to its digitality enhancing the physical experience. In every case, though, both aspects remain present, all the time. One can therefore understand the confusion of those who call Augmented Virtuality offerings Augmented Reality, and even vice versa. Still, do keep in mind the distinction as you explore the cosmos incogniti of augmentation, seeking value wherever it can best be found.
While the previous examples of Augmented Virtuality focus on the body, companies can also apply this realm to enhance the mind by increasing what we learn via the base Virtuality experience. Chicago-based VTech Electronics, for example, produces the Wii-like V.Smile Motion Active Learning System, where children bone up on their math, reading, science, and spelling while actively engaging with fun experiences on their TV. And on the webcam-based side of the realm, in London nonprofit Specialist Schools and Academies Trust teamed with James Alliban of Augmatic—he of the amazing business card—to create LearnAR, a set of applications that help teachers and students explore subjects such as math, science, physical education, and languages. In learning about the organs of the body, for example, students can hold a properly marked piece of paper over their own body and via a webcam see all the internal organs displayed on the computer screen in approximately the proper place.
On the intelligent-device side of the spectrum, Amsterdam-based Personal Space Technologies (PST) creates interactive workstations and monitors that let people see virtual objects three-dimensionally. On top of that Virtuality base it then adds plain physical objects that users can move around manually, causing the virtual objects on the screen to move commensurately. (In other contexts we would call them “hockey pucks,” although they generally look like big, soft, twelve-sided dice; the reflector dots on each side let a camera determine each object’s position and orientation, which the system then maps onto the corresponding virtual object.) The museumgoudA, in Gouda, The Netherlands, for example, digitized its municipal arts collection and now uses PST offerings to allow visitors to handle, hold, touch, and examine virtual art pieces without risking the slightest damage to the real ones. They simply pick up the physical object that stands in for a virtual artifact (which in turn represents the original work of art!) and freely manipulate it as they look at its image on the screen.
The company started out in health care, where it provides doctors with Personal Space Stations that enable them to view medical images such as X-rays, MRI results, and CAT scans. The doctors can then pick up the same sort of object to manipulate these images, rotating a scan to all sides, zooming in to get a better look, and so forth. The device can even turn its 3D imaging into 4D by displaying a time-lapse scan as it changes. (Note that if that capability were tied into a real-time scan as it happened, it would fuse Augmented Virtuality with Mirrored Virtuality.) Through it, doctors can effectively hold a beating heart in their hands.
Although all the examples so far result in visual action (with audio accompaniment), other lines of research and application expand the possibilities. The key technology: haptics, in which sensors and actuators connected to the body make virtual objects seem real through the application of forces, vibrations, or motion to create tactile sensations. Such “virtual touch” provides a means of information sharing beyond sights and sounds. Haptics has long been worked on in the laboratory as part of the Virtual Reality thrust; many researchers have created Virtuality environments where subjects interact with their surroundings while wearing body suits with actuators that “push back” to provide a sense of touch on various areas of the body. Commercialization, however, has come only in highly specialized fields such as avionics and medicine, where doctors use it in training simulations and increasingly for “teleoperations,” procedures in which the patient is remote from the doctor, who operates with local instruments (via haptic response) while a robot performs the actual operation on the patient.
There is, of course, one more arena of haptic commercialization: gaming. Dr. Mark Ombrellaro of Bellevue, Washington, founded TN Games as an offshoot of his medical information company, TouchNetworks, to market the 3RD Space Vest. Thanks to air pistons in the vest connected electronically to the game console, players who suit up when they shoot ’em up can actually feel it when they get shot up.4 Many other virtual touch innovations augment gaming experiences, from something as simple as a joystick controller shaking as your race car rubs against the wall of a virtual racetrack, to something as encompassing as game chairs that move three-dimensionally, in sync with whatever virtual vehicle—motorcycle, car, plane, spaceship—you control, while also responding tactilely to the virtual environment; you feel, for example, the surface of the road over which you are driving.
Fresh Green Light of Rye, New York, uses such driving simulators to teach teens without having to take unprepared drivers on the open road. Cofounder Laura Shuler told us, “Our ‘Apple Store meets Driver’s Ed’ approach is making training cool; one kid recently told me ‘these are sick’—the ultimate compliment!” The technology has become so realistic, in fact, that the Sideways Driving Club in Hong Kong counts among its clientele real race car drivers, who train using the Club’s gaming simulators. These simulators come complete with “a narrow fiberglass cockpit, realistic steering system and a calibrated brake pedal” that “mimic the feel of a genuine car, while a video screen shows the track and headphones provide realistic sound.”5 Many NASCAR drivers own their own “sim racing” units to prepare for upcoming races. Carl Edwards says he uses it “at tracks where I don’t feel real comfortable, specifically the road courses.”6
Much of Augmented Virtuality involves providing such special-built devices as these driving environments, the Guitar Hero guitar, the Wii Remote, the Hallmark card, or Personal Space Technologies’ “hockey puck” with its reflector dots. Pointing toward a different way of thinking about Augmented Virtuality, Paris-based Violet produces the Mir:ror, an elegant RFID reader that lets any object drive actions on your personal computer. It comes with a set of RFID (radio-frequency identification) tags that you place on any object; you then tell your computer what you want to occur whenever that tag is read. Suppose, for example, you have tagged your family’s keys: placing your car keys on the Mir:ror calls up the day’s weather on the Web, whereas placing your daughter’s house key on it sends you an e-mail at work telling you she has arrived home.
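Conceptually, the Mir:ror model reduces to a lookup table: each tag’s unique ID maps to whatever action the user registered for it. The sketch below illustrates that dispatch pattern in Python; the tag IDs, actions, and function names are invented for illustration and do not reflect Violet’s actual software.

```python
# Toy sketch of Mir:ror-style tag-to-action dispatch: each RFID tag ID
# maps to a user-chosen action. IDs and actions here are invented.

def make_dispatcher(actions):
    """Return a function that runs the action registered for a tag ID."""
    def on_tag_read(tag_id):
        action = actions.get(tag_id)
        return action() if action else "unrecognized tag"
    return on_tag_read

dispatch = make_dispatcher({
    "04:A2:19:7C": lambda: "fetching today's weather",   # car keys
    "04:88:E1:03": lambda: "e-mailing: she's home",      # house key
})

print(dispatch("04:A2:19:7C"))  # fetching today's weather
print(dispatch("FF:FF:FF:FF"))  # unrecognized tag
```

The point of the pattern is that the physical object carries no intelligence at all; every bit of meaning lives in the mapping the user configures on the computer.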
In a similar vein, Personal Space Technologies lets any object—a pen, a helmet, a tennis racket, even body parts—serve as the virtual manipulation device by placing tracking markers on it, which can then be “seen” by its camera-based Personal Space Tracker. Director Marc Lausberg says, “It’s better than the Wii. We can make anything into a 3D interaction device.”7 You can’t get more general-purpose than anything—an advance that comes from moving the technological capabilities from the hand to the camera and its connection to a computer (just as in the webcam-based experiences discussed earlier).
The first commercialized system with such a camera-based interface was of course in games: the EyeToy for Sony’s PlayStation 2 console. Released in 2003, its TV-mounted camera (which evolved into the Eye of the PlayStation Move) recognizes the player’s body so that it becomes the physical controller of the virtual experience. It lets your avatar lean, jump, duck, kick, fight, dodge, bat, or perform scores of other actions that mimic your own body movements. In an update to Dance Dance Revolution, for example, your hands have to move to the rhythm of the beat, not just your feet.8
Microsoft’s Kinect technology for its Xbox 360 takes this to a whole new level by adding a microphone for verbal commands, increased resolution, and, most importantly, a second camera, operating in the infrared spectrum, to sense depth. Through it the system can take an image of a player’s entire body to place it inside games, so instead of gross motor control of an avatar, Kinect provides quite fine control over the player’s actual image. After its release in November 2010 (just in time for Christmas!), three separate New York Times articles (one from a business reporter, another from a business columnist, and the last from a video-game reviewer) all described it breathlessly using the same imagery: It has an “almost magical technology for gesture and voice recognition”; “There’s a crazy, magical omigosh rush the first time you try the Kinect. It’s an experience you’ve never had before”; “Most of the time Kinect simply feels like magic.”9
After playing Xbox 360 Kinect games and experiencing this magic for myself, I (Kim) reflected on my recent Augmented Virtuality experiences—from the Hallmark card that Joe shared with me, to the Wii, to the Kinect. I noticed a progression of increasing value in the ways these technologies augment Virtuality to create ever more engaging experiences. A Hallmark Webcam Greetings card, for example—simply a device that launches and, to a limited degree, controls the way the virtual message plays out—offers little physical experience or interactive mental engagement. The card accomplishes its purpose of entertaining you, but that’s it.
With Wii Sports I play such virtual games as tennis, bowling, boxing, and baseball, with the handheld Wii Remote directly controlling my avatar. The level of physical and mental engagement increases as I strive to learn, by trial and error, how to play more effectively. Pursued diligently, the games spur a significant level of physical exercise that initially might go unnoticed but over time produces valuable health benefits. In my ongoing personal transformation quest to find entertaining and engaging physical activity, Wii Fit Plus took me to yet a higher level of engagement thanks to the Wii Balance Board. Standing and stepping on this input device provided a greater degree of interaction through richer real-time feedback on my balance, pace of activity, and overall coordination while performing the Wii Fit exercises, treating them in essence as a game.
When the Xbox 360 Kinect came along, my experiences graduated to yet another degree. With this gaming system, no physical device attaches to you or gets stepped on. In this Augmented Virtuality experience, YOU are the matter constituting the physical control “device.” The Kinect cameras profile you, the game learns who you are, and the system then tracks the motion of your whole body and its parts. The comprehensive and tight interactivity between my bodily movements and my virtual avatar boosted my engagement tremendously. Whether shooting down Curvey Creek in the River Rush game of Kinect Adventures!, taking dance lessons with Dance Central, or exercising to Your Shape: Fitness Evolved, my every movement comes into play.
Moreover, with the two instructional games my virtual coach commented on my moves, praised me for the correct actions, encouraged me to do right what I had yet to master, gave me very specific critiques to improve my performance (e.g., “Squat lower. Kick higher! Stay in rhythm.”), and, in general, matched the workout sessions to challenge me at my current skill level. The instantaneous feedback and continual encouragement from a rather patient and never-tiring coach provides for exceptional learning and workout experiences that move me toward mastery and self-transformation.
Augmented Virtuality technology thus holds the potential for such life-altering activity in many facets of being, such as making work as engaging and intrinsically rewarding as play. We want to play for the experience itself. So why not design other life activities, including work, to be as engaging as play? What if we pursued work for the experience itself, where we surrender ourselves in the flow of its activities to lose our consciousness of actually toiling away? As we shall see further in Chapter 13, “From Design to Deployment,” whether we view what we do as work or play comes down to our personal perspective of an activity. As Mark Twain put it, “Work consists of whatever a body is obliged to do, [while] Play consists of whatever a body is not obliged to do.”10 Adam Penenberg says in Fast Company of the original Augmented Virtuality device, “Nintendo’s Wii is so widely used in rehabilitation that some have dubbed it ‘Wii-hab.’ Patients who have suffered strokes, paralysis, torn rotator cuffs, broken bones, and combat injuries play Wii baseball, boxing, bowling, and tennis…. Grueling rehabilitative exercise becomes a game—a competition so engrossing patients can forget they are engaged in occupational therapy.”11
The capabilities for such nonobligatory play continue to advance rapidly beyond the Wii and even the Kinect. Microsoft sees the latter enabling, in the words of engineer Alex Kipman, “a world where technology more fundamentally understands you, so you don’t have to understand it.”12 Virtual Reality pioneer Jaron Lanier, currently Partner Architect at Microsoft Research, wants to take this view even further, to encompass us understanding the world. He now works on what he calls “somatic cognition,” a “new frontier of human potential” where “the human body is extended by physical objects that map body motion into a theater of thought and strategy.”13 He sees the Kinect camera and its successors as enabling people to map their bodies not just to avatars but to anything they wish to study—chemical molecules, mathematical shapes, plants and animals—because people will learn more when they “become the thing” they are studying.14 His colleagues at Microsoft Research also work on projects advancing the state of the art in Augmented Virtuality beyond the Kinect, such as an armband that directly translates muscle movement into a game controller and, potentially, a controller for all sorts of human-computer interactions.15 Meanwhile, people across the globe extend the capabilities of the Kinect itself by hacking it to perform ever more varied and amazing feats of augmentation.16
Sometimes, though, you do not want a workout via your actions but information to guide your actions. Remember SixthSense, from the MIT Media Lab’s Fluid Interfaces Group, and the G-Speak Spatial Operating Environment from Oblong Industries? Described in Chapter 3, “Augmented Reality,” for their ability to augment the real world with digital information displayed in front of the user, they seem of a kind with Kinect, at least directionally. And there’s that Augmentation Axis again, for although both involve gestures and motion, whether the right realm is Augmented Reality or Augmented Virtuality really depends on where the primary experience lies, the Space-based real place of the former or the No-Space-based virtual place of the latter, as well as its aim, the Time-centered informational needs of the former or the No-Time-centered gaming aspects of the latter. (Perhaps there are other such axes, planes, or even curves to be discovered within the Multiverse.)
And as with Augmented Reality, over time more and more of Augmented Virtuality will embrace the mobile phone. People can use smartphones not just to bring digital information to any physical location but to bring physical activity to any virtual location. Already most phone manufacturers are copying the Apple iPhone’s accelerometer and motion and proximity sensors, used in applications ranging from ones as trivial as quaffing back a virtual beer to ones as serious as medical training. And Immersion Corporation of San Jose, California, brings its background in medical and industrial haptics to the mobile phone, licensing “immersive messaging” technology that enables people to feel their phones in new ways. Spouses, for example, can send beating hearts as a symbol of their love or simultaneously draw “finger trails” on their individual screens, seen by both parties in a virtual “shared space,” with vibration feedback when they touch. The company proclaims, “Digital information remains two-dimensional: something we look at, listen to, and read about. Haptic touch is the missing piece, the sensory element that will transform information into experience.”17
Through the addition of all these technologies to the mobile phone, over time its use as a physical controller will be integrated into more and more Virtuality environments, augmenting them with the physicality of Matter.
Using physical substances—whether general-purpose or special-built, whether intelligent devices or body parts—to enhance autonomous events within virtual places is the essence of Augmented Virtuality. Consider how you can apply this realm of the Multiverse to your business through the following principles:
∞ In order to augment Virtuality, you first must start with a Virtuality experience. If you already stage one today, consider how you can enhance it through material substances.
∞ If you do not already stage one, you might then explore the webcam-based version of Augmented Virtuality, creating a graphic trigger on your physical product. While this often comes off as a gimmick, especially when used just for marketing purposes, integrating it into the offering itself as Hallmark and Topps did could yield great value.
∞ Recognize that the physical object always stands in for a virtual artifact. So with Augmented Virtuality you generally must design two experiences: one virtual and one physical.
∞ Engage the body. In whatever way you implement this realm—with a graphic image, with a Wii-like controller, with a marker-enabled device, or directly with the body itself as a natural interface—the fact that you are using something material will naturally involve the body, so focus on doing so in a way that makes the bodily experience engrossing and absorbing.
∞ Game controllers, musical instruments, sports equipment, vehicles, and medical instruments have all proven amenable to this approach; seek out new physical objects—controllers, instruments, equipment, devices—that can serve as special-purpose controllers.
∞ And of course, as with much of the Multiverse, consider shifting from the use of special-built to general-purpose devices. Always keep up to date with smartphone technology to see how it can be incorporated.
∞ Whether it is the stated goal or not, ensure greater physical fitness results from the bodily engagement.
∞ Focus on intellectual learning as well.
∞ Embrace haptic technology and other ways of going beyond sight and sound. This can make the experience both more engaging and more realistic.
∞ Finally, discern the distinctions between Augmented Virtuality and Augmented Reality to power your explorations. As we saw throughout this chapter, there seems to be an Augmentation Axis that repeatedly links these two realms. Keeping in mind the exact nature of each, without muddying them together, can double your chances of finding new opportunities.
As evidenced by the relative dearth of viable Augmented Virtuality innovations to date—many are gimmicky and others still in the research labs—companies have spent far more time investigating and applying Augmented Reality than Augmented Virtuality. (So much so that sometimes even when they do the latter they call it the former.) This realm therefore begs for further exploration; look diligently in your business for possibilities to include material substance in otherwise virtual experiences.