8

Reach Out and Touch Someone

Haptics, Tactile Presence, and Making VR Physical

IF YOU’RE LOOKING for the forefront of VR as entertainment, you’d likely guess it’s somewhere in Los Angeles or New York, where densely clustered content companies are pioneering new forms of storytelling. Maybe you’d take a flyer on the Bay Area, where billion-dollar companies are sinking millions into research. Chances are you wouldn’t say, “Oh, about a half hour south of Salt Lake City.” But that’s where I am—in Lindon, Utah, to be precise, on a pancake-flat plateau of land sandwiched between Utah Lake and the mountains of the Wasatch Range.

This whole area, the stretch of Interstate 15 from Salt Lake City down to Provo, has dubbed itself Silicon Slopes in solidarity with so many other tech-heavy areas. Ancestry.com and Overstock both have headquarters nearby. So does DigiCert, which—to make a technical story short—helps web browsers verify the authenticity of secure websites. In fact, DigiCert’s founder is the reason for my visit.

Ken Bretschneider grew up in an Ontario fishing town that took its holidays seriously, and Halloween was his favorite; when Ken was a kid, his friend’s father transformed their garage into a maze of horrors. In 2008, a grown-up Bretschneider turned his own Utah home into a haunted house—and then did so each year thereafter, ultimately expanding to occupy a half acre of land and bringing in more than ten thousand people. He decided to turn the seasonal idea into a year-round venture, and in 2014 he unveiled his plan: Evermore Park, as he called it, would be a Victorian-era “adventure park” that prioritized immersion over thrills. Period-appropriate employees would interact with visitors, and the park would change to adapt to holidays like Halloween and Christmas. Think Disneyland mixed with Colonial Williamsburg. (Actually, think Disneyland mixed with Brooklyn Williamsburg—the mustaches are a better match.)

But even as Evermore was being planned, another idea was taking shape. Two of the people Bretschneider had hired to help make Evermore a reality were Curtis Hickman, a professional magician and illusion builder, and James Jensen, a multimedia designer. Jensen’s job was to create a digital version of the park as it came together. Seeing the digitally rendered Evermore made them think of VR—and Jensen pitched an idea he’d wanted to realize since the early 2000s. He had been working on a scuttled movie adaptation of Little Red Riding Hood; it leaned heavily on computer-generated backgrounds and used a position tracker on the camera to help directors see where actors should stand for proper placement in the CGI world. Why don’t we try to map a VR world over a real physical space? Jensen asked Bretschneider and Hickman.

That idea became The VOID.

ENTER THE VOID

The VOID—in suitably grandiose fashion, it stands for Vision of Infinite Dimensions—is the realization of Jensen’s dream. It’s part of a wave of “location-based” VR facilities that are merging presence with sport for activities that are half holodeck, half laser tag. The experience hinges on creating a VR video game that’s physically re-created on a large soundstage. Any feature you see in the game is also in the real world, so if in VR you see a wall, and you reach out your hand to touch that wall, your flesh-and-blood hand will touch an actual wall. The soundstage is ringed by a huge array of trackers, which allows you to roam over a much larger distance than you’d be able to with a home VR setup. See a chair in your headset? Walk over to it and sit down—like, bend your knees and lower your butt—and you’ll find your corporeal hiney supported by a chair that’s the same size as the one you see.

Two adventures are up and running when I visit: an Indiana Jones–style jungle adventure called “The Curse of the Serpent’s Eye,” and a second experience set in the Ghostbusters universe, blessed by the film’s director, Ivan Reitman, himself. Both can accommodate up to four players at a time, but I’m going to go through each of them with only one other person.

Before that happens, though, I need the proper gear. That’s why I’m standing here in The VOID’s prep area, re-creating one of my all-time favorite movie tropes: the “suiting up” montage that’s familiar to any action-movie fan. First, I shrug on a heavy vest they call the backtop. It’s got big plastic patches on the front that rumble and buzz, to give me sensory feedback, and a brawny laptop on the back, along with a big battery and some more sensory modules. The headset connects to the laptop, which turns me into a self-contained unit, able to move around the soundstage freely.

Next up: the headset, which is basically a souped-up version of an Oculus Rift. It’s got huge headphones, and slides down over my face like a sci-fi visor. Small silver balls that ring the visor ensure that the tracking system high overhead can follow me accurately no matter what crazy contortions I make on the soundstage. And perhaps most noteworthy, a small Leap Motion module attached to the front of the visor brings my hands into VR, no controllers necessary. With my headset on, I hold my hand out in front of me—and there’s a hand, fingers wiggling just like mine.

Enough standing around. There’s a door in front of me with a hand symbol on it. I push it open and walk through, the feedback modules on my chest and back buzzing, and find myself walking down a stone hallway in the ruins of a Mayan temple. And when I say I’m walking, I don’t mean a couple of steps—I mean I’m walking—like, more than I should be able to on a thirty-foot-by-thirty-foot stage. That’s because of “redirected walking,” a VR technique that takes advantage of humans’ actually-kinda-horrible navigation skills.

The thing is, you stink at walking in a straight line. Yes, you. And me, and everyone else as well. As a rule, people go off course when they can’t see where they’re going; that’s why we began using the stars as a guide when traveling at night or on open water. Redirected walking takes advantage of that fuzzy sense of direction by presenting you with a path in your headset that’s slightly different from the one you’re actually walking on. Your vestibular system is far more finicky than your vision; as long as your eyes tell you you’re moving in the same general direction that your inner ear feels, everything’s fine. So in your headset, you might think you see a straight hallway ahead of you, but you’re actually walking in a curved line—and a VR experience can turn you as much as 49 percent more than you think you’ve rotated without your noticing. At The VOID, redirected walking is how you can roam through an expansive adventure like “The Curse of the Serpent’s Eye” while in reality tracing a circuitous, but compact, path on a surprisingly small soundstage.
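
If you’re curious what that trick looks like under the hood, here’s a rough back-of-the-napkin sketch (mine, not The VOID’s) of the “curvature gain” flavor of redirected walking:

```python
import math

# A toy sketch of "curvature gain," one redirected-walking trick: while the
# player walks a virtual hallway that looks perfectly straight, the world is
# rotated a tiny amount for every step they take, so their real-world path
# bends into an arc that stays on the soundstage. The radius below is an
# illustrative guess, not The VOID's number or a published detection threshold.
CURVATURE_RADIUS_M = 10.0   # the real-world arc the player unknowingly walks

def curvature_yaw_injection_deg(distance_walked_m: float) -> float:
    """Extra rotation to apply to the virtual scene after a short step.

    Rotating the scene by (distance / radius) radians makes the player correct
    their heading to keep the hallway centered, which bends their physical path
    into a circle of the given radius while the hallway stays straight.
    """
    return math.degrees(distance_walked_m / CURVATURE_RADIUS_M)

# Over a half-meter step the world rotates about 2.9 degrees; the player
# compensates without noticing, and a thirty-foot stage starts to feel endless.
print(round(curvature_yaw_injection_deg(0.5), 2))  # 2.86
```

The 49 percent figure above belongs to the companion trick, rotation gain, which scales how far the virtual world turns relative to how far your head actually does.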

There’s more to my adventure than just walking, though. There are “4-D effects,” as the theme-park world has come to call them: misters and fans that blow cool air and moisture across my face to simulate the jungle environs, motors in floor panels that can make me feel like the temple is collapsing. There are objects that belie their pedestrian real-life appearance in VR: what looks like a spray-painted club with some tracking balls on it becomes a torch I can pick up and light on a brazier. Likewise, that brazier is just a radiant heater behind a grate, but in VR I can still feel its warmth on my hands. When I hold the lit torch up to a seal on a door and the door explodes, I can feel the concussive impact in my chest—and reach out to steady myself against a wall. These sorts of immersive touches simply aren’t possible in a home setup, but that’s the whole point of location-based VR. It’s presence on a grander scale.

That presence is also compounded by the fact that in The VOID, I’m not alone. In “The Curse of the Serpent’s Eye,” I can hand the torch to my companion, and because she sees it in her headset, the hand-off is perfect. Sure, I can hand an “object” to another “person” in regular VR, but every part of that is simulation. The object is my hand controller with the trigger held down; when I let go of the trigger and thus the “object,” I’ve still got the controller in my hand. The other person might be another person somewhere, but in VR she’s just an avatar—our fingers can’t brush against each other the way they do in The VOID.

And that’s just an exploratory adventure. The Ghostbusters experience adds action to the equation. Along with the backtop and headset, my companion and I tote big rifles; in VR, those backtops and rifles turn into proton packs and blasters, because we’re Ghostbusters.

Let me repeat that: WE’RE GHOSTBUSTERS.

Look, I was ten when that movie came out, and in the thirty-odd years since, I’ve seen it dozens more times. I’ve made more than one “last of the Meketrex supplicants” joke at work. I truly believe it’s one of the greatest comedies—nay, movies—ever made. So maybe I’m not the most objective source here. But I’ve gone through more VR experiences than I can count, from gaming to entertainment to social to spiritual to sexual (don’t worry, we’ll get there), and I’m here to tell you that standing on a rickety metal catwalk on the edge of a building, its chain railing bouncing against my thighs, yelling at my co-worker to cross the streams so we can take out the Stay Puft Marshmallow Man, might just be the most fun I’ve ever had in there.

But why? What do you call this kind of presence? Well, to start to answer that, we’re going to need to go from a virtual Mayan temple to a different kind of Temple.

COPRESENCE

In 2003, Shanyang Zhao, a sociologist at Temple University, published a paper called “Toward a Taxonomy of Copresence.” It’s very smart and very long and not strictly about virtual reality, but its chief purpose is to establish a system that accounts for the various ways that two people can be together. (Or, in his terms, “the conditions in which human individuals interact with one another face to face from body to body.”) This was especially important, he wrote, to account for the ways in which the internet had expanded the parameters of what we mean when we talk of being “with” someone.

Zhao laid out two different criteria. The first was whether or not two people are actually in the same place—basically, are they or their stand-ins physically close enough to be able to communicate without any other tools? Two people, he wrote, can either have “physical proximity” or “electronic proximity,” the latter being some sort of networked connection. The second criterion was whether each person is corporeally there; in other words, is it their actual flesh-and-blood body? This second condition can have three outcomes: both people can be there corporeally; neither can be there corporeally, instead using some sort of stand-in like an avatar or a robot; or just one of them can be there corporeally, with the other using a stand-in.

The various combinations of those two criteria lead to six different types of copresence, Zhao wrote. “Corporeal copresence” is plain old face-to-face physical contact: two people in a coffee shop. If those real people are networked together and can communicate face-to-face, like a Skype call, that’s “corporeal telecopresence.” And if something isn’t corporeal, it’s virtual. “Virtual copresence” is when a flesh-and-blood person interacts physically with a representative of a human; if that sounds confusing, a good example is using an ATM, where the ATM is a stand-in for a bank teller. “Virtual telecopresence” takes a physically present stand-in like an ATM—something that spits out money when you insert a card and press some buttons—and replaces it with, for instance, Waze’s turn-by-turn navigation, which you listen to while you drive but which is “tele” because it’s being delivered from a network server.

Got it so far? Don’t worry, there’s no quiz. It does get a little more difficult, though. See, there’s “hypervirtual copresence,” which involves nonhuman devices that are interacting in the same physical space in a humanlike fashion. This is pretty uncommon, but Zhao uses the example of robots playing soccer. And finally, “hypervirtual telecopresence,” in which nonhuman stand-ins interact via networking—like two bots communicating over the internet.
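
If it helps to see the whole grid laid out at once, here’s a quick sketch (my framing, not code from Zhao’s paper) of how the two criteria cross to produce the six types:

```python
from itertools import product

# My own quick framing of Zhao's grid, not anything from his paper: two kinds
# of proximity crossed with three modes of embodiment yield the six types.
proximities = ["physical", "electronic"]
embodiments = ["both corporeal", "one corporeal, one stand-in", "both stand-ins"]

labels = {
    ("physical", "both corporeal"): "corporeal copresence (two people in a coffee shop)",
    ("electronic", "both corporeal"): "corporeal telecopresence (a Skype call)",
    ("physical", "one corporeal, one stand-in"): "virtual copresence (a person at an ATM)",
    ("electronic", "one corporeal, one stand-in"): "virtual telecopresence (Waze directions)",
    ("physical", "both stand-ins"): "hypervirtual copresence (robots playing soccer)",
    ("electronic", "both stand-ins"): "hypervirtual telecopresence (two bots on the internet)",
}

for combo in product(proximities, embodiments):
    print(f"{combo[0]} proximity + {combo[1]:<28} -> {labels[combo]}")
```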

So that’s copresence. Except something is missing. That something is social VR. It’s obviously copresence, but it doesn’t quite fit into any of these categories. Zhao refers to this sort of hybrid as a “synthetic environment” and claims that it’s a combination of corporeal telecopresence (like Skyping) and virtual telecopresence (like Waze directions)—“human individuals [interacting] with each other remotely in real time via avatars that operate in virtual settings.”

Checks out, right? That sounds like every form of social VR we’ve talked about so far. Well, every form except something like The VOID. It’s a “synthetic environment,” sure, but one that takes its cues from a corporeal environment—and vice versa. It’s physical proximity and electronic proximity, blended together to create an entirely new type of immersion.

So new, in fact, that it doesn’t exactly have a name yet. For now, let’s call it tactile presence.

HAPTIC TACTICS FOR TACTILE APTNESS

Of the five human senses, a VR headset can currently stimulate only two: vision and hearing. That leaves three others—and while smell and taste may come someday, for now let’s just file those away under Slightly Creepy Gimmick. (That doesn’t mean people aren’t working on both of those, though. Some researchers have shown off an “olfactory display” made out of micropumps and acoustic devices, and others have worked out how to electrostimulate the tongue to induce taste sensations. Both of those projects, it may not surprise you to learn, were developed in Japan.) What really matters for VR right now, and in the coming decades, is touch. Touch is how we comfort each other; it’s how we please each other. Even almost-touch can be a factor, because the heat we all radiate arouses tactile sensations. Remember how being excluded in a VR game of catch could make people act more antisocial in real life? Well, being caressed with slow, “affective” touches can mitigate the feelings of ostracization that can arise from that same VR game.

Communicating that touch across distance, though, might just be the most difficult piece of the presence puzzle. Imagine wearing a VR headset and reaching out your hand to touch a surface. The VOID’s magic is that it presents a solid surface for your hand to touch—but what if you were at home? And what if it weren’t a wall you were reaching out for, but another person? Even more difficult, what if that person were reaching out their hand to touch yours? How will we get to a point where that becomes not just possible, but realistic?

I don’t know, to be completely honest. But given what we’re capable of doing now, and how we’ve gotten here, and what people are experimenting with, we at least have some sense of a road map. So let’s start by talking about haptics. The word just means “relating to the sense of touch,” but haptics has become an increasingly important field within what’s known as “human-computer interaction,” and the term is now often used to refer to technology that seeks to re-create touch.

The idea of haptic feedback to create tactile presence has been around since at least 1932. That’s when Aldous Huxley’s future-set novel Brave New World imagined the “feelies”—movies that induced physical sensations in viewers that matched up with what was happening on the screen. In Huxley’s mind, this was a satirical extension of the “talkies” that had evolved from silent movies a few years earlier, but the book’s memorable (and, okay, straight-up offensive) feely scene offered up a use that was more than just satire:

The house lights went down. . . . “Take hold of those metal knobs on the arms of your chair,” whispered Lenina. “Otherwise you won’t get any of the feely effects.”

The Savage did as he was told.

Those fiery letters, meanwhile, had disappeared; there were ten seconds of complete darkness; then suddenly, dazzling and incomparably more solid-looking than they would have seemed in actual flesh and blood, far more real than reality, there stood the stereoscopic images, locked in one another’s arms, of a gigantic negro and a golden-haired young brachycephalic Beta-Plus female.

The Savage started. That sensation on his lips! He lifted a hand to his mouth; the titillation ceased; let his hand fall back on the metal knob; it began again. The scent organ, meanwhile, breathed pure musk. Expiringly, a sound-track super-dove cooed “Oo-ooh”; and vibrating only thirty-two times a second, a deeper than African bass made answer: “Aa-aah.” “Ooh-ah! Ooh-ah!” the stereoscopic lips came together again, and once more the facial erogenous zones of the six thousand spectators in the Alhambra tingled with almost intolerable galvanic pleasure. “Ooh . . .”

That idea of induced tactile pleasure would pop up in sci-fi books and movies again and again, from the Orgasmatron in Woody Allen’s 1973 spoof Sleeper to the electrode-link VR hookup in Demolition Man. (I know I’ve mentioned it before, but it’s just so weird.) Given that personal computers were still decades away, though, the only haptic feedback that really entered the public consciousness came courtesy of “Magic Fingers,” which made your motel bed vibrate for a quarter. Novelty aside, haptic interaction was largely confined to mechanical devices, like those in the 1940s that would let workers remotely handle hazardous materials.

The move from these analog examples to digital ones began in research labs in the 1960s, and in 1971 one of those researchers published a doctoral dissertation that detailed a surprising breakthrough. “A computer helping an individual feel some object which existed only in the memory of the computer could justifiably seem to be a far-out idea,” wrote A. Michael Noll—but he had devised a machine which could accomplish just that.

His invention looked like a metal cube filled with a ceiling-mounted array of beams and motors. On its top was a joystick-like device that could be moved in all three dimensions. The beams and motors could make the stick more difficult to maneuver, according to instructions it received from a computer. If users moved the stick around enough, taking note of where it stopped, they could deduce that they were moving the device along the outside of an invisible cube, or the inside of a sphere.

Noll described it as “similar to a blind person exploring and poking around three-dimensional shapes and objects with the tip of a hand-held pencil.” It was the first time such a thing was possible, and Noll seemed to sense that even something so “far-out” had wide-ranging applications. He imagined a person in New York City “feeling” a cloth produced by a manufacturer in Tokyo. “‘Teleportation’ in one sense would be closer to reality,” he wrote.

Now that he had invented a way for people to feel virtual objects, Noll planned to build a 3-D head-mounted display that could show the computer shape—essentially the first VR headset. However, he accepted a job with the office of President Richard Nixon’s science advisor and never returned to that research.

More than forty years later, Noll seemed to be disappointed with the amount of progress VR had made without him. “It is perplexing that with the advances in technology that have occurred since the early 1970s that what is today called ‘virtual reality’ and ‘haptic’ seem behind our vision back then,” he wrote in 2016. “I therefore issue my challenge to today’s community to create what was envisioned decades ago. Otherwise, much of today’s virtual reality indeed is little more than real fantasy.”

Noll’s work was groundbreaking. But where did the rest of the world get its first glimpse of synchronized haptic feedback? Video games, of course. In 1976, still the early days of arcades, Sega released a game called Moto-Cross (later to be rebranded as Fonz in a bald attempt to capitalize on the Happy Days character’s popularity) in which players controlled their tiny motorcycle-riding characters via a set of handlebars mounted to the game. If the bike collided with another, those handlebars would vibrate in players’ hands, making them feel the “collision.”

The technology was simple, essentially a very localized version of Magic Fingers, but it opened up a new world of so-called force feedback in video games. Haptic features spread to driving games, then to other arcade machines—and in the meantime, to pagers and cellular phones—and in 1997, they entered the home. Nintendo began selling the Rumble Pak, a small module that attached to the bottom of one of the company’s game controllers for use with now-classic games like Star Fox 64 or GoldenEye 007; when players steered their ships into others or fired guns, the controller vibrated appropriately. Thereafter, virtually every video game console came with controllers that had force feedback built into them.

When VR rearrived, haptics became significantly more important. The added immersion that came from a vibrating steering wheel, or the ability to feel recoil in a shooting game, paled in comparison to hand presence. When you look at your “hands” in VR, the controllers you’re holding melt away. Your brain effectively collapses the boundary between flesh and artifice, so haptic feedback is no longer a matter of feeling the controller you’re holding, but of holding the feeling.

The hand presence that devices like the Oculus Touch enable, though, is merely a first step toward tactile presence, truly immersive sensations of touch. Like conventional video game controllers before them, today’s VR controllers use tiny motors embedded within them. These motors and their vibrations can communicate impacts of varying strength and duration—buzzes, taps, knocks, thuds—but little else. There’s no shape, no weight, no texture. That’s the magic of The VOID, and other VR facilities like it: they induce tactile presence through real-world objects. But in order for tactile presence to get out of The VOID and into the Metaverse, for Noll’s vision of sampling suit cloth from a continent away to go from far-out to attainable, we’re going to need to be able to experience those properties. No single approach gives us all of those things yet, but a number of solutions exist that can provide one or two pieces of the puzzle.

Haptic-feedback accessories take the premise of force feedback—using rumble to communicate impact and contact—and distribute the feedback over more of your body, rather than just your hands. At The VOID, this takes the form of a vest packed with vibrating patches that go off in sync with explosions. However, that’s just the beginning. Lotte World, a popular indoor attraction in Seoul, South Korea, features a VR experience that puts each user in a five-hundred-dollar suit packed with eighty-seven distinct feedback points; that’s enough to make it feel as if a zombie is raking its claws across your back.

Gloves have been a part of the public’s fascination with VR since the Nintendo Power Glove, a short-lived late-’80s video game peripheral that allowed people to control a game with their gestures. (Although the Power Glove wasn’t a VR device, it was the brainchild of the same pioneers who had come up with the Data Glove, which was.) By now, you know that despite Mark Zuckerberg’s Spider-Man gesture in Chapter 3, gloves aren’t necessary as an input device; thanks to outward-facing sensors like Leap Motion that can track our individual fingers, we can use our own hands. Gloves do, though, have a distinct advantage in output: because they cover your hands completely, they’re able to provide much more detailed haptic feedback. This means that if you’re using, say, a virtual keyboard, you’re able to feel a little tap or buzz on the relevant finger when you “press” a key—intensifying the sensation and likely increasing your speed and accuracy.
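
To make the virtual-keyboard example concrete, here’s a hypothetical sketch of routing a keystroke to a tap on exactly the finger that pressed it. The glove API below (HapticGlove and its pulse call) is invented for illustration; every real glove SDK looks a little different.

```python
from dataclasses import dataclass

@dataclass
class KeyPress:
    key: str
    hand: str     # "left" or "right"
    finger: str   # "thumb", "index", "middle", "ring", "pinky"

class HapticGlove:
    """Stand-in for a real glove driver; this pulse() call is invented for illustration."""
    def pulse(self, hand: str, finger: str, amplitude: float, duration_ms: int) -> None:
        print(f"buzz {hand} {finger}: amplitude={amplitude}, {duration_ms} ms")

def on_virtual_keypress(glove: HapticGlove, press: KeyPress) -> None:
    # A light, brief tap on exactly the finger that made contact confirms the
    # keystroke, the kind of detail a controller's single rumble motor can't give you.
    glove.pulse(press.hand, press.finger, amplitude=0.4, duration_ms=20)

on_virtual_keypress(HapticGlove(), KeyPress(key="f", hand="left", finger="index"))
```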

Full body suits are the logical end of direct haptic feedback. Like gloves, they’ve been in the cultural imagination for decades, largely as input devices. (Once again, I direct you to Lawnmower Man, this time for the wetsuits that allowed people to move freely in VR.) However, like gloves, the output potential is far more interesting for presence, and especially for intimacy. A haptics company called HaptX has secured a patent for what it calls a “whole-body human-computer interface” that’s basically a VR nerd’s ultimate dream. Imagine a wetsuit, its interior lined with tiny actuators that can deliver temperature and pressure changes. Now imagine that you put that on and then climb into an exoskeleton that suspends you in the air and allows you to move freely, while still delivering actual force feedback—like, making it more or less difficult to move various body parts. That’s more than “rumble”; that’s damn near groundbreaking.

Or it would be, if it ever panned out. Is HaptX anywhere near having a consumer product? Kind of. The company has announced plans to release a glove in 2018, and even brought it to Sundance this year. (Previously, HaptX would show off its technology via a box you stuck your hand into.) In those demos, a small deer walks across your hand—and you can feel its legs skittering on your palm. The demo chills your hand when a virtual snowball is placed on it and warms it when a virtual dragon breathes fire in your direction. That “whole-body human-computer interface” isn’t something we’re talking about for tomorrow, or next year, or even longer—but as proof of concept, it ain’t bad.

These are all wearable options, but plenty of other approaches might just end up helping transport your body along with your mind—like, say, using ultrasonics, sound waves pitched too high for human ears to hear, to approximate objects that your hands can feel. I know it sounds odd, but it works. Not long ago, I stood with a headset on my face and my hand hovering over a small square board. The board was lined with a dense, orderly array of small black circles, each of them an “ultrasonic transducer”—essentially a tiny speaker capable of pumping out bursts of acoustic pressure.

In the headset, I saw a cube and a ball; when I reached my hand out to push them around, I could feel them. They didn’t have heft, and they didn’t exactly feel solid—if I tried to cup them in my palm, they disappeared, since the ultrasound waves couldn’t pass through my hand—but I could sense their presence, and even distinguish the edges of the cube and the roundness of the ball.
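
The principle behind that board is a phased array: time each transducer so all the waves pile up at a single spot in midair. Here’s a minimal sketch of the arithmetic, under textbook assumptions rather than anything Ultrahaptics has published.

```python
import math

# A minimal, textbook-style sketch of how a phased array focuses pressure in
# midair. This is the general principle, not Ultrahaptics' code: each
# transducer's signal is delayed so every wave arrives at the focal point in
# phase, and the pressure adds up right where your fingertip is.
SPEED_OF_SOUND_M_S = 343.0
FREQUENCY_HZ = 40_000.0  # a common ultrasonic transducer frequency

def phase_offsets(transducer_positions, focal_point):
    """Phase (radians) to drive each transducer with so wavefronts meet at the focal point."""
    dists = [math.dist(p, focal_point) for p in transducer_positions]
    farthest = max(dists)
    # Nearer transducers wait a little longer so every wavefront arrives together.
    delays_s = [(farthest - d) / SPEED_OF_SOUND_M_S for d in dists]
    return [2 * math.pi * FREQUENCY_HZ * t for t in delays_s]

# Example: four transducers at one corner of a flat board, focusing on a point
# 15 centimeters above that corner.
board = [(x, y, 0.0) for x in (0.0, 0.01) for y in (0.0, 0.01)]
print([round(p, 2) for p in phase_offsets(board, (0.0, 0.0, 0.15))])
```

Roughly speaking, sweep that focal point around and pulse it at a rate skin can actually feel, and your palm reads the pressure as the edge of a floating cube or the curve of a ball.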

VR may not even be the place we see (well, feel) ultrasonic haptics in our lives for the first time. The company that created the VR demo I experienced, Ultrahaptics, has sold development kits to a number of car companies, some of which are doubtless experimenting with ways to create invisible dials and knobs for the instrument panel of the future. But even if VR is second or third in line, ultrasonics’ applications may go far beyond holding objects. In fact, because of its formlessness, it might be the haptic technology best suited to approximate the effects of human touch.

I’m not just saying that idly; a collaboration between an interdisciplinary team of British academic researchers and the founder of Ultrahaptics (who came up with the idea as a graduate student at the University of Bristol) found that the technology could be used to communicate emotion through the air. In their study, they asked a group of ten people to use the Ultrahaptics device to create patterns based on images like a car on fire, a graveyard, and a calm scene with trees. A second group of ten people then felt all the patterns and narrowed them down to what they considered to be the best “touch” for each image. Finally, a third group of ten people felt the patterns that made the cut, except the patterns were presented in a random order that didn’t necessarily match the image each pattern had been created for—and, when asked to rate how well the “touch” matched the images, they significantly preferred the pairings that had actually been created by the first group. It was like a game of telephone; the third group had no way to know what the first group had done, yet, taken together, the subjects settled on touch patterns that they agreed fit the moods of the images.

“Our findings suggest that for a positive emotion through haptic stimulation one might want to stimulate the area around the thumb, the index finger and the middle part of the palm,” the researchers wrote. Similarly, “if one wants to elicit negative emotions . . . the area around the little finger (pinky) and the outer parts of the palm become a relevant design space.” The article outlines not just areas of the hand that might elicit certain emotional reactions, but also the direction of the touch (vertical stimulation toward a person is positive, while caresses moving away from the palm and toward the fingers are negative) and even the sonic frequency and duration of the touch. This is just a starting point, to be sure, but it suggests a path that may help us unlock, and even codify, the power of touch in VR.

But what about texture? How might we be able to distinguish soft pima cotton from pebbled leather—or even smooth skin? As difficult as communicating the shape of an object is, texture is even more difficult. Yet, a pair of prototype controllers that Microsoft’s research department has built might be able to do both of these things. One of them, called NormalTouch, is in many ways a tiny, handheld version of A. Michael Noll’s 1971 breakthrough haptic device: a small pad underneath the fingertip can tilt and telescope in keeping with the contours of a virtual object. Its companion, TextureTouch, adds to that fingertip pad sixteen tiny pillars that can extend or retract in order to help you feel, say, the ridges on a statue. It may also let you feel some other things; as one commenter on a YouTube video demonstrating the controllers asked, “can i touch virtual tity’s.” (Thanks, Smack Thrustcrusher, whoever you are; your spelling, grammar, and punctuation skills are a credit to keyboard-bound horndogs everywhere.)
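
Joking aside, here’s a rough sketch (mine, not Microsoft’s) of the idea behind that sixteen-pin pad: sample the virtual surface under each pin and extend the pin to match, so the model’s ridges become ridges you can feel.

```python
import math

GRID = 4                 # 4 x 4 = sixteen pins under the fingertip
PIN_PITCH_MM = 1.5       # assumed spacing between pins
MAX_EXTENSION_MM = 2.0   # assumed mechanical travel of each pin

def surface_height_mm(x_mm, y_mm):
    # Stand-in for querying the virtual object's surface (say, the ridges on a
    # statue); here, a simple sinusoidal ridge pattern.
    return 1.0 + 0.8 * math.sin(x_mm * 2.0)

def pin_heights(finger_x_mm, finger_y_mm):
    """Return a 4x4 grid of pin extensions for the fingertip's current position."""
    rows = []
    for r in range(GRID):
        row = []
        for c in range(GRID):
            x = finger_x_mm + (c - (GRID - 1) / 2) * PIN_PITCH_MM
            y = finger_y_mm + (r - (GRID - 1) / 2) * PIN_PITCH_MM
            row.append(round(min(max(surface_height_mm(x, y), 0.0), MAX_EXTENSION_MM), 2))
        rows.append(row)
    return rows

# As the finger slides across the virtual object, the raised-pin pattern shifts
# beneath it, which the brain reads as texture.
for row in pin_heights(0.0, 0.0):
    print(row)
```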

Granted, the fact that Microsoft is exploring this stuff is about as surprising as Bill Gates wearing a sweater—but you can’t say the same for one of the other major companies that might unlock the path to feeling texture in VR. Disney’s research department has been investigating haptics for years. Some of that is because it’s an integral part of the theme-park experience, but the company also clearly has an eye on the future; scientists working in Disney Research’s Pittsburgh headquarters have released multiple papers detailing tactile feedback technology for both VR and its cousin, augmented reality. (We’ll get a bit more into AR later on.) One of its most interesting projects involves creating an electrical field around a person’s fingers, which can then be manipulated, allowing programmers to make a smooth real-world object feel to the user as though it’s bumpy. “In a broad sense,” the researchers wrote in an article detailing the project, “we are programmatically controlling the user’s tactile perception.”

All of which is to say: the human brain, as remarkable as it is, is hackable. So yes, tactile presence of the type we experience in The VOID—solid, textured, immovable objects we can interact with—is currently an IRL-only phenomenon. But the idea of inducing such sensations virtually isn’t just plausible. Increasingly, it’s probable.

LOCATION, LOCATION, LOCATION: YOUR FIRST MIND-BLOWING VR EXPERIENCE MIGHT NOT BE AT HOME

Meanwhile, facilities like The VOID have become one of the most vibrant of VR’s many dimensions. Whether you think of them as arcades or theme parks, they offer a number of distinct advantages over a home setup. For one, depending on the gear, you can roam freely inside a much larger space than an off-the-shelf VR system can manage. Thanks to the wonders of redirected walking, it doesn’t take a football field to make you think you’re on a football field. Besides, these venues are sinking some serious cash into a bespoke setup, from ultra-high-powered PCs to tracking systems to headsets that aren’t even available for home use.

One of those, StarVR, looks like something out of an episode of Star Trek. It’s not uncomfortable, but it’s huge, with a display that extends around the sides of your head. Inside the headset, the virtual world goes well past the edges of your peripheral vision. This is the headset that IMAX uses at its four dedicated VR centers—in Los Angeles, New York City, Shanghai, and Toronto—as well as in an enormous VR center that opened in Dubai at the end of 2017.

And one of the games you can play on that StarVR headset, at that Dubai arcade no less, perfectly illustrates why all this talk about haptics and tactile presence matters. (And before you ask: no, I didn’t get to go to Dubai. I played it in a conference room in a San Francisco hotel during a VR event. Then again, it was in the middle of a crazy heat wave, and the hotel’s air conditioning was clearly outmatched, so it wasn’t entirely unlike Dubai.)

When I walked into the room, all I knew was I’d be playing something called Ape-X, but I had no idea what it entailed—and I certainly didn’t anticipate the setup. On the ground was a hexagonal metal grate, maybe five feet across; a metal pillar rose out of the center of the grate. Before I knew it, one of the developers was helping me situate the giant headset over my eyes. Then the headphones went on. Then I saw two massive metal gauntlets floating in my headset, almost like metal Hulk hands. These were real controllers, as it turned out; when I reached out my hands, the developer helped slide them on.

Finally, the game started, and I understood what the grate and pillar were for. I was standing on a narrow circular catwalk ringing the uppermost spire of a skyscraper. I peeked over the edge and saw a line of cars a hundred feet below—but the cars were flying and were themselves hundreds of feet above the ground. I was impossibly high up, perched precariously in a situation that I legitimately have had nightmares about. Even crazier, I was clearly the titular Ape X (at the apex! Yeah, yeah, we all get the title), an escaped super-intelligent simian, a sci-fi update of King Kong. I had no Fay Wray, just my enormous gauntlets—which, thankfully, were armed with laser rifles and a small supply of guided missiles.

I’d need them all, too; for the next seven minutes, I had to fend off wave after wave of flying enemies. Some I could shoot out of the sky. Others came so close I had to knock them away, swinging my huge gauntlets. Still others came flying in from directions where I wasn’t looking, strafing me with their own laser fire and forcing me to shuffle around the catwalk, using the skyscraper’s spire as cover. When I wasn’t able to keep an arm hooked around it, I pushed my back against the spire as best I could, both for defense and for some semblance of stability in the chaos I was trying to navigate.

My lasers felled the last enemy. Finally, I thought, the game might end. And it did . . . but not before one more unexpected challenge. A hovercraft pulled up next to the catwalk, like a tiny floating metal barge. It had been sent to help me escape, but I’d have to board it, which meant not only peeling myself off the spire that had become my one friend in the world, but actually taking a step off the grated catwalk. Do it, my rational brain said to me. You know you’re in VR. You know that out there, the grate doesn’t end in nothingness—it ends in carpet. My rational brain may know what it knows, but my reptile brain only knows what it feels. And at that moment, my reptile brain was feeling like I was hundreds of feet in the air and my knees might never unlock.

So there I stood, the two sides of my brain deliberating the nature of presence for what felt like another seven minutes, until the rational half willed my leg to move. Slowly, I stepped one foot toward the hovercraft, my other leg bending to jump. And I jumped . . . directly into the waiting arms of the developer, who was making sure I didn’t go ass over teakettle into the expensive computer setup. “Nice work,” he said amiably. “Not a lot of people end up jumping.”

And honestly? If I were back in there right now, I might not be able to jump again. With the grate under my feet and the spire’s comforting solidity at my back, I was more there than I was anywhere else, rational brain or no. The things we can’t yet sense in VR—true tactile presence, with heft and texture and warmth—are the very things that intensify the feelings we do have there. The question remains, though, what that tactile presence might mean for the connections we form.

For that, we’re going to need to go on a blind date.

THE DATING GAME: HOW TOUCH CHANGES INTIMACY

John and Shelby met less than five minutes ago, but they’re already slow-dancing outside in the night air.

“Okay, I have a question for you,” Shelby says, her blonde hair curiously still as she turns her head. “Do you dance well because you’re a Southerner?”

“Actually, I grew up all over the country, so I don’t really consider myself a Southerner,” John says.

“Are you an army brat?”

“No, I grew up poor,” he says in a sing-songy voice, letting go of her waist and extending himself outward like Fred Astaire. “So we went all over the country to find different homes and different places to live.” His tone might be a little glib, given what he’s telling her, but it’s hard not to marvel at his candor, especially with someone who’s effectively a complete stranger. And whose feet are on backwards. And who doesn’t seem to mind that he’s hovering half a foot above the ground. Under the light of the Earth, it seems, anything is possible.

Wait, under the light of the what?

Yeah, you know this move by now, right?

Ha ha, clearly these are avatars and they’re dancing in VR, and what, now you’re going to talk about presence and intimacy and being on the moon?

Mostly, sure. But something else is going on here too. Because while John and Shelby are dancing on the moon, and eventually turning into aliens and astronauts and T. rexes and cardboard box people and cacti and skeletons and otherwise enjoying the novelty of imagination becoming flesh, their dance isn’t as clumsy and claylike as it looks. (“I like the way my hand goes through your entire neck,” John says to Shelby at one point.) In VR, their physical interactions are sub–Gong Show levels of terrible, but in the studio where they’re wearing headsets and motion-capture suits, they’re actually not un-graceful.

See, there’s meeting someone in real life, and then there’s meeting someone in VR. But what do you call it when it’s meeting someone in both at the same time? When tactile presence gets involved, and you can blend the virtual and the real not just in your eyes and ears and brain, but on your skin, how does that change the intensity of an experience? And, more important, how does that change the way a relationship evolves?

Let’s back up a bit. Like twenty-five years.

PRESENCE TENSE

In 1992, MIT Press began publishing Presence, the first academic journal that sought to unify an interdisciplinary exploration of what it called “teleoperator and virtual environment systems.” People with all kinds of backgrounds contributed: engineering, computer science, media, and the arts. (Fun fact: one of the articles in the very first issue was by Warren Robinett, whose name might be familiar to gamers as the person who created what’s widely considered the first “easter egg” in a video game.) And even then, in the early days of civilian VR, those people agreed on a general definition of presence. But people were just starting to figure out how to classify, let alone measure, such a slippery concept.

Some of the first to try were Bob Witmer and Michael Singer, two army researchers who had cooked up a thirty-two-item questionnaire for researchers to use in experiments. By asking volunteers to answer questions like “How natural did your interactions with the environment seem?” and “How distracting was the control mechanism?,” they hoped they would be able to begin to delineate the many factors that contribute to presence. After using that questionnaire in four experiments, Witmer and Singer broke presence down into four categories—users’ perceived control, sensory stimulation, distraction, and realism—and further identified a total of seventeen subcategories that constituted the building blocks of the phenomenon. “We do not claim to have identified all of the factors that affect presence,” they wrote in their conclusions, “nor do we fully understand the presence construct, but we believe we have made considerable progress.”

That questionnaire became widely cited in the field of presence research—as much for the responses it engendered from other researchers as for its influence. Not only did another questionnaire arise just about simultaneously with Witmer and Singer’s, but its authors used these questionnaires to show that they were an intrinsically flawed way to discuss presence: the questionnaires, they argued, couldn’t even distinguish between a real experience and a virtual one. (Granted, that might be the platonic ideal of VR, but in the late 1990s, when this war was happening, that “virtual experience” looked like something someone created in fifteen minutes using Microsoft Paint.)

This sort of discussion went on for years and bears mentioning here only to point out that its utility didn’t really extend beyond the psych lab. Methodological explorations of presence may have clinical applications, but they remained largely the province of the research community. However, as time went on, people started thinking about presence more holistically, taking it out of the lab and exploring it from the user’s perspective. Then in 2010, one study mashed up the world of video games and presence questionnaires to cook up an entirely new way of thinking about presence.

Three researchers—two professors at the University of Central Florida and one at a private research company in Maryland—took their inspiration not from the world of VR, but from the world of design. In recent years, the concept of “experiential design” had emerged as a bit of a buzzword, blending together disparate fields like psychology, brand strategy, and theater to create a multidisciplinary approach to design. The researchers used the underlying principles of experiential design to create a questionnaire that hewed to a new taxonomy of presence—five categories that they saw coming together to create presence for a user.

Sensory: The stimuli created by the hardware—visual display or haptic feedback

Cognitive: Mental engagement, like solving mysteries

Affective: The ability of a virtual environment to provoke a fitting emotional response

Active: Empathy or other personal connection to the virtual world

Relational: The social aspects of an experience

They then used the video game Mirror’s Edge to test their new system. (If you don’t remember that one, it was a first-person action game in which you race across the tops of skyscrapers; think of it as parkour on meth.) Despite the game not even being in VR—this was just a plain old Xbox 360 game played on a plain old television that wasn’t even high-def—the researchers’ hypotheses held, meaning that there was at least some merit to thinking about presence this way.

This is only one proposed classification of presence, of course; there are others as well. But clearly, it resonates with everything we’ve been talking about so far—especially the affective, active, and relational aspects. There’s one thing that’s missing from this system, though, and that’s the effect of tactile presence.

GLITCHING TOWARD BETHLEHEM

Ryan Staake has creativity in his genes. His father, Bob, is an accomplished illustrator who is responsible for some of the most iconic New Yorker covers in recent memory (a onetime St. Louis resident, Bob responded to the 2014 unrest in Ferguson, Missouri, by depicting the Gateway Arch as half black and half white, with an unbridgeable gap in the middle). So the fact that Ryan has spent his young career evolving should come as no surprise. A graphic designer by trade, he started out working on user interfaces at Apple and then founded a production studio and moved into making music videos. Some of them contained the seeds of VR, like 360-degree video or digitized versions of musical artists.

But if you know Staake’s work, it’s probably because of a fortuitous failure. One video he directed, an elaborate production for a song by Young Thug, fell apart when the rapper never showed up to the shoot—so instead Staake compiled all the footage he had shot, interspersed with title cards explaining what was supposed to happen. The result went viral, pulling in nearly thirty million views on YouTube and prompting coverage from seemingly every music publication on the internet.

In the aftermath, a screening series in LA invited Staake to come out from New York for an onstage interview; he couldn’t go, but he did the next best thing. Using a motion-capture suit and a scanning system that he and his colleague had cooked up, he recorded a VR video in which a digitized version of himself pretended to take questions from an imaginary live audience.

The result was hilariously imperfect: his avatar’s right hand seemed to have its middle finger permanently extended, and his iPad-scanned face with its unmoving mouth could easily have been the mayor of Uncanny Valley. His legs bowed out cartoonishly; his arms bent at the elbows like they were paper towel rolls. At the end, avatar-Staake walked over to a pristine digital sports car and got in—only to have his arm stick out through the middle of the closed door. It was a self-aware mess, and was all the more charming for it. Through all that lo-fi glitchery, you could see the promise of something marvelous.

That something marvelous came to fruition in Virtually Dating, a show that Staake and his company created with Condé Nast Entertainment that streams on Facebook’s video platform, Watch. Could there be a more 2018 sentence than that? (Just for the record, and what they call “full disclosure”: Condé Nast owns WIRED and is thus my employer; however, I met Staake long before I knew Condé Nast was involved with the “VR dating” project we first spoke about.) The genius of the show, a send-up of blind-date shows like Love Connection, is its kaleidoscopic treatment of reality: strangers meet for a date, but they do it in VR, before ever having met face-to-face. Even better, they’re sharing the same physical space as well. Just like in The VOID, they can reach out and touch each other.

This being the early days of this sort of experiment, plenty goes wrong. There’s the aforementioned arm through the neck, and the participants are constantly walking through virtual objects, their limbs flailing about in defiance of the motion-tracking modules clipped to their joints and feet. The show mines all of it for comedy, transporting each couple from one location to another (the beach! a zombie movie! ancient Egypt!) and shape-shifting each person so many times that Optimus Prime couldn’t keep up.

Each episode is only seven or eight minutes long, but they’re easily as entertaining as any half-hour dating show, mostly because the participants are so damn giddy about everything. VR’s current shortcomings make every meet-cute a meet-cuter, it seems. Even better, tactile presence becomes a literal conductor of intimacy. With the glitchiness kicking self-consciousness out the window, touching each other is simply a natural response. It’s innocent play, a moment without the weight of seduction—yet it still contains all the necessary ingredients.

“The biggest thing to me was simply seeing the power of shared presence,” Ryan Staake says. “It’s huge. It’s beyond the elements of social lubricant and relationship implications, and more that core sense of not feeling like this lonely single person in this vast digital world. You’re truly there with another person.” Maybe that’s why at the end of each episode, when the headsets come off and each person has to decide whether or not they want to go on a regular IRL date, you find yourself rooting for everyone. Dating is already hard; maybe VR can make it a little bit easier.

But don’t take my word for it. One Facebook user summed it up more concisely (and profanely) than I ever could. “This is a mile stone [sic],” he wrote in a comment on an early episode of Virtually Dating. “A staple point in history. when VR is perfected in 20 years, we will look back on this like we do when we watch old school episodes of blind date and laugh whilst playing uber diamond platinum ultra tinder and insta fuck each other.”

He’s wrong about one thing: it might not take twenty years. Everything else, though? That’s up for grabs. Speaking of which, it’s time we headed to the culmination of all this intimacy.