CHAPTER 13
Future Pitfalls—Manipulation, Takedowns, and the Experience Machine

Whatever transpires with Meta's plans for the Metaverse, it's also worth noting another, little-known goal that has been rumbling beneath the surface of Silicon Valley from the very beginning.

Back in 2014, shortly after Facebook (now Meta) announced its acquisition of virtual reality company Oculus VR for $2 billion, the firm's very young founder, Palmer Luckey, appeared onstage at the Computer History Museum in Mountain View, at a conference devoted to VR and creating the Metaverse. Someone in the audience asked Luckey why he spoke of a “moral imperative” to bring virtual reality to the masses.

“This is one of those crazy man topics,” Luckey began, “but it comes down to this: Everyone wants to have a happy life, but it's going to be impossible to give everyone everything they want.”

Instead, he went on, developers can now create virtual versions of real experiences reserved only for the wealthy. By which he meant people sitting in the room with him.

“It's easy for us to say, living in the great state of California, that VR is not as good as the real world,” he concluded, “but a lot of people in the world don't have as good an experience in real life as we do here.”

Instead of providing the poor of the world with a better material life, in other words, we might provide them with virtual versions.

“VR can make it so anyone, anywhere can have these experiences,” as Luckey put it to me later.

Luckey has since left the Metaverse industry, unceremoniously defenestrated by Mark Zuckerberg after the 2016 election, when it was revealed that Luckey was secretly funding a pro-Trump troll group. (Which is somewhat ironic, because after Luckey helped elect Trump, many miserable people really did yearn for an escape to a better world.)

But he is far from the only person in the VR/Metaverse space who has said similar things. It was actually John Carmack who first spoke about a “moral imperative” to develop the Metaverse, and as far back as 1999. The term doesn’t allude to Kant, but, as Carmack once told me by email, is a line from the ’80s movie Real Genius. (“So don't take it too terribly seriously.”)

But he is quite serious about the moral part:

“There is no technical reason why a VR headset needs to be much more expensive than a cell phone,” as he put it to me. “These are devices that you could imagine almost everyone in the world owning. This means that some fraction of the desirable experiences of the wealthy can be synthesized and replicated for a much broader range of people, and that is a reasonable characterization of the positive aspects of a technological civilization.”

A future where billions of the globe's most destitute queue up to enter Internet cafes in smoggy, sunbaked megalopolises so that they can briefly enjoy a virtual beach on a tropical island in the Metaverse strikes me as somewhat dystopian. But then again, many making VR and metaverse platforms would strenuously disagree with me.

As I cover in Chapter 1, it's a mistake to assume that simply because Snow Crash takes place in a dystopian future, the Metaverse itself is dystopic. Stephenson himself denied that interpretation even before cofounding a metaverse company himself.

But as the 21st century (if not the 20th century) has taught us, a full-blown dystopia need not emerge for us to still witness a low-pitched but constant level of preventable suffering and cruelty.

The most likely and imminent pitfalls for the Metaverse will not involve pacifying the poor with virtual experiences.

The far more plausible future will simply expand on what is happening now: people enjoying metaverse platforms and the possibilities in them—but not being treated fairly or openly by the companies who profit from them.

The Dangers of an Unregulated Metaverse

Starting in 2022, officials and staffers with at least two major world governments reached out to me, asking for advice on regulating metaverse platforms. Their findings may become public before you read this book. I was impressed by the level of knowledge they already had about the underlying technology.

All of which is a roundabout way of saying: Government regulation is coming.

So if you're a metaverse developer—or for that matter, if you consume, create, sell, or market content on metaverse platforms—you should anticipate that legislation directly targeting your interests will soon be up for serious consideration.

This should not be too much of a surprise. Nearly half of the United States Congress are Gen Xers or Millennials, people who grew up with games and online virtual worlds and are often personal users of them. As just one notable example: Alexandria Ocasio-Cortez is not only an avid gamer but uses platforms like Twitch and the virtual world Animal Crossing in her political outreach.

As for the specific regulation, I can say with some confidence (and in some cases know firsthand) that U.S. and EU government bodies are looking into the topics below, framed in terms of five starting questions:

  • Are metaverse companies doing everything possible to prevent harassment of their users, especially women and vulnerable minorities?
  • Are metaverse platforms that are predominantly used by children fully informing these children's parents about all the risks and costs involved with creating and selling content, mostly on behalf of a for-profit company—along with the risks of interacting with other users in general, especially adults who might be predators and other bad actors?
  • Should metaverse platforms officially allow users to form labor unions, so that they may collectively bargain with companies for a fair revenue share, not to mention having a say about the use of their personal data, especially if they are minors?
  • Are blockchain-based metaverse platforms accurately communicating all the risks involved with NFTs and virtual real-estate speculation?
  • Should the government impose restrictions on VR headset manufacturers to protect user privacy and data?

In my view, the answer key is currently: No, No, Yes, No, Yes.

Meanwhile, metaverse platforms themselves seem remarkably slow to confront these issues, let alone acknowledge them. Then again, they could always wait for Congress to subpoena their CEOs to face the flinty glare of AOC.

Several concerns soon to be scrutinized deserve an in-depth look:

User Manipulation Through Avatars

When Meta's potential to cause harm to metaverse users is discussed, most experts point to the way the company tracks and records user activity while wearing a Quest headset—giving Meta incredible, unprecedented (and easily exploitable) insights into the user's interests and desires, with the power to literally see what each consumer sees and what they gaze at most.

That is certainly a deep and valid concern, though the VR industry is fairly aware of it—and any danger this might pose is limited by the Quest's slow sales and relatively low user activity. If Zuckerberg's ultimate goal with the Quest and the company's metaverse technology is to create the ultimate consumer behavior tracking technology, it's falling short due to a notable shortage of actual consumers.

Less known is how avatars themselves might be abused to manipulate our behavior, even for decisions with global consequences. And the use of VR is not even required.

Between 2004 and 2008, Stanford academics led by Jeremy Bailenson and Nick Yee were able to alter a group of volunteers’ preferences for the upcoming U.S. presidential election.

They were able to do that simply by showing these volunteers photos of what appeared to be the candidates’ faces—Hillary Clinton, George W. Bush, John Kerry, and so on—that had been altered in photo editing software. The researchers subtly combined the faces of presidential candidates with photos of the volunteer test subjects themselves.

FIGURE 13.1

Morphed candidate photographs. (Bailenson et al., 2008, Oxford University Press)

Yee publicly reported the terrifying results in the run-up to the 2020 election, when there was concern that Putin's Russia was once again manipulating U.S. social media on behalf of Donald Trump.

“We found,” Nick announced then, “that people are more likely to vote for candidates that look more like them (even in high-stakes/information races like the 2004 Kerry/Bush election), and this photo manipulation was largely undetectable.”

Doing this changed the volunteers’ aggregate vote preference by up to 9 percent, Nick told me—more than enough to change the outcome of most elections.

Worse, very few volunteers were even aware of this manipulation.

“With lesser known candidates,” as Nick recounts to me now, “volunteers said, ‘This guy kind of looks like my pastor or my uncle.’ But very few of them said, ‘Oh, you took my photograph and you blended it with this political candidate.’ Because at a 20 percent [morphing] ratio, it really was not detectable on the conscious level…. For a lot of these manipulations, they can be so subtle, and yet still have a measurable effect.”
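That 20 percent figure is worth pausing on, because the underlying technique is strikingly simple. A morph at that ratio is, at its core, an alpha blend: every pixel of the final portrait is 80 percent candidate and 20 percent viewer. Here is a minimal sketch in plain Python of that blending arithmetic (the function name and sample pixel values are illustrative, not the study's actual code, which also aligned facial features before blending):

```python
def morph(base_pixels, overlay_pixels, ratio=0.20):
    """Alpha-blend two equal-size images, pixel by pixel.

    A ratio of 0.20 mixes 20% of the overlay (e.g., a viewer's photo)
    into the base (e.g., a candidate's portrait) -- the level the
    Stanford researchers found was largely undetectable.
    """
    return [
        tuple(round((1 - ratio) * b + ratio * o) for b, o in zip(bp, op))
        for bp, op in zip(base_pixels, overlay_pixels)
    ]

# Toy two-pixel "images" as (R, G, B) tuples:
candidate = [(200, 180, 160), (90, 80, 70)]
viewer = [(100, 120, 140), (50, 60, 70)]

print(morph(candidate, viewer))  # → [(180, 168, 156), (82, 76, 70)]
```

Real morphing software adds feature alignment so eyes, nose, and mouth line up before blending, but the manipulation itself is no more exotic than this weighted average, which is partly why it is so cheap to deploy at scale.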

It's a dark application of the Proteus effect that I discuss in the Introduction:

If an attractive avatar unconsciously encourages the user to be more confident, an avatar that seems to resemble a blood relative can unconsciously encourage the user to trust it more—even when that trust can be exploited.

This experiment was conducted nearly two decades ago. Since then, new morphing technology (and its sophisticated, AI-powered cousin, the deepfake) has become even more powerful, with the ability to map a user's photo onto the appearance of a virtual world avatar. (And Meta, it need not be said, likely has the greatest repository of people's photos—voluntarily uploaded to Facebook and Instagram—in the entire world.)

“[It would be] really easy for them to take the photographs stored in Facebook and then reuse them in the VR platform,” Nick Yee tells me. “So we're living in this very different age … [it potentially] opens an interesting door to them, dynamically creating virtual salesmen that adopt 25 percent of your facial structure. They look similar to you but not identical to you.”

This concern still holds with Meta's current approach to avatars, which are designed to resemble the user in a cartoon form. Even though the overall look is nonrealistic, these avatars still maintain a core resemblance to their user. And the fact that they are cartoonish opens up another potential vector for manipulation:

“Once you move into the more cartoony faces,” Nick tells me, “you're more likely to come up with attractive faces.” Avatars of this kind tend to resemble babies in their attractiveness: “Your eyes are wide, your skin is smoother … so that sets into motion other kinds of psychological effects.”

Manipulation of User Data and a New Glass-Steagall Act

All this potential for abuse is careening toward us at a time when social media algorithms have already created an alternate perception of the world where, say, a teeming mass of rioters is encouraged through Facebook to storm the U.S. Capitol.

“Reality's being warped for a lot of people even without VR,” as XR technologist Avi Bar-Zeev puts it to me, “but as soon as you are literally able to warp people's reality and change [it], all bets are off in terms of manipulations.”

Bar-Zeev has been a relatively lonely prophet on this point, as Silicon Valley embraced VR and then casually shifted its obsession to the Metaverse, rarely discussing the implications of a technology that can track the windows to the soul, collecting data on what items and people a VR user looks at for the longest time—and then alter what is being looked at, according to their soul's desire.

“So if a company wants to cycle through 100 iterations of cars to figure out which is the exact car that I really like, I just need to spend time in the [virtual] world. The more time I spend, the closer the system comes to showing me the exact car that I want. And I accidentally give them feedback that lets them know when they're ready.

“They don't know what thoughts you're having, but they can tell what you're paying attention to and how you're reacting to it.”

Dire warnings of impending doom can get tiresome without a specific call to action, so I ask Avi what he would say if a leading senator asked him to frame his concerns in terms of an enforceable law.

“We need a Glass-Steagall Act for data,” he tells me immediately. That's the former U.S. law that prohibited banks from also operating as investment firms. “We repealed it, and we regret it, right?”

The law was largely repealed in 1999, and its elimination was a core cause of the financial crisis of 2007–2008 that nearly led to a global depression.

So, yes.

“What we need,” Bar-Zeev goes on, “is a separation from the parts of the companies that collect the data and use it for practical purposes like improving the quality of the world.”

He means VR/virtual world user-interaction data that helps a company track down performance problems and system bugs.

“So that part of the company has access to data. And maybe it needs to be a whole other company that sells all the ads. They shall not talk. They shall not share information. That's the way I would write it: The information must be firewalled so that the ads can't use it.”

It's not enough, he adds, to ask that metaverse companies strip the user's names from the tracking data.

“A lot of people make this mistake. They say, well, as long as we anonymize it, we're fine. But no, it's not about anonymizing it. It's about how effective it is. You don't have to know who I am, if you can build a profile of me on the fly in 10 minutes.”

So does that mean breaking up Meta from its XR division?

“I would be okay if they can technically prove that the data does not flow between the biometric collection and the advertising [division],” Bar-Zeev speculates. “If they can firewall it in such a way that literally the user has the keys to the encrypted data for anything that's saved, and the company does not have the key, and it's never sent to the cloud. And there's no possibility of that data leaking over to the other part of the company. That would be a good start.

“But if they can't prove that, I think you have to separate the financial concerns just like we did with Glass-Steagall.”

Corporate Encroachment on User Creativity

While I was researching VRChat for this book, an interview subject decided I should upgrade my avatar and instantiated a purple teleport portal in the air.

It instantly transported us to an underground complex displaying custom-made avatars free for the taking—but since nearly every option is a replica of characters from Disney and Warner Brothers’ cartoons, it might as well have been called “Joe Bob's Copyright Infringement Emporium.” (Recall also the hundreds of user-made re-creations of Netflix's Squid Game in Roblox.)

These are just brief glimpses below the surface of the IP infringement iceberg bearing down on the Metaverse across multiple platforms.

By definition, a metaverse platform enables user-generated content. By the inevitability of human nature on the Internet, much of that content will actually be fan tributes or outright knockoffs of highly protected intellectual property owned by Disney and other giant conglomerates.

To put it another way: The lawyers are coming for the Metaverse, and they're bringing avatars.

This is not hyperbole. Due to Second Life's prominent public profile back in the day, lawyers regularly scanned the virtual world, often sending terse cease-and-desist messages to everyday users. In 2009, for instance, lawyers for the Frank Herbert Estate sent takedown orders to a small user community that had created a virtual desert to engage in roleplay based on Herbert's Dune novels.

The mystery is why the IP lawyers have not yet struck in concerted force.

To some extent, I imagine they are largely looking the other way while their media clients consider partnerships with or even outright acquisition of these metaverse platforms. At some point, however, this benign neglect will end, and the DMCA takedown requests will start flying.

Brands should always keep this in mind, whenever planning their content strategies. User creators should also learn their rights—and begin engaging with organizations like the Electronic Frontier Foundation.

“We're definitely very adamant about fair use and fan-made content to be protected as a form of expression,” Rory Mir, Associate Director of Community Organizing at EFF, tells me. “If someone wants to make an Iron Man avatar—yeah, that's something that should be protected.”

My hope (and it may be in vain) is that metaverse companies preemptively work with major media holders to carve out fair use principles that allow fan-made tributes to their favorite IPs under reasonable parameters.

Media companies themselves should consider enlightened ways of offering official editions of their work to metaverse creators—and encourage fair use creativity around their IPs.

Before Disney acquired Marvel, for instance, the comics giant did indeed distribute copies of an official Iron Man avatar outfit in Second Life, part of the promotion for the first Iron Man movie in 2008. And rather than release a static version, the marketing company made it fully editable, so users could create their own customized versions and take them on zany adventures in the virtual world.

To judge by the results, this gamble has paid off, leading to a large number of Iron Man–related screenshots and machinima shared on social media, amplifying promotion for the movie.

“The challenge for lots of corporations and established companies is whether they end up defaulting to the controllable level of where the user can modify the outfits but can't really create [with them],” VC Hunter Walk observes. He recommends the Iron Man route, “as opposed to some of their inclinations around protecting and extending the IP rights.”

Those are just three areas that deserve immediate attention. At some point, however, we will need to squarely face the Experience Machine problem.

Confronting the Experience Machine

When he was still an engineer at Linden Lab in the mid-2000s, Jim Purbrick would often give presentations about Second Life at various European tech conferences. He began to hear the oddest response from attendees:

“On more than one occasion,” as he put it to me, “I was left flat-footed by enthusiastic [SL users] thanking Linden for building Second Life. So that they would have a virtual world they could upload their consciousness to.

“It was pretty mind-blowing and more than a little terrifying, given that I'd seen the code.”

I have also spoken with people like this in Second Life and other virtual worlds—confidently telling me that they hope these digital realms can become realistic and immersive enough that before they die, they'll be able to hook electrodes into their brain and digitize themselves into this artificial afterlife.

In an influential thought experiment from 1974, Harvard philosopher Robert Nozick argued against hedonism by asking readers to imagine an “experience machine”:

  • Suppose there was an experience machine that would give you any experience you desired. Super-duper neurophysicists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain.

“Would you plug in?” he asks the reader. After all, “What else can matter to us, other than how lives feel from the inside?”

To Nozick, it was patently obvious we would reject this vision. He believed our rejection proved there was something besides mere experience that is fundamental to human existence.

“We learn that something matters to us in addition to experience,” wrote Nozick, “by imagining an experience machine and then realizing that we would not use it.”

But some 50 years later, Nozick's device is no longer a fanciful thought experiment. Many leaders in metaverse development believe they are now building what is, for most intents and purposes, Robert Nozick's Experience Machine.

And strikingly, many of them explicitly reject Nozick's conclusion that there is more to a happy life than pleasing simulation. And not just as a philosophical stance but as a serious business proposition targeting billions of consumers, backed by some of the world's largest corporations—either to give the poorest of us the simulation of a better life or to create immortality, or both.

If they are right, the implications for our culture, economy, even the world itself promise to be profound in ways we have barely begun to contemplate.

In fairness, VR and the Metaverse are only the latest technologies that simulate human experience.

“I think you are making a false distinction between real and virtual life,” as John Carmack put it to me once. “Activities in VR are aspects of your real life, just like the movies you watch and the books you read. If your experiences in VR are soothing and happy, they contribute to a better ‘real’ life.

“If people are having a virtually happy life, they are having a happy life. Period. If someone wanted nothing more in life than to read books, providing them with a massive library is not doing them a disservice, even if that means that they are less likely to be involved in other activities.”

This strikes me as well-meaning sophistry, since no one is arguing that books in themselves can or should be a sufficient replacement for the material and social needs a person lacks.

Carmack and Luckey are hardly alone among pioneers of the virtual reality/metaverse business in well and truly believing that their technology is an adequate pacifier for the underprivileged.

“In a sense some of the things he's saying are mild in relation to what some of my friends in Silicon Valley say,” as VR pioneer Jaron Lanier once told me, when I asked for his thoughts on Palmer Luckey's vision of a virtual utopia for the poor. “I hear a lot of talk that people who are rich and successful will be immortal and everyone else will get a simulated reality. And that's the kind of thing that's really evil that might lead to a violent reaction.”

To judge by the heavy sigh when I bring up this topic with Matthew Ball, The Metaverse author has also heard similar pronouncements many times.

“I think that's a depressing and sad argument to make. There are so many ways in which we can understand 3D simulation and the Metaverse as solving or helping to solve current problems—access to opportunity, to jobs, to education. But those who believe that it is the solution, that it's sufficient … I really do find it offensive, insensitive, and ignorant.

“And that's because these are societal problems; they're human problems. They're questions of me-versus-them or us-versus-you. And the idea that, ‘well, we've now got technology that's good enough for the real thing, and thus, let's give it to them and move on,’ is disrespectful and wrong.”

On this question I have a decided bias, having grown up in Hawaii. By day, the sun marinates the earth, creating heat waves infused with the scent of hibiscus, a humid perfume that follows you even into the most urban areas of Honolulu. Outside my family's home at night, a warm, softly fierce wind swaddles you, bringing with it the ambient distant roar of ocean waves, gently crashing forever.

I've experienced many simulations of my island home on multiple metaverse platforms. While many are visually impressive, none come close to approximating the lived essence of Hawaii. No simulation of the empirical senses can capture it. There is no haptic platform capable of replicating aloha, a concept far more resonant to islanders than tourist brochures suggest, and there is little corporate impetus to re-create the pangs of past colonialism and moral debts yet unpaid. So it is not even conceivable for me to imagine a desirable Metaverse whose moral imperative is to simulate paradise for the world's poor. (Many of whom, sad to say, are themselves native Hawaiian.)

Far better, I believe, to fight for a Metaverse that benefits all of us now, connecting us across borders and accidents of birth to other people. Because in the end, the simulations of the world itself only matter if there is a community to share them with.

That future is possible. But only if we draw from the often harsh lessons we have learned so far.