CHAPTER SIX

Cause Six: The Rise of Technology That Can Track and Manipulate You (Part One)

James Williams told me I had made a fundamental mistake in Provincetown. He was a senior Google strategist for many years, and he left, horrified, to go to Oxford University to study human attention and to figure out what his colleagues in Silicon Valley had done to it. He told me a digital detox is “not the solution, for the same reason that wearing a gas mask for two days a week outside isn’t the answer to pollution. It might, for a short period of time, keep, at an individual level, certain effects at bay. But it’s not sustainable, and it doesn’t address the systemic issues.” He said our attention is being deeply altered by huge invasive forces in the wider society. Saying the solution is primarily to personally abstain is just “pushing it back onto the individual,” he said, when “it’s really the environmental changes that will really make the difference.”

For a long time I didn’t fully understand what this meant. What would changing our environment entail, when it came to attention, if not each of us trying to change our own personal behavior? The answer slowly became clear to me when I met with many people who had designed crucial aspects of the world in which we now live. In the hills of San Francisco and the hot, arid streets of Palo Alto, I realized that there are six ways in which our technology, as it currently works, is harming our ability to pay attention—and that these causes are united by one deeper underlying force that needs to be overcome.

One of the first people to guide me on this journey was Tristan Harris, another former Google engineer, who, after I had been interviewing him for several years, became globally famous for appearing in the viral Netflix documentary The Social Dilemma. That film explored a whole range of ways in which social media, as it is currently designed, can be destructive. I wanted to tease out something the film largely didn’t explore—its effect on our focus. To grasp it, I think it helps to know Tristan’s own story, and what he witnessed at the heart of the machine that is repatterning the world’s attention.


In the early 1990s, in the town of Santa Rosa, California, a little boy with a bowl haircut and a bright golden bow tie was learning magic. Tristan was seven years old when he first tried out one of the most basic tricks. He would ask you to hand him a coin, and then—poof! It was gone. After he mastered more tricks, he put on a magic show for his elementary-school class, and then—to his glee—he was selected to go to a magic camp out in the hills, where he was taught for a week by professional magicians. It seemed to him like a real-life Jedi training camp.

He discovered, at this young age, the most important fact about magic. He explained years later: “It’s really about the limits of attention.” The job of a magician is—at heart—to manipulate your focus. That coin didn’t really vanish—but your attention was somewhere else when the magician moved it, so when your focus comes back to the original spot, you’re amazed. To learn magic is to learn to manipulate someone’s attention without them even realizing it—and once the magician controls their focus, Tristan realized, he can do what he wants. One of the things that he was taught at camp is that a person’s susceptibility to magic has nothing to do with their intelligence. “It’s about something more subtle,” he said later. It’s “about the weaknesses, or the limits, or the blind spots, or the biases that we’re all trapped inside of.”

Magic, in other words, is the study of the limits of the human mind. You think you control your attention; you think that if somebody messes with it, you will know, and you’ll be able to spot and resist it right away. But in reality, we are fallible sacks of meat, and we are fallible in predictable ways that can be figured out by magicians and messed with.

As he got to know better and better magicians—eventually befriending one of the best in the world, Derren Brown—Tristan learned something he found both remarkable and disconcerting. It is possible to manipulate your attention to such a degree that a magician can, in many cases, turn you into his puppet. He can make you choose whatever he wants you to choose, while all along you think you’re simply using your own free will. When Tristan first said this to me, I thought he was overstating his case, so he introduced me to another magician friend, James Brown. Tristan told me James would show me what it meant. I’ll give you one example. When we sat together, James showed me a standard pack of cards. He said: See? Some of them are red, and some of them are black, and they are all mixed up together. Then he turned the cards so the colors were facing toward him, and I couldn’t see them anymore. He told me he was going to get me to sort them neatly into two piles—one black, one red—without me ever getting to look at the color of the cards for myself. It was, obviously, impossible. How could I sort cards I couldn’t see?

He told me to look into his eyes, and—entirely using my own free will—to tell him whether to put the next card into a pile on the left, or a pile on the right. So I gave him my orders—left, left, right, and so on—according to what I was confident were my own random whims. At the end, he lifted up the piles of cards and showed them to me. The red cards were neatly in one pile; the black cards were in the other.

I was baffled. How did he do it? He eventually told me he had been subtly guiding my choices. He said he would do it again, a little more crudely this time, to see if I could spot it. Finally—and he had to be pretty blatant—I saw it. When he told me to pick the next card, he indicated very slightly with his eyes to the left or to the right—and I always chose the way he was guiding me to, without ever being conscious of it. Everyone always does, he told me. Later, Tristan explained to me that this is a core insight of magic—you can manipulate people and they don’t even know it’s happening. They will swear to you that they made their own free choices—as I would have about those cards.

One morning, in his office in San Francisco, Tristan leaned forward and said to me: “How does a magician do their work? It works because they don’t have to know your strengths—they just have to know your weaknesses. How well do you know your weaknesses?” I wanted to believe I understood my weaknesses very well, but Tristan shook his head gently. “If people did know their weaknesses,” he said, “then magic wouldn’t work.”

Magicians play on these weaknesses to delight and entertain us. As Tristan grew up, he became part of another group of people who were figuring out our weaknesses to manipulate us—but they had very different goals.


It was in his first year at Stanford University, in 2002, that Tristan first heard whispers of a course on campus that took place in a mysterious-sounding place known as the Persuasive Technology Lab. It was, the rumors went, a place where scientists were figuring out how to design technology that could change your behavior—without you even knowing you were being changed. In his teens Tristan had become obsessed with coding, and he had already been an intern at Apple after his freshman year at Stanford, designing a piece of code that is still used in many of your devices today. This secretive and much-discussed course, he learned, was about taking everything scientists had discovered over the twentieth century about how to change other people’s behavior, and figuring out how the students could integrate these forms of persuasion into their code.

The course was taught by a warm, upbeat Mormon behavioral scientist in his forties named Professor B. J. Fogg. At the start of each day, he would take out a stuffed frog and a cuddly monkey and introduce them to the class, and then he would play on his ukulele. Whenever he wanted the group to break or wrap up, he would tap on a toy xylophone. B.J. explained to students that computers had the potential to be far more persuasive than people. They could, he believed, “be more persistent than human beings, offer greater anonymity,” and “go where humans cannot go or may not be welcome.” Soon, he was sure, they would be changing the lives of everyone—persuading us persistently, throughout the day. He had previously worked on a course dedicated to “the psychology of mind control.” He assigned to Tristan and his other students a small mound of books that explained hundreds of psychological insights and tricks that had been discovered about how to manipulate human beings and to get them to do what you want. It was a treasure trove. Many of them were based on the philosophy of B. F. Skinner, the man who, as I had learned earlier, had found a way to get pigeons and rats and pigs to do whatever he wanted by offering the right “reinforcements” for their behavior. After falling out of fashion for years, his ideas were back with full force.

“It really woke up the magic part of me,” Tristan told me. “I was like—oh wow, there really are these invisible rules that govern what people do. And if there are rules that govern what people do, that’s power. That’s like Isaac Newton discovering the laws of physics. It felt like somebody’s showing me the code—the code of how you can influence people. I remember the experience of sitting there in the graduate area of campus reading those books over the weekends, and underlining furiously these passages, and just being like—oh my God, I can’t even believe that works.” He was so intoxicated by the excitement of it that, he said, “I will admit, I don’t think the ethical bells were firing in my brain yet.”

As part of the class, he was paired with a young man named Mike Krieger, and they were tasked with designing an app. Tristan had been thinking for a while about something named “seasonal affective disorder”—a condition where, if you are stuck in gloomy weather for a long time, you are more likely to become depressed. How, they asked, could technology help with that? They came up with an app called Send the Sunshine. Two friends would choose to be connected through it, and it would track where they both were and the online weather reports for their locations. If the app realized that your friend was starved of sunshine, and you had some, it would prompt you to take a photo of the sun and send it to him. It showed that somebody cared; and it sent some sunshine your way. It was sweet, and simple, and it helped to spur Mike and another person in the class, Kevin Systrom, to think about the power of sharing photographs online. They were already thinking about another of the key lessons of the class, taken from B. F. Skinner: build in immediate reinforcements. If you want to shape the user’s behavior, make sure he gets hearts and likes right away. Using these principles, they launched a new app of their own. They named it Instagram.

The class was filled with people who were going to use the techniques B.J. taught to change how we live our lives, and B.J. was quickly dubbed “the millionaire maker.” But something was starting to nag at Tristan. After a while, he noticed he had become obsessed with checking his email. He would do it repetitively, mindlessly, again and again, and he felt his attention span was beginning to atrophy. He realized, he told me, that the email app he was using “operates on a bunch of different levers, and it’s very powerful, and it sucks, and it’s super-stressful, and it ruins hours and hours of people’s lives.” He had been learning in the Persuasive Technology Lab how to hack people, but he came to ask a disconcerting question: Am I somehow being hacked by other tech designers myself? He wasn’t yet sure how they might be doing it—but he began to have a strange feeling about it. B.J. taught his students that they should only use these powers for good, and he laced ethical debates throughout his course. Yet Tristan would soon start to wonder: Were these secrets, this code, actually being used ethically in the real world?

In the final class Tristan attended, all the students discussed ways in which these persuasive technologies could be used in the future. One of the other groups had come up with an eye-catching plan. They asked: “What if in the future you had a profile of every single person on earth?” As a designer, you would track all the information they offer up on social media and build a detailed profile of them. It’s not just the simple stuff—their gender, or age, or interests. It would be something deeper. This would be a psychological profile—figuring out how their personality works, and the best ways to persuade them. It would know if the user was an optimist or a pessimist, whether they were open to new experiences or prone to nostalgia—it would figure out dozens of their characteristics.

Think, the class wondered out loud, about how you could target people if you knew this much about them. Think about how you could change them. When a politician or a company wants to persuade you, they could pay a social-media company to perfectly target their message just for you. It was the birth of an idea. Years later, when it was revealed that the campaign for Donald Trump had paid a company named Cambridge Analytica to do exactly that, Tristan would think of that final class at Stanford. “This was the class that freaked me out,” he told me. “I remember saying—this is horribly concerning.”


But Tristan had a deep belief in the power of tech to do good. So he took what he had learned at Stanford and designed an app with a straightforward positive purpose. He was trying to stop one of the ways the web screws with our attention. Let’s say you are checking out the CNN site, and you start to read a news story about Northern Ireland, a topic you don’t know much about. Normally, you will then open a new window and begin googling for info—and before you know it, you vanish down a rabbit hole and emerge half an hour later, lost in articles and videos about a totally different topic (usually cats playing the piano). Tristan’s app was designed so that in this situation, you could do something different: you could highlight any phrase (say, “Northern Ireland”), and it would pull up a simple pop-up window giving you a straightforward summary of the topic. No clicking away from the site; no rabbit holes. Your attention is preserved. The app did well—it started to be used by thousands of websites, including the New York Times, and quite soon, Google made a substantial offer to buy the whole thing and for Tristan to come and work for them. They told him it was so he could integrate it into their web browser, Chrome, and make people less distracted. He jumped at the chance.
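To make the idea concrete, here is a minimal, hypothetical sketch of the highlight-and-summarize pattern described above, written in TypeScript for the browser. It is not Tristan’s actual product, and the endpoint name (/api/summary) is invented purely to illustrate the principle: the answer appears in place, so your attention never has to leave the page.

```typescript
// Hypothetical sketch, not Tristan's real app: when the reader highlights a
// phrase, fetch a short summary and show it in a small box on the same page,
// instead of sending them off to a new tab.
async function lookupSummary(phrase: string): Promise<string> {
  // Invented endpoint standing in for whatever reference source an app like
  // this might use.
  const res = await fetch(`/api/summary?q=${encodeURIComponent(phrase)}`);
  return res.text();
}

document.addEventListener("mouseup", async () => {
  const phrase = window.getSelection()?.toString().trim();
  if (!phrase) return;

  const box = document.createElement("div");
  box.style.cssText =
    "position:fixed;bottom:1rem;right:1rem;max-width:20rem;" +
    "background:#fff;border:1px solid #ccc;padding:0.5rem;";
  box.textContent = await lookupSummary(phrase); // summary appears in place
  document.body.appendChild(box);                // no new tab, no rabbit hole
});
```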

It is hard to convey, Tristan believes, quite what it was like to go to work for Google at that moment in history, in 2011. Every day, the company he worked for—from its base, the Googleplex in Palo Alto—was shaping and reshaping how one billion people navigated their way through the world: what they got to see, and what they didn’t. He told one audience later: “I want you to imagine walking into a room. A control room, with a bunch of people, a hundred people, hunched over a desk with little dials—and that that control room will shape the thoughts and feelings of a billion people. This might sound like science fiction, but this actually exists right now, today. I know, because I used to work in one of those control rooms.”

Tristan was assigned for a while to work on the development of Gmail, Google’s email system—precisely the app that was driving him wild, and that he suspected might be using some manipulative tricks he hadn’t yet figured out. Even as he worked on it, he would obsessively check his email, making him less focused, and whenever he looked at a new message, he found it took him a long time to get his mind back to where it had been before. He started trying to think through how you might design a system of email that was less prone to nuking your attention—but whenever he tried to discuss this idea with his colleagues, the conversation didn’t seem to go far. At Google, he quickly learned, success was measured, in the main, by what was called “engagement”—which was defined as minutes and hours of eyeballs on the product. More engagement was good; less engagement was bad. This was for a simple reason. The longer you make people look at their phones, the more advertising they see—and therefore the more money Google gets. Tristan’s co-workers were decent people, struggling with their own tech distractions—but the incentives seemed to lead only one way: you should always design products that “engage” the maximum number of people, because engagement equals more dollars, and disengagement equals fewer dollars.

With each month that passed, Tristan became more startled by the casualness with which the attention of a billion people was being corroded at Google and the other Big Tech companies. One day he would hear an engineer excitedly saying: “Why don’t we make it buzz your phone every time we get an email?” Everyone would be thrilled—and a few weeks later, all over the world, phones began to buzz in pockets, and more people found themselves looking at Gmail more times a day. The engineers were always looking for new ways to suck eyeballs onto their program and keep them there. Day after day, he would watch as engineers proposed more interruptions to people’s lives—more vibrations, more alerts, more tricks—and they would be congratulated.

As the number of people using Google and Gmail continued to spike up, Tristan started to ask his colleagues: “How do you ethically persuade two billion people’s minds?…How do you ethically structure two billion people’s attention?” But instead, he found that most other people in the company were being pushed to ask simply, “How can we make this more engaging?” And that meant more attention-sucking, more interrupting—on and on it went, with better techniques being discovered every week. One day, when we were walking in San Francisco, Tristan said to me: “Things look pretty bad from the outside, but when you’re on the inside, things can look even worse.” Tristan was starting to realize: It’s not your fault you can’t focus. It’s by design. Your distraction is their fuel.

After working intensively on the Gmail team, Tristan saw that when it came to questioning what they were doing to people’s attention, “the conversation was not happening.” He looked out across his friends now working in every part of Silicon Valley, and this grab-and-raid approach to our focus was being taken in almost all the companies they worked in. “What started to really concern me over the years,” he told me, “was just watching my friends who had originally gotten into this business because they thought they could make the world better, [and now] were caught in this arms race to manipulate human nature.”

To pluck one example out of dozens Tristan could offer, his friends Mike and Kevin had launched Instagram, and after a little while, “they added these filters, because it was a cool thing. So you could take a photo, and just have it look artistic instantly.” It didn’t cross their minds, he’s sure, that it would start a race with Snapchat and others to see who could “provide better beautification filters”—and that this would, in turn, change how people thought of their own bodies so much that today there’s a whole category of people who undergo surgery so they can look more like their filters. He could see that his friends were setting in motion changes that were transforming the world in ways they couldn’t predict or control. “The reason we have to be so careful about the way that we design technology,” he said, is that “they squeeze, they squish, the entire world down into that medium—and out the other end comes a different world.”

But here was Tristan, at the center of the machine unleashing these transformations, and he could see that behind closed doors, the dials in the control room were being set to ten.


After a few years at the heart of the Googleplex, Tristan couldn’t take it anymore, and he decided to leave. As a final gesture, he put together a slide-deck for the people he worked with, to appeal to them to think about these questions. The first slide said simply: “I’m concerned about how we’re making the world more distracted.” He explained: “Distraction matters to me, because time is all we have in life…. Yet hours and hours can get mysteriously lost here.” He showed a picture of a Gmail inbox. “And [on] feeds that suck huge chunks of time away here.” He showed a Facebook feed. He said he was worried that the company—and others like it—was inadvertently “destroy[ing] our kids’ ability to focus,” pointing out that the average child between the ages of thirteen and seventeen in the U.S. was sending one text message every six minutes they were awake. People were, he warned, living “on a treadmill of continuous checking.”

He asked: We know that interruptions cause a deterioration in people’s ability to focus and think clearly—so why are we ramping up the interruptions? Why are we finding better and better ways to do it all the time? “Think about that,” he told his colleagues. “We should feel an enormous responsibility to get this right.” All humans have natural vulnerabilities, and instead of exploiting those vulnerabilities—like a malign magician—Google should be respecting them. He suggested some modest changes as a place to start. Instead of notifying someone every time they have a new email, he suggested, we could notify them once a day, in a batch—so it’d be like getting a newspaper in the morning, instead of constantly following the rolling news. Every time we prompt somebody to click over to a new photo their friend has posted, we could warn them—on the same screen—that the average person who clicks on a photo is pulled away for twenty minutes before they get back to their task. We could tell them: You think it’ll only take a second, but it won’t.
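The batching idea itself is simple enough to sketch. What follows is a hypothetical TypeScript illustration, not anything Google built: new emails are queued silently as they arrive, and the alerts are delivered once, at a fixed time, like that morning newspaper.

```typescript
// Hypothetical sketch of the "once-a-day batch" proposal. Instead of alerting
// on every message, remember it quietly and deliver everything together later.
const queued: string[] = [];

function onNewEmail(subject: string): void {
  queued.push(subject); // no buzz, no badge: just remember it
}

function deliverDailyDigest(notify: (body: string) => void): void {
  if (queued.length === 0) return;
  notify(`You have ${queued.length} new messages:\n` + queued.join("\n"));
  queued.length = 0; // clear the batch until tomorrow
}

// The design choice is where notify() gets called: once a day from a scheduled
// deliverDailyDigest(), rather than from inside onNewEmail() on every arrival.
```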

He suggested giving users a chance to pause every time they click to do something potentially seriously distracting, to check: Are you sure you want to do this? Do you know how much time it will take from you? “Humans make different decisions when we pause and consider,” he said.

He was trying to give his colleagues a sense of the weight of the decisions they made every day: “We shape more than eleven billion interruptions to people’s lives every day. This is nuts!” The people sitting around you in the Googleplex, he explained, control more than 50 percent of all the notifications on all the phones in the whole world. We are “creating an arms race that causes companies to find more reasons to steal people’s time,” and it “destroys our common silence and ability to think.” He asked: “Do we really know what we’re doing to people?”

This was an almost insanely bold thing to do. At the heart of the machine that was changing the world, here was a smart and talented but fairly junior engineer, still only twenty-nine years old, saying something that directly challenged the whole direction of the company. It would be like a junior exec in 1975 standing up in front of the whole of Exxon and telling them that they were responsible for global warming by showing them images of the melting of the Arctic. Everyone in Silicon Valley was scrambling to get into and suck up to Google. But here was Tristan, with the ability to stay at its heart forever and make a lot of money, writing what seemed to be his own professional death certificate, because he believed somebody, somewhere, had to say something.

He shared his slide-deck with his colleagues, and went home, depressed. Then something unexpected happened.


With each hour that passed, more and more Google employees shared Tristan’s slide-deck. The next day, he was inundated with messages from within the company enthusing about it. It turned out he had tapped into a latent mood. Just because you design these products, it doesn’t mean you are any more insulated than anyone else from becoming hooked on them. The workers at the Googleplex could feel this tsunami of distractions hitting them too. Many of them wanted to have a serious conversation about what they were doing to the world. People were drawn in particular to the question Tristan had put to them: “What if we designed [our products] to minimize stress and create calmer states of mind?”

There was some pushback too. A few of his colleagues said that every new technology brings with it a panic where people say it’ll trash the world—after all, Socrates said writing things down would ruin people’s memories. We were told that everything from the printed book to television would trash the minds of the young, but here we are, and the world survived. Some others responded from a libertarian perspective, saying that what he was suggesting would invite government regulation, which they believed was contrary to the whole spirit of cyberspace.

Tristan’s presentation caused such a ruction within Google that he was asked to stay in a special new position, created just for him. They offered him the role of being Google’s first “design ethicist.” He was thrilled. Here was a chance to think through some of the most challenging questions of our time, in a place where—if he could get people to listen—he could make an enormous difference. For the first time in a long time, he felt optimistic. He thought his new appointment meant Google was serious about exploring these questions. He knew there was enthusiasm for it among his fellow workers, and he believed in the good faith of his bosses.

He was assigned a desk, and—in effect—left to think. So he started to research the effects of many things. For example, he looked at the way Snapchat hooks teenagers. The app had an option called “Snapchat streaks,” where two friends—almost always teens—would check in with each other every day through the app. Every day they checked in, their streak got longer, so you would aim to build up a streak of two hundred, three hundred, four hundred days, all on a brightly colored display full of emojis. If you missed a single day, it would reset to zero. It was a perfect way to take the desire of teens for social connection and manipulate it to get them hooked. You came every day to extend your streak, and you stuck around to scroll, often for hours.
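The underlying mechanic is easy to sketch. Snapchat’s real code is not public, so the following TypeScript is purely a hypothetical illustration, with invented names, of the logic described above: the streak grows only if both friends come back every single day, and one missed day wipes it out.

```typescript
// Hypothetical sketch of streak-style logic, not Snapchat's implementation.
interface Streak {
  count: number;       // consecutive days both friends have checked in
  lastCheckIn: string; // ISO date (YYYY-MM-DD) of the last mutual check-in
}

function daysBetween(a: string, b: string): number {
  const ms = new Date(b).getTime() - new Date(a).getTime();
  return Math.round(ms / (1000 * 60 * 60 * 24));
}

// Called whenever both friends have exchanged a snap on a given day.
function updateStreak(streak: Streak, today: string): Streak {
  const gap = daysBetween(streak.lastCheckIn, today);
  if (gap === 0) return streak;                                  // already counted today
  if (gap === 1) return { count: streak.count + 1, lastCheckIn: today }; // streak grows
  return { count: 1, lastCheckIn: today };                       // missed a day: the old streak is gone
}
```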

But whenever he came up with a specific proposal for how Google’s own products could be less interrupting and presented it to people above him, he was told, in effect: “This is hard, it’s confusing, and it’s often at odds with our bottom line.” Tristan realized he was bumping up against a core contradiction. The more people stared at their phones, the more money these companies made. Period. The people in Silicon Valley did not want to design gadgets and websites that would dissolve people’s attention spans. They’re not the Joker, trying to sow chaos and make us dumb. They spend a lot of their own time meditating and doing yoga. They often ban their own kids from using the sites and gadgets they design, and send them instead to tech-free Montessori schools. But their business model can only succeed if they take steps to dominate the attention spans of the wider society. It’s not their goal, any more than ExxonMobil deliberately wants to melt the Arctic. But it’s an inescapable effect of their current business model.

When Tristan warned about these negative effects, most people inside the company sympathized and agreed. When he suggested alternatives, people changed the subject. To give you a sense of the money involved: the personal wealth of Larry Page, one of the founders of Google, is $102 billion; his colleague Sergey Brin is worth $99 billion; and their colleague Eric Schmidt is worth $20.7 billion. That’s separate from Google’s wealth as a company, which as I write stands at $1 trillion. These three men alone are worth roughly the same as the total combined wealth of every single person, building, and bank account in the oil-rich country of Kuwait, and Google is worth roughly the entire wealth of the whole of Mexico or Indonesia. Telling them to distract people less was like telling an oil company not to drill for oil—they didn’t want to hear it. “You don’t even really get to make that ethical decision” to improve people’s attention spans, Tristan realized, “because your business model and your incentives are making that decision for you.” Years later, testifying before the U.S. Senate, he explained: “I failed because companies don’t [currently] have the right incentive to change.”

Tristan was in the ethicist job for two years, and toward the end, as he told an audience later, “I felt completely hopeless. There were literally days when I went to work and I would read Wikipedia all day and check my email and I would have no idea how, once you see something as massive as the attention economy and its perverse incentives, a system this big could ever change. I truly felt hopeless. I felt depressed.” So, finally, he quit Google, and went out into a Silicon Valley where, as he put it to me, “everything is a race for attention.” In that lonely time in Tristan’s life, he was about to team up with another person who felt depressed and lost—and who felt guilty about what he personally had done to you, me, and everyone we know.


You probably haven’t heard of Aza Raskin, but he has directly intervened in your life. He will, in fact, probably affect how you spend your time today. Aza grew up in the most elite sliver of Silicon Valley, at the height of its confidence that it was making the world better. His dad was Jef Raskin, the man who invented the Apple Macintosh for Steve Jobs, and he built it around one core principle: that the user’s attention is sacred. The job of technology, Jef believed, was to lift people up and make it possible to achieve their higher goals. He taught his son: “What is technology for? Why do we even make technology? We make technology because it takes the parts of us that are most human and it extends them. That’s what a paintbrush is. That’s what a cello is. That’s what language is. These are technologies that extend some part of us. Technology is not about making us superhuman. It’s about making us extra-human.”

Aza became a precocious young coder, and he gave his first talk about user interfaces when he was ten years old. By the time he was in his early twenties, he was at the forefront of designing some of the first internet browsers, and he was the creative lead on Firefox. As part of this work, he designed something that distinctly changed how the web works. It’s called “infinite scroll.” Older readers will remember that it used to be that the internet was divided into pages, and when you got to the bottom of one page, you had to decide to click a button to get to the next page. It was an active choice. It gave you a moment to pause and ask: Do I want to carry on looking at this? Aza designed the code that means you don’t have to ask that question anymore. Imagine you open Facebook. It downloads a chunk of status updates for you to read through. You scroll down through it, flicking your finger—and when you get to the bottom, it will automatically load another chunk for you to flick through. When you get to the bottom of that, it will automatically load another chunk, and another, and another, forever. You can never exhaust it. It will scroll infinitely.
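The pattern Aza describes is now a standard few lines of front-end code. What follows is a minimal, hypothetical sketch in TypeScript, not Firefox or Facebook code, and the feed endpoint (/api/feed) is invented: when the reader nears the bottom of the page, the next chunk is fetched and appended silently, so the moment of choice that a “next page” button used to provide never arrives.

```typescript
// Hypothetical sketch of the infinite-scroll pattern described above.
async function fetchNextChunk(page: number): Promise<string[]> {
  // Invented endpoint standing in for whatever feed API a site might use.
  const res = await fetch(`/api/feed?page=${page}`);
  return res.json();
}

let page = 0;
let loading = false;

window.addEventListener("scroll", async () => {
  const nearBottom =
    window.innerHeight + window.scrollY >= document.body.scrollHeight - 500;
  if (!nearBottom || loading) return;

  loading = true;
  const items = await fetchNextChunk(page++);
  for (const item of items) {
    const div = document.createElement("div");
    div.textContent = item;
    document.body.appendChild(div); // the feed grows; it never ends
  }
  loading = false;
});
```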

Aza was proud of the design. “At the outset, it looks like a really good invention,” he told me. He believed he was making life easier for everyone. He had been taught that increased speed and efficiency of access were always advances. His invention quickly spread all over the internet. Today, all social media and lots of other sites use a version of infinite scroll. But then Aza watched as the people around him changed. They seemed to be unable to pull themselves away from their devices, flicking through and through and through, thanks in part to the code he had designed. He found himself infinitely scrolling through what he often realized afterward was crap, and he wondered if he was making good use of his life.

One day, when he was thirty-two, Aza sat down and did a calculation. At a conservative estimate, infinite scroll makes you spend 50 percent more time on sites like Twitter. (For many people, Aza believes, it’s vastly more.) Sticking with this low-ball percentage, Aza wanted to know what it meant, in practice, if billions of people were spending 50 percent more time on a string of social-media sites. When he was done, he stared at the sums. Every day, as a direct result of his invention, the combined total of 200,000 more human lifetimes—every moment from birth to death—is now spent scrolling through a screen. These hours would otherwise have been spent on some other activity.

When he described this to me, he still sounded a little stunned. That time is “just completely gone. It’s like their entire life—poof. That time, which could have been used for solving climate change, for spending time with their family, for strengthening social bonds. For whatever it is that makes their life well-lived. It just…” He trailed off. I pictured my young godson Adam and all his teenage friends, scrolling, scrolling, infinitely scrolling.

Aza told me he felt “sort of dirty.” He realized: “These things we do, they really can change the world. Then the question immediately follows: In what way did we change the world?” He had assumed that making tech easier to use meant the world would get better. But he began to think that “one of my biggest learnings as a designer or technologist is—making something easy to use doesn’t mean it’s good for humanity.” He thought about his father—who had since died—and his commitment to make tech that set people free to be better, and he wondered if he was living up to his dad’s vision. He began to ask if he and his generation in Silicon Valley were actually “mak[ing] technology that tears us, rips us, and breaks us.”

He carried on designing more things in the vein of infinite scroll, and getting more and more uncomfortable. “It was about the time that we were getting to be really successful at this that my stomach started to drop,” he told me. He felt that he was seeing people become more unempathetic, angry, and hostile as their social-media use went up. At the time, he was running an app he had designed named Post-Social, which was a social-media site designed to help people interact more in the real world, away from their devices. He was trying to raise money for the next phase of its development, and all any investor wanted to know was: How much of people’s attention do you capture and run through your app? How often? How many times a day? That’s not what Aza wanted to be—a person who thought solely about how to drain away people’s time. But “you could see this gravity, pulling this product back to everything that we were trying to fight against.”

The logic of the underlying system was being laid bare for Aza. Silicon Valley sells itself by articulating “a big, lofty goal—connecting everyone in the world, or whatever it is. But when you’re actually doing the day-to-day work, it’s about increasing user numbers.” What you are selling is your ability to grab and hold attention. When he tried to discuss this, he thwacked into raw denial. “Say you were baking bread,” he said to me, “and you had this incredible bread, and you used this secret substance—and all of a sudden, you’re making free bread for the world, and everyone’s eating it. Then one of your scientists comes and says: ‘By the way, we think it causes cancer, this secret substance.’ What do you do? You would almost certainly say: ‘That can’t be right. We need more research. Maybe it’s something [else] that the people out there are doing. Maybe there’s some other factor.’ ”

All throughout the industry, Aza kept meeting people who were going through similar crises. “There were a number of dark nights of the soul that I personally witnessed,” he says. He watched as Silicon Valley’s own inhabitants seemed to be hijacked by their own creations, and then tried to escape. When I met with several of these tech dissidents, it struck me how young they were—like they were almost children who had invented toys and watched their toys conquer the world. Everyone was scrambling to meditate in an attempt to resist the programs they had invented. He realized “one of the ironies is there are these incredibly popular workshops at Facebook and Google about mindfulness—about creating the mental space to make decisions nonreactively—and they are also the biggest perpetrators of non-mindfulness in the world.”


When Tristan and Aza started to speak out, they were ridiculed as wildly over-the-top Cassandras. But then, one by one, all over Silicon Valley, people who had built the world we now live in were beginning to declare in public that they had similar feelings. For example, Sean Parker, one of the earliest investors in Facebook, told a public audience that the creators of the site had asked themselves from the start: “How do we consume as much of your time and conscious attention as possible?” The techniques they used were “exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology…. The inventors, creators—it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom on Instagram, it’s all of these people—understood this consciously. And we did it anyway.” He added: “God only knows what it’s doing to our children’s brains.” Chamath Palihapitiya, who had been Facebook’s vice president of growth, explained in a speech that the effects are so negative that his own kids “aren’t allowed to use that shit.” Tony Fadell, who co-invented the iPhone, said: “I wake up in cold sweats every so often thinking, what did we bring to the world?” He worried that he had helped create “a nuclear bomb” that can “blow up people’s brains and reprogram them.”

Many Silicon Valley insiders predicted that it would only get worse. One of its most famous investors, Paul Graham, wrote: “Unless the forms of technological progress that produced these things are subject to different laws than technological progress in general, the world will get more addictive in the next forty years than it did in the last forty.”


One day, James Williams—the former Google strategist I met—addressed an audience of hundreds of leading tech designers and asked them a simple question: “How many of you want to live in the world you are designing?” There was a silence in the room. People looked around them. Nobody put up their hand.