Tristan said to me that if you want to understand the deeper problems in the way our tech currently works—and why it is undermining our attention—a good place to start is with what seems like a simple question.
Imagine you are visiting New York and you want to know which of your friends are around in the city so you can hang out with them. You turn to Facebook. The site will alert you about lots of things—a friend’s birthday, a photo you’ve been tagged in, a terrorist attack—but it won’t alert you to the physical proximity of somebody you might want to see in the real world. There’s no button that says “I want to meet up—who’s nearby and free?” This isn’t technologically tricky. It would be really easy for Facebook to be designed so that when you opened it, it told you which of your friends were close by and which of them would like to meet for a drink or dinner that week. The coding to do that is simple; Tristan and Aza and their friends could probably write it in a day. And it would be hugely popular. Ask any Facebook user: Would you like Facebook to physically connect you to your friends more, instead of keeping you endlessly scrolling?
So—it’s an easy tweak, and users would love it. Why doesn’t it happen? Why won’t the market provide it? To understand why, Tristan and his colleagues explained to me, you need to step back and understand more about the business model of Facebook and the other social-media companies. If you follow the trail from this simple question, you will see the root of many of the problems we are facing.
Facebook makes more money for every extra second you are staring through a screen at their site, and they lose money every time you put the screen down. They make this money in two ways. Until I started to spend time in Silicon Valley, I had only naively thought about the first and the most obvious. Clearly—as I wrote in the last chapter—the more time you look at their sites, the more advertisements you see. Advertisers pay Facebook to get to you and your eyeballs. But there’s a second, more subtle reason why Facebook wants you to keep scrolling and desperately doesn’t want you to log off. When I first heard about this reason, I scoffed a little—it sounded far-fetched. But then I kept talking with people in San Francisco and Palo Alto, and every time I expressed skepticism about it, they looked at me like I was a maiden aunt in the 1850s who had just heard the details of sex for the first time. How, they asked, did you think it worked?
Every time you send a message or status update on Facebook, or Snapchat, or Twitter, and every time you search for something on Google, everything you say is being scanned and sorted and stored. These companies are building up a profile of you, to sell to advertisers who want to target you. For example, starting in 2014, if you used Gmail, Google’s automated systems would scan through all your private correspondence to generate an “advertising profile” exactly for you. If (say) you email your mother telling her you need to buy diapers, Gmail knows you have a baby, and it knows to target ads for baby products straight to you. If you use the word “arthritis,” it’ll try to sell you arthritis treatments. The process that had been predicted in Tristan’s final class back at Stanford was beginning.
Aza explained it to me by saying that I should imagine that “inside of Facebook’s servers, inside of Google’s servers, there is a little voodoo doll, [and it is] a model of you. It starts by not looking much like you. It’s sort of a generic model of a human. But then they’re collecting your click trails [i.e., everything you click on], and your toenail clippings, and your hair droppings [i.e., everything you search for, every little detail of your life online]. They’re reassembling all that metadata you don’t really think is meaningful, so that doll looks more and more like you. [Then] when you show up on [for example] YouTube, they’re waking up that doll, and they’re testing out hundreds of thousands of videos against this doll, seeing what makes its arm twitch and move, so they know it’s effective, and then they serve that to you.” It seemed like such a ghoulish image that I paused. He went on: “By the way—they have a doll like that for one in four human beings on earth.”
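To make this image a little more concrete, here is a deliberately tiny sketch, in Python, of the kind of profiling and engagement prediction described in the last two paragraphs. Everything in it is invented for illustration: the class names, the topics, the numbers. The real systems are proprietary, run at enormous scale, and use far richer signals than a handful of topic counts.

```python
# A toy version of the "voodoo doll" Aza describes: a model of one user, built
# from their click trail, used to guess which candidate item will keep them
# watching. All names, topics, and numbers are invented for illustration.

from collections import Counter

class UserModel:
    """A crude stand-in for the profile a platform builds about one person."""

    def __init__(self):
        # The "toenail clippings and hair droppings": every click and search,
        # reduced here to simple counts of topics the user engaged with.
        self.topic_counts = Counter()

    def record_click(self, topics):
        self.topic_counts.update(topics)

    def predicted_watch_time(self, item_topics):
        # Crude proxy: the more an item overlaps with what held this user
        # before, the longer the model predicts they will stay.
        return sum(self.topic_counts[t] for t in item_topics)

def pick_next(user, candidates):
    # "Testing out" candidate items against the doll and serving whichever one
    # it predicts will keep the user looking the longest.
    return max(candidates, key=lambda item: user.predicted_watch_time(item["topics"]))

me = UserModel()
me.record_click(["exercise bikes", "politics"])
me.record_click(["politics", "outrage"])

candidates = [
    {"title": "Calm gardening tips", "topics": ["gardening"]},
    {"title": "Politician SLAMS rival", "topics": ["politics", "outrage"]},
]
print(pick_next(me, candidates)["title"])  # the higher-overlap, angrier item wins
```

Even at this toy scale, the logic is visible: the more the model knows about what held you before, the better it can guess what will hold you next.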
At the moment these voodoo dolls are sometimes crude and sometimes startlingly specific. We’ve all had this kind of experience after searching online for something. I recently tried to buy an exercise bike, and still, a month later, I am endlessly being served advertisements for exercise bikes by Google and Facebook, until I want to scream, “I bought one already!” But the systems are getting more sophisticated every year. Aza told me: “It’s getting to be so good that whenever I give a presentation, I’ll ask the audience how many think Facebook is listening to their conversations, because there’s some ad that’s been served that’s just too accurate. It’s about a specific thing they never mentioned before [but they happen to have talked about offline] to a friend the day before. Now, it’s generally one-half to two-thirds of the audience that raises their hands. The truth is creepier. It’s not that they are listening and then they can do targeted ad serving. It’s that their model of you is so accurate that it’s making predictions about you that you think are magic.”
It was explained to me that whenever something is provided by a tech company for free, it’s always to improve the voodoo doll. Why is Google Maps free? So the voodoo doll can include the details of where you go every day. Why are Amazon Echo and Google Nest Hubs sold for as little as $30, far less than they cost to make? So they can gather more info; so the voodoo doll can consist not just of what you search for on a screen but of what you say in your home.
This is the business model that built and sustains the sites on which we spend so much of our lives. The technical term for this system—coined by the brilliant Harvard professor Shoshana Zuboff—is “surveillance capitalism.” Her work has made it possible for us to understand a lot of what is happening now. Of course, there have been increasingly sophisticated forms of advertising and marketing for over a hundred years—but this is a quantum leap forward. A billboard didn’t know what you googled at three in the morning last Thursday. A magazine ad didn’t have a detailed profile of everything you’ve ever said to your friends on Facebook and email. Trying to give me a sense of this system, Aza said to me: “Imagine if I could predict all your actions in chess before you made them. It would be trivial for me to dominate you. That’s what is happening on a human scale now.”
Once you understand all this, you can see why there is no button that suggests you meet up with your friends and family away from the screen. Instead of getting us to maximize screen time, that would get us to maximize face-to-face time. Tristan said: “If people used Facebook just to quickly get on, so they could find the amazing thing to do with their friends that night, and get off, how would that [affect] Facebook’s stock price? The average amount of time people spend on Facebook today is something like fifty minutes a day…. [But] if Facebook acted that way, people would spend barely a few minutes on there per day, in a much more fulfilling way.” Facebook’s share price would collapse; it would be, for them, a catastrophe. This is why these sites are designed to be maximally distracting. They need to distract us, to make more money.
Tristan has seen, on the inside, how these business incentives work in practice. Imagine this, he said to me: An engineer proposes a tweak that improves people’s attention, or gets them to spend more time with their friends. “Then what happens is they will wake up two weeks to four weeks later, and there’ll be some review on their dashboard looking at the metrics. [Their manager will] be saying, ‘Hey, why did time spent [on the site] go down about three weeks ago? Oh, it’ll be [because] we added these features. Let’s just roll back some of those features, to figure out how we get that number back up.’ ” This isn’t some conspiracy theory, any more than it’s a conspiracy theory to explain that KFC wants you to eat fried chicken. It’s simply an obvious result of the incentive structure that has been put in place and that we allow to continue. “Their business model,” he says, “is screen time, not life time.”
It was at this point in learning Tristan’s story—from him, his friends, his colleagues, and his critics—that I realized something so simple that I am almost embarrassed to say it. For years, I had blamed my deteriorating powers of attention simply on my own failings or on the existence of the smartphone itself as a technology. Most of the people I know do the same. We tell ourselves: The phone arrived, and it ravaged me. I believed any smartphone would have done the same. But what Tristan was showing is that the truth is more complicated. The arrival of the smartphone would always have increased to some degree the number of distractions in life, to be sure, but a great deal of the damage to our attention spans is being caused by something more subtle. It is not the smartphone in and of itself; it is the way the apps on the smartphone and the sites on our laptops are designed.
Tristan taught me that the phones we have, and the programs that run on them, were deliberately designed by the smartest people in the world to maximally grab and maximally hold our attention. He wants us to understand that this design is not inevitable. I had to really think this over, because, of all the things I learned from him, this seemed the most important.
The way our tech works now to corrode our attention was and remains a choice—by Silicon Valley, and by the wider society that lets them do it. Humans could have made a different choice then, and they can make a different choice now. You could have all this technology, Tristan told me, but not design it to be maximally distracting. In fact, you could design it with the opposite goal: to maximally respect people’s need for sustained attention, and to interrupt them as little as possible. You could design the technology not so that it pulls people away from their deeper and more meaningful goals, but so that it helps them to achieve them.
This was shocking to me. It’s not just the phone; it’s the way the phone is currently designed. It’s not just the internet; it’s the way the internet is currently designed—and the incentives for the people designing it. You could keep your phone and your laptop, and you could keep your social-media accounts—and have much better attention, if they were designed around a different set of incentives.
Once you see it in this different way, Tristan came to believe, it opens up a very different path forward, and the beginnings of a way out of our crisis. If the existence of the phone and the internet is the sole driver of this problem, we’re trapped and in deep trouble—because as a society, we’re not going to discard our tech. But if it’s the current design of the phones and the internet and the sites we run on them that is driving a lot of the problem, there’s a very different way they could work that would put us all in a very different position.
After you’ve adjusted your perspective in this way, you can see that framing this as a debate over whether you are pro-tech or anti-tech is bogus, and it lets the people who stole your attention off the hook. The real debate is: What tech, designed for what purposes, in whose interests?
But when Tristan and Aza said that these sites are designed to be as distracting as possible, I still didn’t really understand how. It seemed like a big claim. To grasp it, I had to first learn something else embarrassingly basic. When you open your Facebook feed, you see a whir of things for you to look at—your friends, their photos, some news stories. When I first joined Facebook back in 2008, I naively thought that these things appeared simply in the order in which my friends had posted them. I’m seeing my friend Rob’s photo because he just put it up; then my auntie’s status update comes next because she posted it before him. Or maybe, I thought, they were selected randomly. In fact, I learned over the years—as we all became more informed about these questions—that what you see is selected for you according to an algorithm.
When Facebook (and all the others) decide what you see in your news feed, there are many thousands of things they could show you. So they have written a piece of code to automatically decide what you will see. There are all sorts of algorithms they could use—ways they could decide what you should see, and the order in which you should see them. They could have an algorithm designed to show you things that make you feel happy. They could have an algorithm designed to show you things that make you feel sad. They could have an algorithm to show you things that your friends are talking about most. The list of potential algorithms is long.
The algorithm they actually use varies all the time, but it has one key driving principle that is consistent. It shows you things that will keep you looking at your screen. That’s it. Remember: the more time you look, the more money they make. So the algorithm is always weighted toward figuring out what will keep you looking, and pumping more and more of that onto your screen to keep you from putting down your phone. It is designed to distract. But, Tristan was learning, that leads—quite unexpectedly, and without anyone intending it—to some other changes, which have turned out to be incredibly consequential.
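To make the difference concrete, here is a minimal sketch, in Python, of the choice being described here: the same three posts, ranked three different ways. The posts and all the scores are invented; a real ranking system blends a huge number of signals. The point is only what happens when the sort key is predicted time-on-screen.

```python
# A minimal sketch of the choice described above: the same three posts, ranked
# three different ways. The posts and every score are invented; real feed
# ranking blends thousands of signals. Only the last sort serves the business
# model of maximizing time on screen.

posts = [
    {"text": "Friend's wedding photos", "hours_ago": 1, "happiness": 0.9, "predicted_seconds_on_screen": 8},
    {"text": "Outraged political rant", "hours_ago": 9, "happiness": 0.1, "predicted_seconds_on_screen": 45},
    {"text": "Auntie's status update", "hours_ago": 3, "happiness": 0.6, "predicted_seconds_on_screen": 5},
]

# What I naively assumed: newest first.
by_recency = sorted(posts, key=lambda p: p["hours_ago"])

# One algorithm they could use: whatever makes people feel happiest.
by_happiness = sorted(posts, key=lambda p: p["happiness"], reverse=True)

# The principle actually in use: whatever is predicted to keep you looking.
by_engagement = sorted(posts, key=lambda p: p["predicted_seconds_on_screen"], reverse=True)

for label, feed in [("recency", by_recency), ("happiness", by_happiness), ("engagement", by_engagement)]:
    print(label, "->", [p["text"] for p in feed])
```

Swap the sort key and you swap the feed; nothing else about the technology has to change.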
Imagine two Facebook feeds. One is full of updates, news, and videos that make you feel calm and happy. The other is full of updates, news, and videos that make you feel angry and outraged. Which one does the algorithm select? The algorithm is neutral about the question of whether it wants you to be calm or angry. That’s not its concern. It only cares about one thing: Will you keep scrolling? Unfortunately, there’s a quirk of human behavior. On average, we will stare at something negative and outrageous for a lot longer than we will stare at something positive and calm. You will stare at a car crash longer than you will stare at a person handing out flowers by the side of the road, even though the flowers will give you a lot more pleasure than the mangled bodies in a crash. Scientists have been proving this effect in different contexts for a long time—if they showed you a photo of a crowd, and some of the people in it were happy, and some angry, you would instinctively pick out the angry faces first. Even ten-week-old babies respond differently to angry faces. This has been known about in psychology for years and is based on a broad body of evidence. It’s called “negativity bias.”
There is growing evidence that this natural human quirk has a huge effect online. On YouTube, what are the words that you should put into the title of your video, if you want to get picked up by the algorithm? They are—according to the best site monitoring YouTube trends—words such as “hates,” “obliterates,” “slams,” “destroys.” A major study at New York University found that for every word of moral outrage you add to a tweet, your retweet rate will go up by 20 percent on average, and the words that will increase your retweet rate most are “attack,” “bad,” and “blame.” A study by the Pew Research Center found that if you fill your Facebook posts with “indignant disagreement,” you’ll double your likes and shares. So an algorithm that prioritizes keeping you glued to the screen will—unintentionally but inevitably—prioritize outraging and angering you. If it’s more enraging, it’s more engaging.
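As a rough back-of-the-envelope illustration of what a 20 percent lift per word compounds to, here is a small calculation. The baseline figure is invented; only the multiplier is taken from the study described above, and real effects vary from tweet to tweet.

```python
# A back-of-the-envelope look at what a 20 percent lift per moral-outrage word
# compounds to. The baseline is invented; only the multiplier comes from the
# study described above.

baseline_retweets = 100       # hypothetical retweets for a neutral tweet
lift_per_outrage_word = 1.20  # +20 percent per word of moral outrage

for n_words in range(4):
    expected = baseline_retweets * lift_per_outrage_word ** n_words
    print(f"{n_words} outrage word(s): ~{expected:.0f} expected retweets")
# 0 -> 100, 1 -> 120, 2 -> 144, 3 -> 173
```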
If enough people are spending enough of their time being angered, that starts to change the culture. As Tristan told me, it “turns hate into a habit.” You can see this seeping into the bones of our society. When I was a teenager, there was a horrific crime in Britain, where two ten-year-old children murdered a toddler named Jamie Bulger. The Conservative prime minister at the time, John Major, responded by publicly saying that he believed we need “to condemn a little more, and understand a little less.” I remembered thinking then, at the age of fourteen, that this was surely wrong—that it’s always better to understand why people do things, even (perhaps especially) the most heinous acts. But today, this attitude—condemn more, understand less—has become the default response of almost everyone, from the right to the left, as we spend our lives dancing to the tune of algorithms that reward fury and penalize mercy.
In 2015 a researcher named Motahhare Eslami, as part of a team at the University of Illinois, took a group of ordinary Facebook users and explained to them how the Facebook algorithm works. She talked them through how it selects what they see. She discovered that 62 percent of them didn’t know their feeds were filtered at all, and they were astonished to learn about the algorithm’s existence. One person in the study compared it to the moment in the film The Matrix when the central character, Neo, discovers he is living in a computer simulation.
I called several of my relatives and asked them if they knew what an algorithm was. None of them—including the teenagers—did. I asked my neighbors. They looked at me blankly. It’s easy to assume most people know about this, but I don’t think it’s true.
When I pieced together what I’d learned, I could see that—when I broke it down—the people I interviewed had presented evidence for six distinct ways in which this machinery, as it currently operates, is harming our attention. (I will come to the scientists who dispute these arguments in chapter eight; as you read this, remember that some of it is controversial.)
First, these sites and apps are designed to train our minds to crave frequent rewards. They make us hunger for hearts and likes. When I was deprived of them in Provincetown, I felt bereft, and had to go through a painful withdrawal. Once you have been conditioned to need these reinforcements, Tristan told one interviewer, “it’s very hard to be with reality, the physical world, the built world—because it doesn’t offer as frequent and as immediate rewards as this thing does.” This craving will drive you to pick up your phone more than you would if you had never been plugged into this system. You’ll break away from your work and your relationships to seek a sweet, sweet hit of retweets.
Second, these sites push you to switch tasks more frequently than you normally would—to pick up your phone, or click over to Facebook on your laptop. When you do this, all the costs to your attention caused by switching—as I discussed in chapter one—kick in. The evidence there shows this is as bad for the quality of your thinking as getting drunk or stoned.
Third, these sites learn—as Tristan put it—how to “frack” you. These sites get to know what makes you tick, in very specific ways—they learn what you like to look at, what excites you, what angers you, what enrages you. They learn your personal triggers—what, specifically, will distract you. This means that they can drill into your attention. Whenever you are tempted to put your phone down, the site keeps drip-feeding you the kind of material that it has learned, from your past behavior, keeps you scrolling. Older technologies—like the printed page, or the television—can’t target you in this way. Social media knows exactly where to drill. It learns your most distractible spots and targets them.
Fourth, because of the way the algorithms work, these sites make you angry a lot of the time. Scientists have been proving in experiments for years that anger itself screws with your ability to pay attention. They have discovered that if I make you angry, you will pay less attention to the quality of arguments around you, and you will show “decreased depth of processing”—that is, you will think in a shallower, less attentive way. We’ve all had that feeling—you start prickling with rage, and your ability to properly listen goes out the window. The business models of these sites are jacking up our anger every day. Remember the words their algorithms promote—attack, bad, blame.
Fifth, in addition to making you angry, these sites make you feel that you are surrounded by other people’s anger. This can trigger a different psychological response in you. As Dr. Nadine Burke Harris, the surgeon general of California, whom you’ll meet later in this book, explained to me: Imagine that one day you are attacked by a bear. You will stop paying attention to your normal concerns—what you’re going to eat tonight, or how you will pay the rent. You become vigilant. Your attention flips to scanning for unexpected dangers all around you. For days and weeks afterward, you will find it harder to focus on more everyday concerns. This isn’t limited to bears. These sites make you feel that you are in an environment full of anger and hostility, so you become more vigilant—a situation where more of your attention shifts to searching for dangers, and less and less is available for slower forms of focus like reading a book or playing with your kids.
Sixth, these sites set society on fire. This is the most complex form of harm to our attention, with several stages, and I think probably the most harmful. Let’s go through it slowly.
We don’t just pay attention as individuals; we pay attention together, as a society. Here’s an example. In the 1970s, scientists discovered that all over the world, people were using hairsprays that contained a group of chemicals named CFCs. These chemicals were then entering the atmosphere and having an unintended but disastrous effect—they were damaging the ozone layer, a crucial part of the atmosphere that protects us from the sun’s rays. Those scientists warned that, over time, this could pose a serious threat to life on earth. Ordinary people absorbed this information and saw that it was true. Then activist groups—made up of ordinary citizens—formed, and demanded a ban. These activists persuaded their fellow citizens that this was urgent and made it into a big political issue. This put pressure on politicians, and that pressure was sustained until those politicians banned CFCs entirely. At every stage, averting this risk to our species required us to be able to pay attention as a society—to absorb the science; to distinguish it from falsehood; to band together to demand action; and to pressure our politicians until they acted.
But there is evidence that these sites are now severely harming our ability to come together as a society to identify our problems and to find solutions in ways like this. They are damaging not just our attention as individuals, but our collective attention. At the moment false claims spread on social media far faster than the truth, because of the algorithms that spread outraging material faster and farther. A study by the Massachusetts Institute of Technology found that fake news travels six times faster on Twitter than real news, and during the 2016 U.S. presidential election, flat-out falsehoods on Facebook outperformed all the top stories at nineteen mainstream news sites put together. As a result, we are being pushed all the time to pay attention to nonsense—things that just aren’t so. If the ozone layer was threatened today, the scientists warning about it would find themselves being shouted down by bigoted viral stories claiming the threat was all invented by the billionaire George Soros, or that there’s no such thing as the ozone layer anyway, or that the holes were really being made by Jewish space lasers.
If we are lost in lies, and constantly riled up to be angry with our fellow citizens, this sets off a chain reaction. It means we can’t understand what is really going on. In those circumstances, we can’t solve our collective challenges. This means our wider problems will get worse. As a result, the society won’t just feel more dangerous—it will actually be more dangerous. Things will start to break down. And as real danger rises, we will become more and more vigilant.
One day, Tristan was shown how this dynamic works when he was approached by a man named Guillaume Chaslot, who had been an engineer designing and administering the algorithm that picks out the videos that are recommended to you on YouTube when you watch a video there. Guillaume wanted to tell him what was happening behind closed doors. Just like Facebook, YouTube makes more money the longer you watch. That’s why they designed it so that when you stop watching one video, it automatically recommends and plays another one for you. How are those videos selected? YouTube also has an algorithm—and it too has figured out that you’ll keep watching longer if you see things that are outrageous, shocking, and extreme. Guillaume had seen how it works, with all the data YouTube keeps secret—and he saw what it means in practice.
If you watched a factual video about the Holocaust, it would recommend several more videos, each one getting more extreme, and within a chain of five or so videos, it would usually end up automatically playing a video denying the Holocaust happened. If you watched a normal video about 9/11, it would often recommend a “9/11 truther” video in a similar way. This isn’t because the algorithm (or anyone at YouTube) is a Holocaust denier or 9/11 truther. It was simply selecting whatever would most shock and compel people to watch longer. Tristan started to look into this, and concluded: “No matter where you start, you end up more crazy.”
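Here is a toy model, in Python, of the drift being described: at each step, autoplay picks whichever related video is predicted to hold attention longest. The catalog, the titles, and the scores are all invented; the point is only that a greedy, engagement-maximizing chain slides, hop by hop, toward the most extreme item it can reach.

```python
# A toy model of the autoplay drift described above: from each video, pick
# whichever related video is predicted to hold attention longest. The catalog,
# titles, and scores are all invented.

catalog = {
    "measured documentary": [("balanced follow-up", 10), ("provocative 'hidden truth' teaser", 25)],
    "balanced follow-up": [("archive footage", 8), ("provocative 'hidden truth' teaser", 25)],
    "provocative 'hidden truth' teaser": [("conspiracy channel upload", 40)],
    "conspiracy channel upload": [("outright denial video", 60)],
    "outright denial video": [],
}

video = "measured documentary"
chain = [video]
while catalog[video]:
    # Autoplay step: greedily choose the related video with the highest
    # predicted watch time, regardless of what it claims.
    video = max(catalog[video], key=lambda pair: pair[1])[0]
    chain.append(video)

print(" -> ".join(chain))
# In this invented catalog, the greedy chain reaches the most extreme video in
# a handful of hops, which mirrors the pattern Guillaume saw at real scale.
```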
It turned out, as Guillaume leaked to Tristan, that YouTube had recommended videos by Alex Jones and his website Infowars 15 billion times. Jones is a vicious conspiracy theorist who has claimed that the 2012 Sandy Hook massacre was faked, and that the grieving parents are liars whose children never even existed. As a result, some of those parents have been inundated with death threats and have had to flee their homes. This is just one of many insane claims he has made. Tristan has said: “Let’s compare that—what is the aggregate traffic of the New York Times, the Washington Post, the Guardian? All that together is not close to fifteen billion views.”
The average young person is soaking up filth like this day after day. Do those feelings of anger go away when they put down their phone? The evidence suggests that for lots of people, they don’t. A major study asked white nationalists how they became radicalized, and a majority named the internet—with YouTube as the site that most influenced them. A separate study of far-right people on Twitter found that YouTube was by far the website they turned to the most. “Just watching YouTube radicalizes people,” Tristan explained. Companies like YouTube want us to think “we have a few bad apples,” he explained to the journalist Decca Aitkenhead, but they don’t want us to ask: “Do we have a system that is systematically, as you turn the crank every day, pumping out more radicalization? We’re growing bad apples. We’re a bad-apple factory. We’re a bad-apple farm.”
I saw a vision of where this could take us all when, in 2018, I went to Brazil in the run-up to its presidential election, in part to see my friend Raull Santiago, a remarkable young man I got to know when I was writing the Brazilian edition of my book about the war on drugs, Chasing the Scream.
Raull grew up in a place named Complexo do Alemão, which is one of the biggest and poorest favelas in Rio. It’s a huge, jagged ziggurat of concrete and tin and wire that stretches far up on the hills, way above the city, until it seems to be almost in the clouds. At least 200,000 people live there, in narrow concrete alleyways that are crisscrossed with makeshift wires providing electricity. The people here built this whole world brick by brick, with little support from the state. The alleyways of Alemão are surreally beautiful: they look like Naples after some undefined apocalypse. As a child, Raull would fly kites high above the favela with his best friend, Fabio, where they could see out all across Rio, toward the ocean and the statue of Christ the Redeemer.
Often the authorities would send tanks rolling into the favela. The attitude of the Brazilian state toward the poor was to keep them suppressed with periodic threats of extreme violence. Raull and Fabio would regularly see bodies in the alleyways. Everyone in Alemão knew that the cops could shoot poor kids and claim they were drug dealers, and plant drugs or guns on them. In practice, the police had a license to murder the poor.
Fabio always seemed like the kid most likely to get away from all of this. He was great at math, and determined to earn money for his mother and disabled sister. He was always figuring out deals—he persuaded the local bars to let him buy their bottles so he could sell them in bulk, for example. But then, one day, Raull was told something terrible: Fabio had—like so many kids before him—been shot dead by the police. He was fifteen years old.
Raull decided he couldn’t just watch his friends being killed one by one—so, as the years passed, he decided to do something bold. He set up a Facebook page named Coletivo Papo Reto, which gathered cellphone footage from across Brazil of the police killing innocent people and planting drugs or guns on them. It became huge, their videos regularly going viral. Even some people who had defended the police began to see their real behavior and oppose it. It was an inspiring story about how the internet made it possible for people who have been treated like third-class citizens to find a voice, and to mobilize and fight back.
But at the same time as the web was having this positive effect, the social-media algorithms were having the opposite effect—they were supercharging anti-democratic forces in Brazil. A former military officer named Jair Bolsonaro had been a marginal figure for years. He was way outside the mainstream because he kept saying vile things and attacking large parts of the population in extreme ways. He praised people who had carried out torture against innocent people when Brazil was a dictatorship. He told his female colleagues in congress that they were so ugly he wouldn’t bother raping them, and that they weren’t “worthy” of it. He said he would rather learn his son was dead than learn his son was gay. Then YouTube and Facebook became two of the main ways people in Brazil got their news. Their algorithms prioritized angry, outrageous content—and Bolsonaro’s reach dramatically surged. He became a social-media star. He ran for president openly attacking people like the residents of Alemão, saying the country’s poorer, blacker citizens “are not even good for breeding,” and should “go back to the zoo.” He promised to give the police even more power to launch intensified military attacks on the favelas—a license for wholesale slaughter.
Here was a society with huge problems that urgently needed to be solved—but social-media algorithms were promoting far-right-wingers and wild disinformation. In the run-up to the election, in favelas like Alemão, many people were deeply worried about a story that had been circulating online. Supporters of Bolsonaro had created a video warning that his main rival, Fernando Haddad, wanted to turn all the children of Brazil into homosexuals, and that he had developed a cunning technique to do it. The video showed a baby sucking a bottle, only there was something peculiar about it—the teat of the bottle had been painted to look like a penis. This, the story that circulated said, is what Haddad will distribute to every kindergarten in Brazil. This became one of the most-shared news stories in the entire election. People in the favelas explained indignantly that they couldn’t possibly vote for somebody who wanted to get babies to suck these penis-teats, and so they would have to vote for Bolsonaro instead. On these algorithm-pumped absurdities, the fate of the whole country turned.
When Bolsonaro unexpectedly won the presidency, his supporters chanted “Facebook! Facebook! Facebook!” They knew what the algorithms had done for them. There were, of course, many other factors at work in Brazilian society—this is only one—but it is the one Bolsonaro’s gleeful followers picked out first.
Not long afterward, Raull was in his home in Alemão when he heard a noise that sounded like an explosion. He ran outside and saw that a helicopter was hovering above the favela and firing down at the people below—precisely the kind of violence Bolsonaro had pledged to carry out. Raull screamed for his kids to hide, terrified. When I spoke to Raull on Skype later, he was more shaken than I had seen him before. As I write, this violence is being ramped up more and more.
When I thought about Raull, I could see the deeper way the rage-driven algorithms of social media and YouTube damage attention and focus. It’s a cascading effect. These sites harm people’s ability to pay attention as individuals. Then they pump the population’s heads full of grotesque falsehoods, to the point where they can’t distinguish real threats to their existence (an authoritarian leader pledging to shoot them) from nonexistent threats (their children being made gay by penises painted on baby bottles). Over time, if you expose any country to all this for long enough, it will become a country so lost in rage and unreality that it can’t make sense of its problems and it can’t build solutions. This means that the streets and the skies actually become more dangerous—so you become hypervigilant, and this wrecks your attention even more.
This could be the future for all of us if we continue with these trends. Indeed, what happens in Brazil alone directly affects your life and mine. Bolsonaro has dramatically stepped up the destruction of the Amazon rain forest, the lungs of the planet. If this continues for much longer, it will tip us into an even worse climate disaster.
When I was discussing all this with Tristan one day back in San Francisco, he ran his fingers through his hair and said to me that these algorithms are “debasing the soil of society…. You need…a social fabric, and if you debase it, you don’t know what you are going to wake up to.”
This machinery is systematically diverting us—at an individual and a social level—from where we want to go. James Williams, the former Google strategist, said to me we should imagine “if we had a GPS and it worked fine the first time. But the next time, it took you a few streets away from where you wanted to go. And then later, it took you to a different town.” All because the advertisers who funded GPS had paid for this to happen. “You would never keep using that.” But social media works exactly this way. There’s a “destination we want to get to, and most of the time, it doesn’t actually get us there—it takes us off track. If it was actually navigating us not through informational space but through physical space, we would never keep using it. It would be, by definition, defective.”
Tristan and Aza started to believe that all these effects, when you add them together, are producing a kind of “human downgrading.” Aza said: “I think we’re in the process of reverse-engineering ourselves. [We discovered a way to] open up the human skull, find the strings that control us, and start pulling on our own marionette strings. Once you do that, an accidental jerk in one direction causes your arm to jerk further, which pulls your marionette string farther…. That’s the era that we’re headed into now.” Tristan believes that what we are seeing is “the collective downgrading of humans and the upgrading of machines.” We are becoming less rational, less intelligent, less focused.
Aza told me: “Imagine if you have worked your entire career toward a technology that you feel is good. It’s making democracy stronger. It’s changing the way you live. Your friends value you because of these things you’ve made. All of a sudden you’re like—that thing I’ve been working on my entire life is not just meaningless. It’s tearing apart the things you love the most.”
He told me that literature is full of stories where humans create something in a burst of optimism and then lose control of their creation. Dr. Frankenstein creates a monster only for it to escape from him and commit murder. Aza began to think about these stories when he talked with his friends who were engineers working for some of the most famous websites in the world. He would ask them basic questions, like why their recommendation engines recommend one thing over another, and, he said to me, “they’re like: ‘We’re not sure why it’s recommending those things.’ ” They’re not lying—they have set up a technology that is doing things they don’t fully comprehend. He always says to them: “Isn’t that exactly the moment, in the allegories, where you turn the thing off—[when] it’s starting to do things you can’t predict?”
When Tristan testified about this before the U.S. Senate, he asked: “How can we solve the world’s most urgent problems if we’ve downgraded our attention spans, downgraded our capacity for complexity and nuance, downgraded our shared truth, downgraded our beliefs into conspiracy-theory thinking, where we can’t construct shared agendas to solve our problems? This is destroying our sense-making, at a time when we need it the most. And the reason why I’m here is because every day it’s incentivized to get worse.” He said he was especially worried about this, he told me later, because we are now, as a species, facing our biggest challenge ever—the fact that we are destroying the ecosystem we depend on for life by triggering the climate crisis. If we can’t focus, what possible hope do we have to solve global warming?
So Tristan and Aza started to ask with increasing urgency: How, in practice, do we change the machinery that is stealing our attention?