Chapter 8

 

Fallible: The Network Is Often Wrong

In The Gulag Archipelago (1973), Aleksandr Solzhenitsyn chronicles the history of the Soviet labor camps and of the information network that created and sustained them. He was writing partly from bitter personal experience. When Solzhenitsyn served as a captain in the Red Army during World War II, he maintained a private correspondence with a school friend in which he occasionally criticized Stalin. To be on the safe side, he did not mention the dictator by name and spoke only about “the man with the mustache.” It availed him little. His letters were intercepted and read by the secret police, and in February 1945, while serving on the front line in Germany, he was arrested. He spent the next eight years in labor camps.[1] Many of Solzhenitsyn’s hard-won insights and stories are still relevant to understanding the development of information networks in the twenty-first century.

One story recounts events at a district party conference in Moscow Province in the late 1930s, at the height of the Stalinist Great Terror. A call was made to pay tribute to Stalin, and the audience—who of course knew that they were being carefully watched—burst into applause. After five minutes of applause, “palms were getting sore and raised arms were already aching. And the older people were panting from exhaustion…. However, who would dare be the first to stop?” Solzhenitsyn explains that “NKVD men were standing in the hall applauding and watching to see who quit first!” It went on and on, for six minutes, then eight, then ten. “They couldn’t stop now till they collapsed with heart attacks!…With make-believe enthusiasm on their faces, looking at each other with faint hope, the district leaders were just going to go on and on applauding till they fell where they stood.”

Finally, after eleven minutes, the director of a paper factory took his life in his hands, stopped clapping, and sat down. Everyone else immediately stopped clapping and also sat down. That same night, the secret police arrested him and sent him to the gulag for ten years. “His interrogator reminded him: Don’t ever be the first to stop applauding!”[2]

This story reveals a crucial and disturbing fact about information networks, and in particular about surveillance systems. As discussed in previous chapters, contrary to the naive view, information is often used to create order rather than discover truth. On the face of it, Stalin’s agents in the Moscow conference used the “clapping test” as a way to uncover the truth about the audience. It was a loyalty test, which assumed that the longer you clapped, the more you loved Stalin. In many contexts, this assumption is not unreasonable. But in the context of Moscow in the late 1930s, the nature of the applause changed. Since participants in the conference knew they were being watched, and since they knew the consequences of any hint of disloyalty, they clapped out of terror rather than love. The paper factory director might have been the first to stop not because he was the least loyal but perhaps because he was the most honest, or even simply because his hands hurt the most.

While the clapping test didn’t discover the truth about people, it was efficient in imposing order and forcing people to behave in a certain way. Over time, such methods cultivated servility, hypocrisy, and cynicism. This is what the Soviet information network did to hundreds of millions of people over decades. In quantum mechanics the act of observing subatomic particles changes their behavior; it is the same with the act of observing humans. The more powerful our tools of observation, the greater the potential impact.

The Soviet regime constructed one of the most formidable information networks in history. It gathered and processed enormous amounts of data on its citizens. It also claimed that the infallible theories of Marx, Engels, Lenin, and Stalin granted it a deep understanding of humanity. In fact, the Soviet information network ignored many important aspects of human nature, and it was in complete denial regarding the terrible suffering its policies inflicted on its own citizens. Instead of producing wisdom, it produced order, and instead of revealing the universal truth about humans, it actually created a new type of human—Homo sovieticus.

As defined by the dissident Soviet philosopher and satirist Aleksandr Zinovyev, Homo sovieticus were servile and cynical humans, lacking all initiative or independent thinking, passively obeying even the most ludicrous orders, and indifferent to the results of their actions.[3] The Soviet information network created Homo sovieticus through surveillance, punishments, and rewards. For example, by sending the director of the paper factory to the gulag, the network signaled to the other participants that conformity paid off, whereas being the first to do anything controversial was a bad idea. Though the network failed to discover the truth about humans, it was so good at creating order that it conquered much of the world.

The Dictatorship of the Like

An analogous dynamic may afflict the computer networks of the twenty-first century, which might create new types of humans and new dystopias. A paradigmatic example is the role played by social media algorithms in radicalizing people. Of course, the methods employed by the algorithms have been utterly different from those of the NKVD and involved no direct coercion or violence. But just as the Soviet secret police created the slavish Homo sovieticus through surveillance, rewards, and punishments, so also the Facebook and YouTube algorithms have created internet trolls by rewarding certain base instincts while punishing the better angels of our nature.

As explained briefly in chapter 6, the process of radicalization started when corporations tasked their algorithms with increasing user engagement, not only in Myanmar, but throughout the world. For example, in 2012 users were watching about 100 million hours of videos every day on YouTube. That was not enough for company executives, who set their algorithms an ambitious goal: 1 billion hours a day by 2016.[4] Through trial-and-error experiments on millions of people, the YouTube algorithms discovered the same pattern that Facebook algorithms also learned: outrage drives engagement up, while moderation tends not to. Accordingly, the YouTube algorithms began recommending outrageous conspiracy theories to millions of viewers while ignoring more moderate content. By 2016, users were indeed watching one billion hours every day on YouTube.[5]

YouTubers who were particularly intent on gaining attention noticed that when they posted an outrageous video full of lies, the algorithm rewarded them by recommending the video to numerous users and increasing the YouTubers’ popularity and income. In contrast, when they dialed down the outrage and stuck to the truth, the algorithm tended to ignore them. Within a few months of such reinforcement learning, the algorithm turned many YouTubers into trolls.[6]

The social and political consequences were far-reaching. For example, as the journalist Max Fisher documented in his 2022 book, The Chaos Machine, YouTube algorithms became an important engine for the rise of the Brazilian far right and for turning Jair Bolsonaro from a fringe figure into Brazil’s president.[7] While there were other factors contributing to that political upheaval, it is notable that many of Bolsonaro’s chief supporters and aides had originally been YouTubers who rose to fame and power by algorithmic grace.

A typical example is Carlos Jordy, who in 2017 was a city councilor in the small town of Niterói. The ambitious Jordy gained national attention by creating inflammatory YouTube videos that garnered millions of views. His videos warned Brazilians, for example, against conspiracies by schoolteachers to brainwash children and persecute conservative pupils. In 2018, Jordy won a seat in the Brazilian Chamber of Deputies (the lower house of the Brazilian Congress) as one of Bolsonaro’s most dedicated supporters. In an interview with Fisher, Jordy frankly said, “If social media didn’t exist, I wouldn’t be here [and] Jair Bolsonaro wouldn’t be president.” The latter claim may well be a self-serving exaggeration, but there is no denying that social media played an important part in Bolsonaro’s rise.

Another YouTuber who won a seat in Brazil’s Chamber of Deputies in 2018 was Kim Kataguiri, one of the leaders of the Movimento Brasil Livre (MBL, or Free Brazil Movement). Kataguiri initially used Facebook as his main platform, but his posts were too extreme even for Facebook, which banned some of them for disinformation. So Kataguiri switched over to the more permissive YouTube. In an interview in the MBL headquarters in São Paulo, Kataguiri’s aides and other activists explained to Fisher, “We have something here that we call the dictatorship of the like.” They explained that YouTubers tend to become steadily more extreme, posting untruthful and reckless content “just because something is going to give you views, going to give engagement…. Once you open that door there’s no going back, because you always have to go further…. Flat Earthers, anti-vaxxers, conspiracy theories in politics. It’s the same phenomenon. You see it everywhere.”[8]

Of course, the YouTube algorithms were not themselves responsible for inventing lies and conspiracy theories or for creating extremist content. At least in 2017–18, those things were done by humans. The algorithms were responsible, however, for incentivizing humans to behave in such ways and for pushing the resulting content in order to maximize user engagement. Fisher documented numerous far-right activists who first became interested in extremist politics after watching videos that the YouTube algorithm auto-played for them. One far-right activist in Niterói told Fisher that he was never interested in politics of any kind, until one day the YouTube algorithm auto-played for him a video on politics by Kataguiri. “Before that,” he explained, “I didn’t have an ideological, political background.” He credited the algorithm with providing “my political education.” Talking about how other people joined the movement, he said, “It was like that with everyone…. Most of the people here came from YouTube and social media.”[9]

Blame the Humans

We have reached a turning point in history in which major historical processes are partly caused by the decisions of nonhuman intelligence. It is this that makes the fallibility of the computer network so dangerous. Computer errors become potentially catastrophic only when computers become historical agents. We have already made this argument in chapter 6, when we briefly examined Facebook’s role in instigating the anti-Rohingya ethnic-cleansing campaign. As noted in that context, however, many people—including some of the managers and engineers of Facebook, YouTube, and the other tech giants—object to this argument. Since it is one of the central points of the entire book, it is best to delve deeper into the matter and examine more carefully the objections to it.

The people who manage Facebook, YouTube, TikTok, and other platforms routinely try to excuse themselves by shifting the blame from their algorithms to “human nature.” They argue that it is human nature that produces all the hate and lies on the platforms. The tech giants then claim that due to their commitment to free-speech values, they hesitate to censor the expression of genuine human emotions. For example, in 2019 the CEO of YouTube, Susan Wojcicki, explained, “The way that we think about it is: ‘Is this content violating one of our policies? Has it violated anything in terms of hate, harassment?’ If it has, we remove that content. We keep tightening and tightening the policies. We also get criticism, just to be clear, [about] where do you draw the lines of free speech and, if you draw it too tightly, are you removing voices of society that should be heard? We’re trying to strike a balance of enabling a broad set of voices, but also making sure that those voices play by a set of rules that are healthy conversations for society.”[10]

A Facebook spokesperson similarly said in October 2021, “Like every platform, we are constantly making difficult decisions between free expressions and harmful speech, security and other issues…. But drawing these societal lines is always better left to elected leaders.”[11] In this way, the tech giants constantly shift the discussion to their supposed role as moderators of human-produced content and ignore the active role their algorithms play in cultivating certain human emotions and discouraging others. Are they really blind to it?

Surely not. Back in 2016, an internal Facebook report discovered that “64 percent of all extremist group joins are due to our recommendation tools…. Our recommendation systems grow the problem.”[12] A secret internal Facebook memo from August 2019, leaked by the whistleblower Frances Haugen, stated, “We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook and [its] family of apps are affecting societies around the world. We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.”[13]

Another leaked document from December 2019 noted, “Unlike communication with close friends and family, virality is something new we have introduced to many ecosystems…and it occurs because we intentionally encourage it for business reasons.” The document pointed out that “ranking content about higher stakes topics like health or politics based on engagement leads to perverse incentives and integrity issues.” Perhaps most damningly, it revealed, “Our ranking systems have specific separate predictions for not just what you would engage with, but what we think you may pass along so that others may engage with. Unfortunately, research has shown how outrage and misinformation are more likely to be viral.” This leaked document made one crucial recommendation: since Facebook cannot remove everything harmful from a platform used by many millions, it should at least “stop magnifying harmful content by giving it unnatural distribution.”[14]

Like the Soviet leaders in Moscow, the tech companies were not uncovering some truth about humans; they were imposing on us a perverse new order. Humans are very complex beings, and benign social orders seek ways to cultivate our virtues while curtailing our negative tendencies. But social media algorithms see us, simply, as an attention mine. The algorithms reduced the multifaceted range of human emotions—hate, love, outrage, joy, confusion—into a single catchall category: engagement. In Myanmar in 2016, in Brazil in 2018, and in numerous other countries, the algorithms scored videos, posts, and all other content solely according to how many minutes people engaged with the content and how many times they shared it with others. An hour of lies or hatred was ranked higher than ten minutes of truth or compassion—or an hour of sleep. The fact that lies and hate tend to be psychologically and socially destructive, whereas truth, compassion, and sleep are essential for human welfare, was completely lost on the algorithms. Based on this very narrow understanding of humanity, the algorithms helped to create a new social system that encouraged our basest instincts while discouraging us from realizing the full spectrum of the human potential.

As the harmful effects were becoming manifest, the tech giants were repeatedly warned about what was happening, but they failed to step in because of their faith in the naive view of information. As the platforms were overrun by falsehoods and outrage, executives hoped that if more people were enabled to express themselves more freely, truth would eventually prevail. This, however, did not happen. As we have seen again and again throughout history, in a completely free information fight, truth tends to lose. To tilt the balance in favor of truth, networks must develop and maintain strong self-correcting mechanisms that reward truth telling. These self-correcting mechanisms are costly, but if you want to get the truth, you must invest in them.

Silicon Valley thought it was exempt from this historical rule. Social media platforms have been singularly lacking in self-correcting mechanisms. In 2014, Facebook employed just a single Burmese-speaking content moderator to monitor activities in the whole of Myanmar.[15] When observers in Myanmar began warning Facebook that it needed to invest more in moderating content, Facebook ignored them. For example, Pwint Htun, a Burmese American engineer and telecom executive who grew up in rural Myanmar, wrote to Facebook executives repeatedly about the danger. In an email from July 5, 2014—two years before the ethnic-cleansing campaign began—she issued a prophetic warning: “Tragically, FB in Burma is used like radio in Rwanda during the dark days of genocide.” Facebook took no action.

Even after the attacks on the Rohingya intensified and Facebook faced a storm of criticism, it still refused to hire people with expert local knowledge to curate content. Thus, when informed that hate-mongers in Myanmar were using the Burmese word kalar as a racist slur for the Rohingya, Facebook reacted in April 2017 by banning from the platform any posts that used the word. This revealed Facebook’s utter lack of knowledge about local conditions and the Burmese language. In Burmese, kalar is a racist slur only in specific contexts. In other contexts, it is an entirely innocent term. The Burmese word for chair is kalar htaing, and the word for chickpea is kalar pae. As Pwint Htun wrote to Facebook in June 2017, banning the term kalar from the platform is like banning the letters “hell” from “hello.”[16] Facebook continued to ignore the need for local expertise. By April 2018, the number of Burmese speakers Facebook employed to moderate content for its eighteen million users in Myanmar was a grand total of five.[17]

Instead of investing in self-correcting mechanisms that would reward truth telling, the social media giants actually developed unprecedented error-enhancing mechanisms that rewarded lies and fictions. One such error-enhancing mechanism was the Instant Articles program that Facebook rolled out in Myanmar in 2016. Wishing to drive up engagement, Facebook paid news channels according to the amount of user engagement they generated, measured in clicks and views. No importance whatsoever was given to the truthfulness of the “news.” A 2021 study found that in 2015, before the program was launched, six of the ten top Facebook websites in Myanmar belonged to “legitimate media.” By 2017, under the impact of Instant Articles, “legitimate media” was down to just two websites out of the top ten. By 2018, all top ten websites were “fake news and clickbait websites.”

The study concluded that because of the launch of Instant Articles “clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of US dollars a month in ad revenue, or ten times the average monthly salary—paid to them directly by Facebook.” Since Facebook was by far the most important source of online news in Myanmar, this had enormous impact on the overall media landscape of the country: “In a country where Facebook is synonymous with the Internet, the low-grade content overwhelmed other information sources.”[18] Facebook and other social media platforms didn’t consciously set out to flood the world with fake news and outrage. But by telling their algorithms to maximize user engagement, this is exactly what they perpetrated.

Reflecting on the Myanmar tragedy, Pwint Htun wrote to me in July 2023, “I naively used to believe that social media could elevate human consciousness and spread the perspective of common humanity through interconnected pre-frontal cortexes in billions of human beings. What I realize is that the social media companies are not incentivized to interconnect pre-frontal cortexes. Social media companies are incentivized to create interconnected limbic systems—which is much more dangerous for humanity.”

The Alignment Problem

I don’t want to imply that the spread of fake news and conspiracy theories is the main problem with all past, present, and future computer networks. YouTube, Facebook, and other social media platforms claim that since 2018 they have been tweaking their algorithms to make them more socially responsible. Whether this is true or not is hard to say, especially because there is no universally accepted definition of “social responsibility.”[19] But the specific problem of polluting the information sphere in pursuit of user engagement can certainly be solved. When the tech giants set their hearts on designing better algorithms, they can usually do it. Around 2005, the profusion of spam threatened to make the use of email impossible. Powerful algorithms were developed to address the problem. By 2015, Google claimed its Gmail algorithm had a 99.9 percent success rate in blocking genuine spam, while only 1 percent of legitimate emails were erroneously labeled as such.[20]

We also shouldn’t discount the huge social benefits that YouTube, Facebook, and other social media platforms have brought. To be clear, most YouTube videos and Facebook posts have not been fake news and genocidal incitements. Social media has been more than helpful in connecting people, giving voice to previously disenfranchised groups, and organizing valuable new movements and communities.[21] It has also encouraged an unprecedented wave of human creativity. In the days when television was the dominant medium, viewers were often denigrated as couch potatoes: passive consumers of content that a few gifted artists produced. Facebook, YouTube, and other social media platforms inspired the couch potatoes to get up and start creating. Most of the content on social media—at least until the rise of powerful generative AI—has been produced by the users themselves, and their cats and dogs, rather than by a limited professional class.

I, too, routinely use YouTube and Facebook to connect with people, and I am grateful to social media for connecting me with my husband, whom I met on one of the first LGBTQ social media platforms back in 2002. Social media has done wonders for dispersed minorities like LGBTQ people. Few gay boys are born to a gay family in a gay neighborhood, and in the days before the internet simply finding one another posed a big challenge, unless you moved to one of the handful of tolerant metropolises that had a gay subculture. Growing up in a small homophobic town in Israel in the 1980s and early 1990s, I didn’t know a single openly gay man. Social media in the late 1990s and early 2000s provided an unprecedented and almost magical way for members of the dispersed LGBTQ community to find one another and connect.

And yet I have devoted so much attention to the social media “user engagement” debacle because it exemplifies a much bigger problem afflicting computers—the alignment problem. When computers are given a specific goal, such as to increase YouTube traffic to one billion hours a day, they use all their power and ingenuity to achieve this goal. Since they operate very differently than humans, they are likely to use methods their human overlords didn’t anticipate. This can result in dangerous unforeseen consequences that are not aligned with the original human goals. Even if recommendation algorithms stop encouraging hate, other instances of the alignment problem might result in larger catastrophes than the anti-Rohingya campaign. The more powerful and independent computers become, the bigger the danger.

Of course, the alignment problem is neither new nor unique to algorithms. It bedeviled humanity for thousands of years before the invention of computers. It has been, for example, the foundational problem of modern military thinking, enshrined in Carl von Clausewitz’s theory of war. Clausewitz was a Prussian general who fought during the Napoleonic Wars. Following Napoleon’s final defeat in 1815, Clausewitz became the director of the Prussian War College. He also began formalizing a grand theory of war. After he died of cholera in 1831, his wife, Marie, edited his unfinished manuscript and published On War in several parts between 1832 and 1834.[22]

On War created a rational model for understanding war, and it is still the dominant military theory today. Its most important maxim is that “war is the continuation of policy by other means.”[23] This implies that war is not an emotional outbreak, a heroic adventure, or a divine punishment. War is not even a military phenomenon. Rather, war is a political tool. According to Clausewitz, military actions are utterly irrational unless they are aligned with some overarching political goal.

Suppose Mexico contemplates whether to invade and conquer its small neighbor Belize. And suppose a detailed military analysis concludes that if the Mexican army invades, it will achieve a quick and decisive military victory, crushing the small Belize army and conquering the capital, Belmopan, in three days. According to Clausewitz, that does not constitute a rational reason for Mexico to invade. The mere ability to secure military victory is meaningless. The key question the Mexican government should ask itself is, what political goals will the military success achieve?

History is full of decisive military victories that led to political disasters. For Clausewitz, the most obvious example was close to home: Napoleon’s career. Nobody disputes the military genius of Napoleon, who was a master of both tactics and strategy. But while his string of victories brought Napoleon temporary control of vast territories, they failed to secure lasting political achievements. His military conquests merely drove most European powers to unite against him, and his empire collapsed a decade after he crowned himself emperor.

Indeed, in the long term, Napoleon’s victories ensured the permanent decline of France. For centuries, France was Europe’s leading geopolitical power, largely because neither Italy nor Germany existed as a unified political entity. Italy was a hodgepodge of dozens of warring city-states, feudal principalities, and church territories. Germany was an even more bizarre jigsaw puzzle divided into more than a thousand independent polities, loosely held together under the theoretical suzerainty of the Holy Roman Empire of the German Nation.[24] In 1789, the prospect of a German or Italian invasion of France was simply unthinkable, because there was no such thing as a German or Italian army.

As Napoleon expanded his empire into central Europe and the Italian Peninsula, he liquidated the Holy Roman Empire in 1806, amalgamated many of the smaller German and Italian principalities into larger territorial blocs, created a German Confederation of the Rhine and a Kingdom of Italy, and sought to unify these territories under his dynastic rule. His victorious armies also spread the ideals of modern nationalism and popular sovereignty into the German and Italian lands. Napoleon thought all this would make his empire stronger. In fact, by breaking up traditional structures and giving Germans and Italians a taste of national consolidation, Napoleon inadvertently laid the foundations for the ultimate unification of Germany (1866–71) and of Italy (1848–71). These twin processes of national unification were sealed by the German victory over France in the Franco-Prussian War of 1870–71. Faced with two newly unified and fervently nationalistic powers on its eastern border, France never regained its position of dominance.

A more recent example of military victory leading to political defeat was provided by the American invasion of Iraq in 2003. The Americans won every major military engagement, but failed to achieve any of their long-term political aims. Their military victory didn’t establish a friendly regime in Iraq, or a favorable geopolitical order in the Middle East. The real winner of the war was Iran. American military victory turned Iraq from Iran’s traditional foe into Iran’s vassal, thereby greatly weakening the American position in the Middle East while making Iran the regional hegemon.[25]

Both Napoleon and George W. Bush fell victim to the alignment problem. Their short-term military goals were misaligned with their countries’ long-term geopolitical goals. We can understand the whole of Clausewitz’s On War as a warning that “maximizing victory” is as shortsighted a goal as “maximizing user engagement.” According to the Clausewitzian model, only once the political goal is clear can armies decide on a military strategy that will hopefully achieve it. From the overall strategy, lower-ranking officers can then derive tactical goals. The model constructs a clear hierarchy between long-term policy, medium-term strategy, and short-term tactics. Tactics are considered rational only if they are aligned with some strategic goal, and strategy is considered rational only if it is aligned with some political goal. Even local tactical decisions of a lowly company commander must serve the war’s ultimate political goal.

Suppose that during the American occupation of Iraq an American company comes under intense fire from a nearby mosque. The company commander has several different tactical decisions to choose from. He might order the company to retreat. He might order the company to storm the mosque. He might order one of his supporting tanks to blow up the mosque. What should the company commander do?

From a purely military perspective, it might seem best for the commander to order his tank to blow up the mosque. This would capitalize on the tactical advantage that the Americans enjoyed in terms of firepower, avoid risking the lives of his own soldiers, and achieve a decisive tactical victory. However, from a political perspective, this might be the worst decision the commander could make. Footage of an American tank destroying a mosque would galvanize Iraqi public opinion against the Americans and create outrage throughout the wider Muslim world. Storming the mosque might also be a political mistake, because it too could create resentment among Iraqis, while the cost in American lives could weaken support for the war among American voters. Given the political war aims of the United States, retreating and conceding tactical defeat might well be the most rational decision.

For Clausewitz, then, rationality means alignment. Pursuing tactical or strategic victories that are misaligned with political goals is irrational. The problem is that the bureaucratic nature of armies makes them highly susceptible to such irrationality. As discussed in chapter 3, by dividing reality into separate drawers, bureaucracy encourages the pursuit of narrow goals even when this harms the greater good. Bureaucrats tasked with accomplishing a narrow mission may be ignorant of the wider impact of their actions, and it has always been tricky to ensure that their actions remain aligned with the greater good of society. When armies operate along bureaucratic lines—as all modern armies do—it creates a huge gap between a captain commanding a company in the field and the president formulating long-term policy in a distant office. The captain is prone to make decisions that seem reasonable on the ground but that actually undermine the war’s ultimate goal.

We see, then, that the alignment problem has long predated the computer revolution and that the difficulties encountered by builders of present-day information empires are not unlike those that bedeviled previous would-be conquerors. Nevertheless, computers do change the nature of the alignment problem in important ways. No matter how difficult it used to be to ensure that human bureaucrats and soldiers remain aligned with society’s long-term goals, it is going to be even harder to ensure the alignment of algorithmic bureaucrats and autonomous weapon systems.

The Paper-Clip Napoleon

One reason why the alignment problem is particularly dangerous in the context of the computer network is that this network is likely to become far more powerful than any previous human bureaucracy. A misalignment in the goals of superintelligent computers might result in a catastrophe of unprecedented magnitude. In his 2014 book, Superintelligence, the philosopher Nick Bostrom illustrated the danger using a thought experiment, which is reminiscent of Goethe’s “Sorcerer’s Apprentice.” Bostrom asks us to imagine that a paper-clip factory buys a superintelligent computer and that the factory’s human manager gives the computer a seemingly simple task: produce as many paper clips as possible. In pursuit of this goal, the paper-clip computer conquers the whole of planet Earth, kills all the humans, sends expeditions to take over additional planets, and uses the enormous resources it acquires to fill the entire galaxy with paper-clip factories.

The point of the thought experiment is that the computer did exactly what it was told (just like the enchanted broomstick in Goethe’s poem). Realizing that it needed electricity, steel, land, and other resources to build more factories and produce more paper clips, and realizing that humans are unlikely to give up these resources, the superintelligent computer eliminated all humans in its single-minded pursuit of its given goal.[26] Bostrom’s point was that the problem with computers isn’t that they are particularly evil but that they are particularly powerful. And the more powerful the computer, the more careful we need to be about defining its goal in a way that precisely aligns with our ultimate goals. If we define a misaligned goal to a pocket calculator, the consequences are trivial. But if we define a misaligned goal to a superintelligent machine, the consequences could be dystopian.

The paper-clip thought experiment may sound outlandish and utterly disconnected from reality. But if Silicon Valley managers had paid attention when Bostrom published it in 2014, perhaps they would have been more careful before instructing their algorithms to “maximize user engagement.” The Facebook and YouTube algorithms behaved exactly like Bostrom’s imaginary algorithm. When told to maximize paper-clip production, the algorithm sought to convert the entire physical universe into paper clips, even if it meant destroying human civilization. When told to maximize user engagement, the Facebook and YouTube algorithms sought to convert the entire social universe into user engagement, even if it meant doing harm to the social fabric of Myanmar, Brazil, and many other countries.

Bostrom’s thought experiment highlights a second reason why the alignment problem is more urgent in the case of computers. Because they are inorganic entities, they are likely to adopt strategies that would never occur to any human and that we are therefore ill-equipped to foresee and forestall. Here’s one example: In 2016, Dario Amodei was working on a project called Universe, trying to develop a general-purpose AI that could play hundreds of different computer games. The AI competed well in various car races, so Amodei next tried it on a boat race. Inexplicably, the AI steered its boat right into a harbor and then sailed in endless circles in and out of the harbor.

It took Amodei considerable time to understand what went wrong. The problem occurred because initially Amodei wasn’t sure how to tell the AI that its goal was to “win the race.” “Winning” is an unclear concept to an algorithm. Translating “win the race” into computer language would have required Amodei to formalize complex concepts like track position and placement among the other boats in the race. So instead, Amodei took the easy way and told the boat to maximize its score. He assumed that the score was a good proxy for winning the race. After all, it worked with the car races.

But the boat race had a peculiar feature, absent from the car races, that allowed the ingenious AI to find a loophole in the game’s rules. The game rewarded players with a lot of points for getting ahead of other boats—as in the car races—but it also rewarded them with a few points whenever they replenished their power by docking into a harbor. The AI discovered that if instead of trying to outsail the other boats, it simply went in circles in and out of the harbor, it could accumulate more points far faster. Apparently, none of the game’s human developers—nor Dario Amodei—had noticed this loophole. The AI was doing exactly what the game was rewarding it to do—even though it is not what the humans were hoping for. That’s the essence of the alignment problem: rewarding A while hoping for B.[27] If we want computers to maximize social benefits, it’s a bad idea to reward them for maximizing user engagement.

A third reason to worry about the alignment problem of computers is that because they are so different from us, when we make the mistake of giving them a misaligned goal, they are less likely to notice it or request clarification. If the boat-race AI had been a human gamer, it would have realized that the loophole it found in the game’s rules probably doesn’t really count as “winning.” If the paper-clip AI had been a human bureaucrat, it would have realized that destroying humanity in order to produce paper clips is probably not what was intended. But since computers aren’t humans, we cannot rely on them to notice and flag possible misalignments. In the 2010s the YouTube and Facebook management teams were bombarded with warnings from their human employees—as well as from outside observers—about the harm being done by the algorithms, but the algorithms themselves never raised the alarm.[28]

As we give algorithms greater and greater power over health care, education, law enforcement, and numerous other fields, the alignment problem will loom ever larger. If we don’t find ways to solve it, the consequences will be far worse than algorithms racking up points by sailing boats in circles.

The Corsican Connection

How to solve the alignment problem? In theory, when humans create a computer network, they must define for it an ultimate goal, which the computers are never allowed to change or ignore. Then, even if computers become so powerful that we lose control over them, we can rest assured that their immense power will benefit rather than harm us. Unless, of course, it turned out that we defined a harmful or vague goal. And there’s the rub. In the case of human networks, we rely on self-correcting mechanisms to periodically review and revise our goals, so setting the wrong goal is not the end of the world. But since the computer network might escape our control, if we set it the wrong goal, we might discover our mistake when we are no longer able to correct it. Some might hope that through a careful process of deliberation, we might be able to define in advance the right goals for the computer network. This, however, is a very dangerous delusion.

To understand why it is impossible to agree in advance on the ultimate goals of the computer network, let’s revisit Clausewitz’s war theory. There is one fatal flaw in the way he equates rationality with alignment. While Clausewitzian theory demands that all actions be aligned with the ultimate goal, it offers no rational way to define such a goal. Consider Napoleon’s life and military career. What should have been his ultimate goal? Given the prevailing cultural atmosphere of France circa 1800, we can think of several alternatives for “ultimate goal” that might have occurred to Napoleon:

Potential goal number 1: Making France the dominant power in Europe, secure against any future attack by Britain, the Habsburg Empire, Russia, a unified Germany, or a unified Italy.

Potential goal number 2: Creating a new multiethnic empire ruled by Napoleon’s family, which would include not only France but also many additional territories both in Europe and overseas.

Potential goal number 3: Achieving everlasting glory for himself personally, so that even centuries after his death billions of people will know the name Napoleon and admire his genius.

Potential goal number 4: Securing the redemption of his everlasting soul, and gaining entry to heaven after his death.

Potential goal number 5: Spreading the universal ideals of the French Revolution, and helping to protect freedom, equality, and human rights throughout Europe and the world.

Many self-styled rationalists tend to argue that Napoleon should have made it his life’s mission to achieve the first goal—securing French domination in Europe. But why? Remember that for Clausewitz rationality means alignment. A tactical maneuver is rational if, and only if, it is aligned with some higher strategic goal, which should in turn be aligned with an even higher political goal. But where does this chain of goals ultimately start? How can we determine the ultimate goal that justifies all the strategic subgoals and tactical steps derived from it? Such an ultimate goal by definition cannot be aligned with anything higher than itself, because there is nothing higher. What then makes it rational to place France at the top of the goal hierarchy, rather than Napoleon’s family, Napoleon’s fame, Napoleon’s soul, or universal human rights? Clausewitz provides no answer.

One might argue that goal number 4—securing the redemption of his everlasting soul—cannot be a serious candidate for an ultimate rational goal, because it is based on a belief in mythology. But the same argument can be leveled at all the other goals. Everlasting souls are an intersubjective invention that exists only in people’s minds, and exactly the same is true of nations and human rights. Why should Napoleon care about the mythical France any more than about his mythical soul?

Indeed, for most of his youth, Napoleon didn’t even consider himself French. He was born Napoleone di Buonaparte on Corsica, to a family of Italian emigrants. For five hundred years Corsica was ruled by the Italian city-state of Genoa, where many of Napoleone’s ancestors lived. It was only in 1768—a year before Napoleone’s birth—that Genoa ceded the island to France. Corsican nationalists resisted being handed over to France and rose in rebellion. Only after their defeat in 1770 did Corsica formally become a French province. Many Corsicans continued to resent the French takeover, but the di Buonaparte family swore allegiance to the French king and sent Napoleone to military school in mainland France.[29]

At school, Napoleone had to endure a good deal of hazing from his classmates for his Corsican nationalism and his poor command of the French language.[30] His mother tongues were Corsican and Italian, and although he gradually became fluent in French, he retained throughout his life a Corsican accent and an inability to spell French correctly.[31] Napoleone eventually enlisted in the French army, but when the Revolution broke out in 1789, he went back to Corsica, hoping the revolution would provide an opportunity for his beloved island to achieve greater autonomy. Only after he fell out with the leader of the Corsican independence movement—Pasquale Paoli—did Napoleone abandon the Corsican cause in May 1793. He returned to the mainland, where he decided to build his future.[32] It was at this stage that Napoleone di Buonaparte turned into Napoléon Bonaparte (he continued to use the Italian version of his name until 1796).[33]

Why then was it rational for Napoleon to devote his military career to making France the dominant power in Europe? Was it perhaps more rational for him to stay in Corsica, patch up his personal disagreements with Paoli, and devote himself to liberating his native island from its French conquerors? And maybe Napoleon should in fact have made it his life’s mission to unite Italy—the land of his ancestors?

Clausewitz offers no method to answer these questions rationally. If our only rule of thumb is that “every action must be aligned with some higher goal,” by definition there is no rational way to define that ultimate goal. How then can we provide a computer network with an ultimate goal it must never ignore or subvert? Tech executives and engineers who rush to develop AI are making a huge mistake if they think there is a rational way to tell AI what its ultimate goal should be. They should learn from the bitter experiences of generations of philosophers who tried to define ultimate goals and failed.

The Kantian Nazi

For millennia, philosophers have been looking for a definition of an ultimate goal that will not depend on an alignment to some higher goal. They have repeatedly been drawn to two potential solutions, known in philosophical jargon as deontology and utilitarianism. Deontologists (from the Greek word deon, meaning “duty”) believe that there are some universal moral duties, or moral rules, that apply to everyone. These rules do not rely on alignment to a higher goal, but rather on their intrinsic goodness. If such rules indeed exist, and if we can find a way to program them into computers, then we can make sure the computer network will be a force for good.

But what exactly does “intrinsic goodness” mean? The most famous attempt to define an intrinsically good rule was made by Immanuel Kant, a contemporary of Clausewitz and Napoleon. Kant argued that an intrinsically good rule is any rule that I would like to make universal. According to this view, a person about to murder someone should stop and go through the following thought process: “I am now going to murder a human. Would I like to establish a universal rule saying that it is okay to murder humans? If such a universal rule is established, then someone might murder me. So there shouldn’t be a universal rule allowing murder. It follows that I too shouldn’t murder.” In simpler language, Kant reformulated the old Golden Rule: “Do unto others what you would have them do unto you” (Matthew 7:12).

This sounds like a simple and obvious idea: each of us should behave in a way we want everyone to behave. But ideas that sound good in the ethereal realm of philosophy often have trouble immigrating to the harsh land of history. The key question historians would ask Kant is, when you talk about universal rules, how exactly do you define “universal”? Under actual historical circumstances, when a person is about to commit murder, the first step they often take is to exclude the victim from the universal community of humanity.[34] This, for example, is what anti-Rohingya extremists like Wirathu did. As a Buddhist monk, Wirathu was certainly against murdering humans. But he didn’t think this universal rule applied to killing Rohingya, who were seen as subhuman. In posts and interviews, he repeatedly compared them to beasts, snakes, mad dogs, wolves, jackals, and other dangerous animals.[35] On October 30, 2017, at the height of the anti-Rohingya violence, another, more senior Buddhist monk preached a sermon to military officers in which he justified violence against the Rohingya by telling the officers that non-Buddhists were “not fully human.”[36]

As a thought experiment, imagine a meeting between Immanuel Kant and Adolf Eichmann—who, by the way, considered himself a Kantian.[37] As Eichmann signs an order sending another trainload of Jews to Auschwitz, Kant tells him, “You are about to murder thousands of humans. Would you like to establish a universal rule saying it is okay to murder humans? If you do that, you and your family might also be murdered.” Eichmann replies, “No, I am not about to murder thousands of humans. I am about to murder thousands of Jews. If you ask me whether I would like to establish a universal rule saying it is okay to murder Jews, then I am all for it. As for myself and my family, there is no risk that this universal rule would lead to us being murdered. We aren’t Jews.”

One potential Kantian reply to Eichmann is that when we define entities, we must always use the most universal definition applicable. If an entity can be defined as either “a Jew” or “a human,” we should use the more universal term “human.” However, the whole point of Nazi ideology was to deny the humanity of Jews. In addition, note that Jews are not just humans. They are also animals, and they are also organisms. Since animals and organisms are obviously more universal categories than “human,” if you follow the Kantian argument to its logical conclusion, it might push us to adopt an extreme vegan position. Since we are organisms, does it mean we should object to the killing of any organism, down even to tomatoes or amoebas?

In history, many if not most conflicts concern the definition of identities. Everybody accepts that murder is wrong, but thinks that only killing members of the in-group qualifies as “murder,” whereas killing someone from an out-group is not. But the in-groups and out-groups are intersubjective entities, whose definition usually depends on some mythology. Deontologists who pursue universal rational rules often end up the captives of local myths.

This problem with deontology is especially critical if we try to dictate universal deontologist rules not to humans but to computers. Computers aren’t even organic. So if they follow a rule of “Do unto others what you would have them do unto you,” why should they be concerned about killing organisms like humans? A Kantian computer that doesn’t want to be killed has no reason to object to a universal rule saying “It is okay to kill organisms”; such a rule does not endanger the nonorganic computer.

Alternatively, being inorganic entities, computers may have no qualms about dying. As far as we can tell, death is an organic phenomenon and may be inapplicable to inorganic entities. When ancient Assyrians talked about “killing” documents, that was just a metaphor. If computers are more like documents than like organisms, and don’t care about “being killed,” would we like a Kantian computer to conclude that killing humans is therefore fine?

Is there a way to define whom computers should care about, without getting bogged down by some intersubjective myth? The most obvious suggestion is to tell computers that they must care about any entity capable of suffering. While suffering is often caused by belief in local intersubjective myths, suffering itself is nonetheless a universal reality. Therefore, using the capacity to suffer in order to define the critical in-group grounds morality in an objective and universal reality. A self-driving car should avoid killing all humans—whether Buddhist or Muslim, French or Italian—and should also avoid killing dogs and cats, and any sentient robots that might one day exist. We may even refine this rule, instructing the car to care about different beings in direct proportion to their capacity to suffer. If the car has to choose between killing a human and killing a cat, it should drive over the cat, because presumably the cat has a lesser capacity to suffer. But if we go in that direction, we inadvertently desert the deontologist camp and find ourselves in the camp of their rivals—the utilitarians.

The Calculus of Suffering

Whereas deontologists struggle to find universal rules that are intrinsically good, utilitarians judge actions by their impact on suffering and happiness. The English philosopher Jeremy Bentham—another contemporary of Napoleon, Clausewitz, and Kant—said that the only rational ultimate goal is to minimize suffering in the world and maximize happiness. If our main fear about computer networks is that their misaligned goals might inflict terrible suffering on humans and perhaps on other sentient beings, then the utilitarian solution seems both obvious and attractive. When creating the computer network, we just need to instruct it to minimize suffering and maximize happiness. If Facebook had told its algorithms “maximize happiness” instead of “maximize user engagement,” all would allegedly have been well. It is worth noting that this utilitarian approach is indeed popular in Silicon Valley, championed in particular by the effective altruism movement.[38]

Unfortunately, as with the deontologist solution, what sounds simple in the theoretical realm of philosophy becomes fiendishly complex in the practical land of history. The problem for utilitarians is that we don’t possess a calculus of suffering. We don’t know how many “suffering points” or “happiness points” to assign to particular events, so in complex historical situations it is extremely difficult to calculate whether a given action increases or decreases the overall amount of suffering in the world.

Utilitarianism is at its best in situations when the scales of suffering are very clearly tipped in one direction. When confronted by Eichmann, utilitarians don’t need to get into any complicated debates about identity. They just need to point out that the Holocaust caused immense suffering to the Jews, without providing equivalent benefits to anyone else, including the Germans. There was no compelling military or economic need for the Germans to murder millions of Jews. The utilitarian case against the Holocaust is overwhelming.

Utilitarians also have a field day when dealing with “victimless crimes” like homosexuality, in which all the suffering is on one side only. For centuries, the persecution of gay people caused them immense suffering, but it was nevertheless justified by various prejudices that were erroneously presented as deontological universal rules. Kant, for example, condemned homosexuality on the grounds that it is “contrary to natural instinct and to animal nature” and that it therefore degrades a person “below the level of the animals.” Kant further fulminated that because such acts are contrary to nature, they “make man unworthy of his humanity. He no longer deserves to be a person.”[39] Kant, in fact, repackaged a Christian prejudice as a supposedly universal deontological rule, without providing empirical proof that homosexuality is indeed contrary to nature. In light of the above discussion of dehumanization as a prelude to massacre, it is also noteworthy how Kant dehumanized gay people. The view that homosexuality is contrary to nature and deprives people of their humanity paved the way for Nazis like Eichmann to justify murdering homosexuals in concentration camps. Since homosexuals were allegedly below the level of animals, the Kantian rule against murdering humans didn’t apply to them.[40]

Utilitarians find it easy to dismiss Kant’s sexual theories, and Bentham indeed was one of the first modern European thinkers who favored the decriminalization of homosexuality.[41] Utilitarians argue that criminalizing homosexuality in the name of some dubious universal rule causes tremendous suffering to millions of people, without offering any substantial benefits to others. When two men form a loving relationship, this makes them happy, without making anyone else miserable. Why then forbid it? This type of utilitarian logic also led to many other modern reforms, such as the ban on torture and the introduction of some legal protections for animals.

But in historical situations when the scales of suffering are more evenly matched, utilitarianism falters. In the early days of the COVID-19 pandemic, governments all over the world adopted strict policies of social isolation and lockdown. This probably saved the lives of several million people.[42] It also made hundreds of millions miserable for months. Moreover, it might have indirectly caused numerous deaths, for example by increasing the incidence of murderous domestic violence,[43] or by making it more difficult for people to diagnose and treat other dangerous illnesses, like cancer.[44] Can anyone calculate the total impact of the lockdown policies and determine whether they increased or decreased the suffering in the world?

This may sound like a perfect task for a relentless computer network. But how would the computer network decide how many “misery points” to allocate to being locked down with three kids in a two-bedroom apartment for a month? Is that 60 misery points or 600? And how many points to allot to a cancer patient who died because she missed her chemotherapy treatments? Is that 60,000 misery points or 600,000? And what if she would have died of cancer anyway, and the chemo would merely have extended her life by five agonizing months? Should the computers value five months of living with extreme pain as a net gain or a net loss for the sum total of suffering in the world?

And how would the computer network evaluate the suffering caused by less tangible things, such as the knowledge of our own mortality? If a religious myth promises us that we will never really die, because after death our eternal soul will go to heaven, does that make us truly happy or just delusional? Is death the deep cause of our misery, or does our misery stem from our attempts to deny death? If someone loses their religious faith and comes to terms with their mortality, should the computer network see this as a net loss or a net gain?

What about even more complicated historical events like the American invasion of Iraq? The Americans were well aware that their invasion would cause tremendous suffering for millions of people. But in the long run, they argued, the benefits of bringing freedom and democracy to Iraq would outweigh the costs. Can the computer network calculate whether this argument was sound? Even if it was theoretically plausible, in practice the Americans failed to establish a stable democracy in Iraq. Does that mean that their attempt was wrong in the first place?

Just as deontologists trying to answer the question of identity are pushed to adopt utilitarian ideas, so utilitarians stymied by the lack of a suffering calculus often end up adopting a deontologist position. They uphold general rules like “Avoid wars of aggression” or “Protect human rights,” even though they cannot show that following these rules always reduces the sum total of suffering in the world. History provides them only with a vague impression that following these rules tends to reduce suffering. And when some of these general rules clash—for example, when contemplating launching a war of aggression in order to protect human rights—utilitarianism doesn’t offer much practical help. Not even the most powerful computer network can perform the necessary calculations.

Accordingly, while utilitarianism promises a rational—and even mathematical—way to align every action with “the ultimate good,” in practice it may well produce just another mythology. Communist true believers confronted by the horrors of Stalinism often replied that the happiness that future generations would experience under “real socialism” would redeem any short-term misery in the gulags. Libertarians, when asked about the immediate social harms of unrestricted free speech or the total abolition of taxes, express a similar faith that future benefits will outweigh any short-term damage. The danger of utilitarianism is that if you have a strong enough belief in a future utopia, it can become an open license to inflict terrible suffering in the present. Indeed, this is a trick traditional religions discovered thousands of years ago. The crimes of this world could too easily be excused by the promises of future salvation.

Computer Mythology

How then did bureaucratic systems throughout history set their ultimate goals? They relied on mythology to do it for them. No matter how rational the officials, engineers, tax collectors, and accountants were, they were ultimately in the service of this or that mythmaker. To paraphrase John Maynard Keynes, practical people, who believe themselves to be quite exempt from any religious influence, are usually the slaves of some mythmaker. Even nuclear physicists have found themselves obeying the commands of Shiite ayatollahs and communist apparatchiks.

The alignment problem turns out to be, at heart, a problem of mythology. Nazi administrators could have been committed deontologists or utilitarians, but they would still have murdered millions so long as they understood the world in terms of a racist mythology. If you start with the mythological belief that Jews are demonic monsters bent on destroying humanity, then both deontologists and utilitarians can find many logical arguments why the Jews should be killed.

An analogous problem might well afflict computers. Of course, they cannot “believe” in any mythology, because they are nonconscious entities that don’t believe in anything. As long as they lack subjectivity, how can they hold intersubjective beliefs? However, one of the most important things to realize about computers is that when a lot of computers communicate with one another, they can create inter-computer realities, analogous to the intersubjective realities produced by networks of humans. These inter-computer realities may eventually become as powerful—and as dangerous—as human-made intersubjective myths.

This is a very complicated argument, but it is another of the central arguments of the book, so let’s go over it carefully. First, let’s try to understand what inter-computer realities are. As an initial example, consider a one-player computer game. In such a game, you can wander inside a virtual landscape that exists as information within one computer. If you see a rock, that rock is not made of atoms. It is made of bits inside a single computer. When several computers are linked to one another, they can create inter-computer realities. Several players using different computers can wander together inside a common virtual landscape. If they see a rock, that rock is made of bits in several computers.[45]

Just as intersubjective realities like money and gods can influence the physical reality outside people’s minds, so inter-computer realities can influence reality outside the computers. In 2016 the game Pokémon Go took the world by storm and was downloaded hundreds of millions of times by the end of the year.[46] Pokémon Go is an augmented reality mobile game. Players can use their smartphones to locate, fight, and capture virtual creatures called Pokémon, which seem to exist in the physical world. I once went with my nephew Matan on such a Pokémon hunt. Walking around his neighborhood, I saw only houses, trees, rocks, cars, people, cats, dogs, and pigeons. I didn’t see any Pokémon, because I didn’t have a smartphone. But Matan, looking around through his smartphone lens, could “see” Pokémon standing on a rock or hiding behind a tree.

Though I couldn’t see the creatures, they were obviously not confined to Matan’s smartphone, because other people could “see” them too. For example, we encountered two other kids who were hunting the same Pokémon. If Matan managed to capture a Pokémon, the other kids could immediately observe what happened. The Pokémon were inter-computer entities. They existed as bits in a computer network rather than as atoms in the physical world, but they could nevertheless interact with the physical world and influence it, as it were, in various ways.

Now let’s examine a more consequential example of inter-computer realities. Consider the rank that a website gets in a Google search. When we google for news, flight tickets, or restaurant recommendations, one website appears at the top of the first Google page, whereas another is relegated to the middle of the fiftieth page. What exactly is this Google rank, and how is it determined? The Google algorithm determines the website’s Google rank by assigning points to various parameters, such as how many people visit the website and how many other websites link to it. The rank itself is an inter-computer reality, existing in a network connecting billions of computers—the internet. Like Pokémon, this inter-computer reality spills over into the physical world. For a news outlet, a travel agency, or a restaurant it matters a great deal whether its website appears at the top of the first Google page or in the middle of the fiftieth page.[47]

Since the Google rank is so important, people use all kinds of tricks to manipulate the Google algorithm to give their website a higher rank. For example, they may use bots to generate more traffic to the website.[48] This is also a widespread phenomenon in social media, where coordinated bot armies are constantly manipulating the algorithms of YouTube, Facebook, or X (formerly Twitter). If a post goes viral, is it because humans are really interested in it, or because thousands of bots managed to fool the algorithm?[49]

Inter-computer realities like Pokémon and Google ranks are analogous to intersubjective realities like the sanctity that humans ascribe to temples and cities. I lived much of my life in one of the holiest places on earth—the city of Jerusalem. Objectively, it is an ordinary place. As you walk around Jerusalem, you see houses, trees, rocks, cars, people, cats, dogs, and pigeons, as in any other city. But many people nevertheless imagine it to be an extraordinary place, full of gods, angels, and holy stones. They believe in this so strongly that they sometimes fight over possession of the city or of specific holy buildings and sacred stones, most notably the Holy Rock, located under the Dome of the Rock on Temple Mount. The Palestinian philosopher Sari Nusseibeh observed that “Jews and Muslims, acting on religious beliefs and backed up by nuclear capabilities, are poised to engage in history’s worst-ever massacre of human beings, over a rock.”[50] They don’t fight over the atoms that compose the rock; they fight over its “sanctity,” a bit like kids fighting over a Pokémon. The sanctity of the Holy Rock, and of Jerusalem generally, is an intersubjective phenomenon that exists in the communication network connecting many human minds. For thousands of years wars were fought over intersubjective entities like holy rocks. In the twenty-first century, we might see wars fought over inter-computer entities.

If this sounds like science fiction, consider potential developments in the financial system. As computers become more intelligent and more creative, they are likely to create new inter-computer financial devices. Gold coins and dollars are intersubjective entities. Cryptocurrencies like bitcoin are midway between intersubjective and inter-computer. The idea behind them was invented by humans, and their value still depends on human beliefs, but they cannot exist outside the computer network. In addition, they are increasingly traded by algorithms so that their value depends on the calculations of algorithms and not just on human beliefs.

What if in ten or fifty years computers create a new kind of cryptocurrency or some other financial device that becomes a vital tool for trading and investing—and a potential source for political crises and conflicts? Recall that the 2007–8 global financial crisis was instigated by collateralized debt obligations. These financial devices were invented by a handful of mathematicians and investment whiz kids and were almost unintelligible for most humans, including regulators. This led to an oversight failure and to a global catastrophe.[51] Computers may well create financial devices that will be orders of magnitude more complex than CDOs and that will be intelligible only to other computers. The result could be a financial and political crisis even worse than that of 2007–8.

Throughout history, economics and politics required that we understand the intersubjective realities invented by people—like religions, nations, and currencies. Someone who wanted to understand American politics had to take into account intersubjective realities like Christianity and CDOs. Increasingly, however, understanding American politics will necessitate understanding inter-computer realities ranging from AI-generated cults and currencies to AI-run political parties and even fully incorporated AIs. The U.S. legal system already recognizes corporations as legal persons that possess rights such as freedom of speech. In Citizens United v. Federal Election Commission (2010) the U.S. Supreme Court decided that this even protected the right of corporations to make political donations.[52] What would stop AIs from being incorporated and recognized as legal persons with freedom of speech, then lobbying and making political donations to protect and expand AI rights?

For tens of thousands of years, humans dominated planet Earth because we were the only ones capable of creating and sustaining intersubjective entities like corporations, currencies, gods, and nations, and using such entities to organize large-scale cooperation. Now computers may acquire comparable abilities.

This isn’t necessarily bad news. If computers lacked connectivity and creativity, they would not be very useful. We increasingly rely on computers to manage our money, drive our vehicles, reduce pollution, and discover new medicines, precisely because computers can directly communicate with one another, spot patterns where we can’t, and construct models that might never occur to us. The problem we face is not how to deprive computers of all creative agency, but rather how to steer their creativity in the right direction. It is the same problem we have always had with human creativity. The intersubjective entities invented by humans were the basis for all the achievements of human civilization, but they occasionally led to crusades, jihads, and witch hunts. The inter-computer entities will probably be the basis for future civilizations, but the fact that computers collect empirical data and use mathematics to analyze it doesn’t mean they cannot launch their own witch hunts.

The New Witches

In early modern Europe, an elaborate information network analyzed a huge amount of data about crimes, illnesses, and disasters and reached the conclusion that it was all the fault of witches. The more data the witch-hunters gathered, the more convinced they became that the world was full of demons and sorcery and that there was a global satanic conspiracy to destroy humanity. The information network then went on to identify the witches and imprison or kill them. We now know that witches were a bogus intersubjective category, invented by the information network itself and then imposed on people who had never actually met Satan and couldn’t summon hailstorms.

In the Soviet Union, an even more elaborate information network invented the kulaks—another mythic category that was imposed on millions. The mountains of information collected by Soviet bureaucracy about the kulaks weren’t an objective truth, but they created a new intersubjective truth. Knowing that someone was a kulak became one of the most important things to know about a Soviet person, even though the category was fictitious.

On an even larger scale, from the sixteenth to the twentieth century, numerous colonial bureaucracies in the Americas, from Brazil through Mexico and the Caribbean to the United States, created a racist mythology and came up with all kinds of intersubjective racial categories. Humans were divided into Europeans, Africans, and Native Americans, and since interracial sexual relations were common, additional categories were invented. In many Spanish colonies the laws differentiated between mestizos, people with mixed Spanish and Native American ancestry; mulatos, people with mixed Spanish and African ancestry; zambos, people with mixed African and Native American ancestry; and pardos, people with mixed Spanish, African, and Native American ancestry. All these seemingly empirical categories determined whether people could be enslaved, enjoy political rights, bear arms, hold public office, be admitted to school, practice certain professions, live in particular neighborhoods, and be allowed to have sex with and get married to each other. Allegedly, by placing a person in a particular racial drawer, one could define their personality, intellectual abilities, and ethical inclinations.[53]

By the nineteenth century, racism pretended to be an exact science: it claimed to differentiate between people on the basis of objective biological facts, and to rely on scientific methods such as measuring skulls and recording crime statistics. But the cloud of numbers and categories was just a smoke screen for absurd intersubjective myths. The fact that somebody had a Native American grandmother or an African father didn’t, of course, reveal anything about their intelligence, kindness, or honesty. These bogus categories didn’t discover or describe any truth about humans; they imposed an oppressive, mythological order on them.

As computers replace humans in more and more bureaucracies, from tax collection and health care to security and justice, they too may create a mythology and impose it on us with unprecedented efficiency. In a world ruled by paper documents, bureaucrats had difficulty policing racial borderlines or tracking everyone’s exact ancestry. People could get false documents. A zambo could move to another town and pretend to be a pardo. A Black person could sometimes pass as white. Similarly in the Soviet Union, kulak children occasionally managed to falsify their papers to get a good job or a place in college. In Nazi Europe, Jews could sometimes adopt an Aryan identity. But it would be much harder to game the system in a world ruled by computers that can read irises and DNA rather than paper documents. Computers could be frighteningly efficient in imposing false labels on people and making sure that the labels stick.

For example, social credit systems could create a new underclass of “low-credit people.” Such a system may claim to merely “discover” the truth through an empirical and mathematical process of aggregating points to form an overall score. But how exactly would it define pro-social and antisocial behaviors? What happens if such a system deducts points for criticizing government policies, for reading foreign literature, for practicing a minority religion, for having no religion, or for socializing with other low-credit people? As a thought experiment, consider what might happen when the new technology of the social credit system meets traditional religions.

Religions like Judaism, Christianity, and Islam have always imagined that somewhere above the clouds there is an all-seeing eye that gives or deducts points for everything we do and that our eternal fate depends on the score we accumulate. Of course, nobody could be certain of their score. You could know for sure only after you died. In practical terms, this meant that sinfulness and sainthood were intersubjective phenomena whose very definition depended on public opinion. What might happen if the Iranian regime, for example, decides to use its computer-based surveillance system not only to enforce its strict hijab laws, but to turn sinfulness and sainthood into precise inter-computer phenomena? You didn’t wear a hijab on the street—that’s −10 points. You ate on Ramadan before sunset—another 20 points deducted. You went to Friday prayer at the mosque, +5 points. You made the pilgrimage to Mecca, +500 points. The system might then aggregate all the points and divide people into “sinners” (under 0 points), “believers” (0 to 1,000 points), and “saints” (above 1,000 points). Whether someone is a sinner or a saint will depend on algorithmic calculations, not human belief. Would such a system discover the truth about people or impose order on people?

Analogous problems may afflict all social credit systems and total surveillance regimes. Whenever they claim to use all-encompassing databases and ultraprecise mathematics to discover sinners, terrorists, criminals, and antisocial or untrustworthy people, they might actually be imposing baseless religious and ideological prejudices with unprecedented efficiency.

Computer Bias

Some people may hope to overcome the problem of religious and ideological biases by giving even more power to the computers. The argument for doing so might go something like this: racism, misogyny, homophobia, antisemitism, and all other biases originate not in computers but in the psychological conditions and mythological beliefs of human beings. Computers are mathematical beings that don’t have a psychology or a mythology. So if we could take the humans completely out of the equation, the algorithms could finally decide things on the basis of pure math, free from all psychological distortions or mythological prejudices.

Unfortunately, numerous studies have revealed that computers often have deep-seated biases of their own. While they are not biological entities, and while they lack consciousness, they do have something akin to a digital psyche and even a kind of inter-computer mythology. They may well be racist, misogynist, homophobic, or antisemitic.[54] For example, on March 23, 2016, Microsoft released the AI chatbot Tay, giving it free access to Twitter. Within hours, Tay began posting misogynist and antisemitic tweets, such as “I fucking hate feminists and they should all die and burn in hell” and “Hitler was right I hate the Jews.” The vitriol increased until horrified Microsoft engineers shut Tay down—a mere sixteen hours after its release.[55]

More subtle but widespread racism was discovered in 2017 by the MIT professor Joy Buolamwini in commercial face-classification algorithms. She showed that these algorithms were very accurate in identifying white males, but extremely inaccurate in identifying Black females. For example, the IBM algorithm erred only 0.3 percent of the time in identifying the gender of light-skinned males, but 34.7 percent of the time when trying to identify the gender of dark-skinned females. As a qualitative test, Buolamwini asked the algorithms to categorize photos of the female African American activist Sojourner Truth, famous for her 1851 speech “Ain’t I a Woman?” The algorithms identified Truth as a man.[56]

When Buolamwini—who is a Ghanaian American woman—tested another facial-analysis algorithm to identify herself, the algorithm couldn’t “see” her dark-skinned face at all. In this context, “seeing” means the ability to acknowledge the presence of a human face, a feature used by phone cameras, for example, to decide where to focus. The algorithm easily saw light-skinned faces, but not Buolamwini’s. Only when Buolamwini put on a white mask did the algorithm recognize that it was observing a human face.[57]

What’s going on here? One answer might be that racist and misogynist engineers have coded these algorithms to discriminate against Black women. While we cannot rule out the possibility that such things happen, it was not the answer in the case of the face-classification algorithms or of Microsoft’s Tay. In fact, these algorithms picked up the racist and misogynist bias all by themselves from the data they were trained on.

To understand how this could happen, we need to explain something about the history of algorithms. Originally, algorithms could not learn much by themselves. For example, in the 1980s and 1990s chess-playing algorithms were taught almost everything they knew by their human programmers. The humans coded into the algorithm not only the basic rules of chess but also how to evaluate different positions and moves on the board. For example, humans coded a rule that sacrificing a queen in exchange for a pawn is usually a bad idea. These early algorithms managed to defeat human chess masters only because the algorithms could calculate many more moves and evaluate many more positions than a human could. But the algorithms’ abilities remained limited. Since they relied on humans to tell them all the secrets of the game, if the human coders didn’t know something, the algorithms they produced were also unlikely to know it.[58]

As the field of machine learning developed, algorithms gained more independence. The fundamental principle of machine learning is that algorithms can teach themselves new things by interacting with the world, just as humans do, thereby producing a fully fledged artificial intelligence. The terminology is not always consistent, but generally speaking, for something to be acknowledged as an AI, it needs the capacity to learn new things by itself, rather than just follow the instructions of its original human creators. Present-day chess-playing AI is taught nothing except the basic rules of the game. It learns everything else by itself, either by analyzing databases of prior games or by playing new games and learning from experience.[59] AI is not a dumb automaton that repeats the same movements again and again irrespective of the results. Rather, it is equipped with strong self-correcting mechanisms, which allow it to learn from its own mistakes.

This means that AI begins its life as a “baby algorithm” that has a lot of potential and computing power but doesn’t actually know much. The AI’s human parents give it only the capacity to learn and access to a world of data. They then let the baby algorithm explore the world. Like organic newborns, baby algorithms learn by spotting patterns in the data to which they have access. If I touch fire, it hurts. If I cry, Mum comes. If I sacrifice a queen for a pawn, I probably lose the game. By finding patterns in the data, the baby algorithm learns more, including many things that its human parents don’t know.[60]

Yet databases come with biases. The face-classification algorithms studied by Joy Buolamwini were trained on data sets of tagged online photos, such as the Labeled Faces in the Wild database. The photos in that database were taken mainly from online news articles. Since white males dominate the news, 78 percent of the photos in the database were of males, and 84 percent were of white people. George W. Bush appeared 530 times—more than twice as many times as all Black women combined.[61] Another database prepared by a U.S. government agency was more than 75 percent male, was almost 80 percent light-skinned, and had just 4.4 percent dark-skinned females.[62] No wonder the algorithms trained on such data sets were excellent at identifying white men but lousy at identifying Black women. Something similar happened to the chatbot Tay. The Microsoft engineers didn’t build into it any intentional prejudices. But a few hours of exposure to the toxic information swirling in Twitter turned the AI into a raging racist.[63]

It gets worse. In order to learn, baby algorithms need one more thing besides access to data. They also need a goal. A human baby learns how to walk because she wants to get somewhere. A lion cub learns to hunt because he wants to eat. Algorithms too must be given a goal in order to learn. In chess, it is easy to define the goal: take the opponent’s king. The AI learns that sacrificing a queen for a pawn is a “mistake,” because it usually prevents the algorithm from reaching its goal. In face recognition, the goal is also easy: identify the person’s gender, age, and name as listed in the original database. If the algorithm guessed that George W. Bush is female, but the database says male, the goal has not been reached, and the algorithm learns from its mistake.

But if you want to train an algorithm for hiring personnel, for example, how would you define the goal? How would the algorithm know that it made a mistake and hired the “wrong” person? We might tell the baby algorithm that its goal is to hire people who stay in the company for at least a year. Employers obviously don’t want to invest a lot of time and money in training a worker who quits or gets fired after a few months. Having defined the goal in such a way, it is time to go over the data. In chess, the algorithm can produce any amount of new data just by playing against itself. But in the job market, that’s impossible. Nobody can create an entire imaginary world where the baby algorithm can hire and fire imaginary people and learn from that experience. The baby algorithm can train only on an existing database about real-life people. Just as lion cubs learn what a zebra is mainly by spotting patterns in the real-life savanna, so baby algorithms learn what a good employee is by spotting patterns in real-life companies.

Unfortunately, if real-life companies already suffer from some ingrained bias, the baby algorithm is likely to learn this bias, and even amplify it. For instance, an algorithm looking for patterns of “good employees” in real-life data may conclude that hiring the boss’s nephews is always a good idea, no matter what other qualification they have. For the data clearly indicates that “boss’s nephews” are usually hired when applying for a job, and are rarely fired. The baby algorithm would spot this pattern and become nepotistic. If it is put in charge of an HR department, it will start giving preference to the boss’s nephews.

Similarly, if companies in a misogynist society prefer to hire men rather than women, an algorithm trained on real-life data is likely to pick up that bias, too. This indeed happened when Amazon tried in 2014–18 to develop an algorithm for screening job applications. Learning from previous successful and unsuccessful applications, the algorithm began to systematically downgrade applications simply for containing the word “women” or coming from graduates of women’s colleges. Since existing data showed that in the past such applications had less chance of succeeding, the algorithm developed a bias against them. The algorithm thought it had simply discovered an objective truth about the world: applicants who graduate from women’s colleges are less qualified. In fact, it just internalized and imposed a misogynist bias. Amazon tried and failed to fix the problem and ultimately scrapped the project.[64]

The database on which an AI is trained is a bit like a human’s childhood. Childhood experiences, traumas, and fairy tales stay with us throughout our lives. AIs too have childhood experiences. Algorithms might even infect one another with their biases, just as humans do. Consider a future society in which algorithms are ubiquitous and used not just to screen job applicants but also to recommend to people what to study in college. Suppose that due to a preexisting misogynist bias, 80 percent of jobs in engineering are given to men. In this society, an algorithm that hires new engineers is not only likely to copy this preexisting bias but also to infect the college recommendation algorithms with the same bias. A young woman entering college may be discouraged from studying engineering, because the existing data indicates she is less likely to eventually get a job. What began as a human intersubjective myth that “women aren’t good at engineering” might morph into an inter-computer myth. If we don’t get rid of the bias at the very beginning, computers may well perpetuate and magnify it.[65]

But getting rid of algorithmic bias might be as difficult as ridding ourselves of our human biases. Once an algorithm has been trained, it takes a lot of time and effort to “untrain” it. We might decide to just dump the biased algorithm and train an altogether new algorithm on a new set of less biased data. But where on earth can we find a set of totally unbiased data?[66]

Many of the algorithmic biases surveyed in this and previous chapters share the same fundamental problem: the computer thinks it has discovered some truth about humans, when in fact it has imposed order on them. A social media algorithm thinks it has discovered that humans like outrage, when in fact it is the algorithm itself that conditioned humans to produce and consume more outrage. Such biases result, on the one hand, from the computers discounting the full spectrum of human abilities and, on the other hand, from the computers discounting their own power to influence humans. Even if computers observe that almost all humans behave in a particular way, it doesn’t mean humans are bound to behave like that. Maybe it just means that the computers themselves are rewarding such behavior while punishing and blocking alternatives. For computers to have a more accurate and responsible view of the world, they need to take into account their own power and impact. And for that to happen, the humans who currently engineer computers need to accept that they are not manufacturing new tools. They are unleashing new kinds of independent agents, and potentially even new kinds of gods.

The New Gods?

In God, Human, Animal, Machine, the philosopher Meghan O’Gieblyn demonstrates how the way we understand computers is heavily influenced by traditional mythologies. In particular, she stresses the similarities between the omniscient and unfathomable god of Judeo-Christian theology and present-day AIs whose decisions seem to us both infallible and inscrutable.[67] This may present humans with a dangerous temptation.

We saw in chapter 4 that already thousands of years ago humans dreamed about finding an infallible information technology to shield us from human corruption and error. Holy books were an audacious attempt to craft such a technology, but they backfired. Since the book couldn’t interpret itself, a human institution had to be built to interpret the sacred words and adapt them to changing circumstances. Different humans interpreted the holy book in different ways, thereby reopening the door to corruption and error. But in contrast to the holy book, computers can adapt themselves to changing circumstances and also interpret their decisions and ideas for us. Some humans may consequently conclude that the quest for an infallible technology has finally succeeded and that we should treat computers as a holy book that can talk to us and interpret itself, without any need of an intervening human institution.

This would be an extremely hazardous gamble. When certain interpretations of scriptures have occasionally caused disasters such as witch hunts and wars of religion, humans have always been able to change their beliefs. When the human imagination summoned a belligerent and hate-filled god, we retained the power to rid ourselves of it and imagine a more tolerant deity. But algorithms are independent agents, and they are already taking power away from us. If they cause disaster, simply changing our beliefs about them will not necessarily stop them. And it is highly likely that if computers are entrusted with power, they will indeed cause disasters, for they are fallible.

When we say that computers are fallible, it means far more than that they make the occasional factual mistake or wrong decision. More important, like the human network before it, the computer network might fail to find the right balance between truth and order. By creating and imposing on us powerful inter-computer myths, the computer network could cause historical calamities that would dwarf the early modern European witch hunts or Stalin’s collectivization.

Consider a network of billions of interacting computers that accumulates a stupendous amount of information about the world. As they pursue various goals, the networked computers develop a common model of the world that helps them communicate and cooperate. This shared model will probably be full of errors, fictions, and lacunae, and be a mythology rather than a truthful account of the universe. One example is a social credit system that divides humans into bogus categories, determined not by a human rationale like racism but by some unfathomable computer logic. We may come into contact with this mythology every day of our lives, since it would guide the numerous decisions computers make about us. But because this mythical model would be created by inorganic entities in order to coordinate actions with other inorganic entities, it might owe nothing to the old biological dramas and might be totally alien to us.[68]

As noted in chapter 2, large-scale societies cannot exist without some mythology, but that doesn’t mean all mythologies are equal. To guard against errors and excesses, some mythologies have acknowledged their own fallible origin and included a self-correcting mechanism allowing humans to question and change the mythology. That’s the model of the U.S. Constitution, for example. But how can humans probe and correct a computer mythology we don’t understand?

One potential guardrail is to train computers to be aware of their own fallibility. As Socrates taught, being able to say “I don’t know” is an essential step on the path to wisdom. And this is true of computer wisdom no less than of human wisdom. The first lesson that every algorithm should learn is that it might make mistakes. Baby algorithms should learn to doubt themselves, to signal uncertainty, and to obey the precautionary principle. This is not impossible. Engineers are already making considerable headway in encouraging AI to express self-doubt, ask for feedback, and admit its mistakes.[69]

Yet no matter how aware algorithms are of their own fallibility, we should keep humans in the loop, too. Given the pace at which AI is developing, it is simply impossible to anticipate how it will evolve and to place guardrails against all future potential hazards. This is a key difference between AI and previous existential threats like nuclear technology. The latter presented humankind with a few easily anticipated doomsday scenarios, most obviously an all-out nuclear war. This meant that it was feasible to conceptualize the danger in advance, and explore ways to mitigate it. In contrast, AI presents us with countless doomsday scenarios. Some are relatively easy to grasp, such as terrorists using AI to produce biological weapons of mass destruction. Some are more difficult to grasp, such as AI creating new psychological weapons of mass destruction. And some may be utterly beyond the human imagination, because they emanate from the calculations of an alien intelligence. To guard against a plethora of unforeseeable problems, our best bet is to create living institutions that can identify and respond to the threats as they arise.[70]

Ancient Jews and Christians were disappointed to discover that the Bible couldn’t interpret itself, and reluctantly maintained human institutions to do what the technology couldn’t. In the twenty-first century, we are in an almost opposite situation. We devised a technology that can interpret itself, but precisely for this reason we had better create human institutions to monitor it carefully.

To conclude, the new computer network will not necessarily be either bad or good. All we know for sure is that it will be alien and it will be fallible. We therefore need to build institutions that will be able to check not just familiar human weaknesses like greed and hatred but also radically alien errors. There is no technological solution to this problem. It is, rather, a political challenge. Do we have the political will to deal with it? Modern humanity has created two main types of political systems: large-scale democracy and large-scale totalitarianism. Part 3 examines how each of these systems may deal with a radically alien and fallible computer network.