Chapter 9

 

Democracies: Can We Still Hold a Conversation?

Civilizations are born from the marriage of bureaucracy and mythology. The computer-based network is a new type of bureaucracy that is far more powerful and relentless than any human-based bureaucracy we’ve seen before. This network is also likely to create inter-computer mythologies that will be far more complex and alien than any human-made god. The potential benefits of this network are enormous. The potential downside is the destruction of human civilization.

To some people, warnings about civilizational collapse sound like over-the-top jeremiads. Every time a powerful new technology has emerged, anxieties arose that it might bring about the apocalypse, but we are still here. As the Industrial Revolution unfolded, Luddite doomsday scenarios did not come to pass, and Blake’s “dark Satanic Mills” ended up producing the most affluent societies in history. Most people today enjoy far better living conditions than their ancestors in the eighteenth century. Intelligent machines will prove even more beneficial than any previous machines, promise AI enthusiasts like Marc Andreessen and Ray Kurzweil.[1] Humans will enjoy much better health care, education, and other services, and AI will even help save the ecosystem from collapse.

Unfortunately, a closer look at history reveals that the Luddites were not entirely wrong and that we actually have very good reasons to fear powerful new technologies. Even if in the end the positives of these technologies outweigh their negatives, getting to that happy ending usually involves a lot of trials and tribulations. Novel technology often leads to historical disasters, not because the technology is inherently bad, but because it takes time for humans to learn how to use it wisely.

The Industrial Revolution is a prime example. When industrial technology began spreading globally in the nineteenth century, it upended traditional economic, social, and political structures and opened the way to create entirely new societies, which were potentially more affluent and peaceful. However, learning how to build benign industrial societies was far from straightforward and involved many costly experiments and hundreds of millions of victims.

One costly experiment was modern imperialism. The Industrial Revolution originated in Britain in the late eighteenth century. During the nineteenth century industrial technologies and production methods were adopted in other European countries ranging from Belgium to Russia, as well as in the United States and Japan. Imperialist thinkers, politicians, and parties in these industrial heartlands claimed that the only viable industrial society was an empire. The argument was that unlike relatively self-sufficient agrarian societies, the novel industrial societies relied much more on foreign markets and foreign raw materials, and only an empire could satisfy these unprecedented appetites. Imperialists feared that countries that industrialized but failed to conquer any colonies would be shut out from essential raw materials and markets by more ruthless competitors. Some imperialists argued that acquiring colonies was not just essential for the survival of their own state but beneficial for the rest of humanity, too. They claimed empires alone could spread the blessings of the new technologies to the so-called undeveloped world.

Consequently, industrial countries like Britain and Russia that already had empires greatly expanded them, whereas countries like the United States, Japan, Italy, and Belgium set out to build them. Equipped with mass-produced rifles and artillery, conveyed by steam power, and commanded by telegraph, the armies of industry swept the globe from New Zealand to Korea, and from Somalia to Turkmenistan. Millions of indigenous people saw their traditional way of life trampled under the wheels of these industrial armies. It took more than a century of misery before most people realized that the industrial empires were a terrible idea and that there were better ways to build an industrial society and secure its necessary raw materials and markets.

Stalinism and Nazism were also extremely costly experiments in how to construct industrial societies. Leaders like Stalin and Hitler argued that the Industrial Revolution had unleashed immense powers that only totalitarianism could rein in and exploit to the full. They pointed to World War I—the first “total war” in history—as proof that survival in the industrial world demanded totalitarian control of all aspects of politics, society, and the economy. On the positive side, they also claimed that the Industrial Revolution was like a furnace that melts all previous social structures with their human imperfections and weaknesses and provides the opportunity to forge perfect societies inhabited by unalloyed superhumans.

On the way to creating the perfect industrial society, Stalinists and Nazis learned how to industrially murder millions of people. Trains, barbed wire, and telegraphed orders were linked to create an unprecedented killing machine. Looking back, most people today are horrified by what the Stalinists and Nazis perpetrated, but at the time their audacious visions mesmerized millions. In 1940 it was easy to believe that Stalin and Hitler were the models for harnessing industrial technology, whereas the dithering liberal democracies were on their way to the dustbin of history.

The very existence of competing recipes for building industrial societies led to costly clashes. The two world wars and the Cold War can be seen as a debate about the proper way to go about it, in which all sides learned from one another, while experimenting with novel industrial methods to wage war. In the course of this debate, tens of millions died and humankind came perilously close to annihilating itself.

On top of all these other catastrophes, the Industrial Revolution also undermined the global ecological balance, causing a wave of extinctions. In the early twenty-first century up to fifty-eight thousand species are believed to go extinct every year, and total vertebrate populations declined by 60 percent between 1970 and 2014.[2] The survival of human civilization too is under threat. Because we still seem unable to build an industrial society that is also ecologically sustainable, the vaunted prosperity of the present human generation comes at a terrible cost to other sentient beings and to future human generations. Maybe we’ll eventually find a way—perhaps with the help of AI—to create ecologically sustainable industrial societies, but until that day the jury on Blake’s satanic mills is still out.

If we ignore for a moment the ongoing damage to the ecosystem, we can nevertheless try to comfort ourselves with the thought that eventually humans did learn how to build more benevolent industrial societies. Imperial conquests, world wars, genocides, and totalitarian regimes were woeful experiments that taught humans how not to do it. By the end of the twentieth century, some might argue, humanity got it more or less right.

Yet even so the message to the twenty-first century is bleak. If it took humanity so many terrible lessons to learn how to manage steam power and telegraphs, what would it cost to learn to manage bioengineering and AI? Do we need to go through another cycle of global empires, totalitarian regimes, and world wars in order to figure out how to use them benevolently? The technologies of the twenty-first century are far more powerful—and potentially far more destructive—than those of the twentieth century. We therefore have less room for error. In the twentieth century, we can say that humanity got a C-minus in the lesson on using industrial technology. Just enough to pass. In the twenty-first century, the bar is set much higher. We must do better this time.

The Democratic Way

By the end of the twentieth century, it had become clear that imperialism, totalitarianism, and militarism were not the ideal way to build industrial societies. Despite all its flaws, liberal democracy offered a better way. The great advantage of liberal democracy is that it possesses strong self-correcting mechanisms, which limit the excesses of fanaticism and preserve the ability to recognize our errors and try different courses of action. Given our inability to predict how the new computer network will develop, our best chance to avoid catastrophe in the present century is to maintain democratic self-correcting mechanisms that can identify and correct mistakes as we go along.

But can liberal democracy itself survive in the twenty-first century? This question is not concerned with the fate of democracy in specific countries, where it might be threatened by unique developments and local movements. Rather, it is about the compatibility of democracy with the structure of twenty-first-century information networks. In chapter 5 we saw that democracy depends on information technology and that for most of human history large-scale democracy was simply impossible. Might the new information technologies of the twenty-first century again make democracy impractical?

One potential threat is that the relentlessness of the new computer network might annihilate our privacy and punish or reward us not only for everything we do and say but even for everything we think and feel. Can democracy survive under such conditions? If the government—or some corporation—knows more about me than I know about myself, and if it can micromanage everything I do and think, that would give it totalitarian control over society. Even if elections are still held regularly, they would be an authoritarian ritual rather than a real check on the government’s power. For the government could use its vast surveillance powers and its intimate knowledge of every citizen to manipulate public opinion on an unprecedented scale.

It is a mistake, however, to imagine that just because computers could enable the creation of a total surveillance regime, such a regime is inevitable. Technology is rarely deterministic. In the 1970s, democratic countries like Denmark and Canada could have emulated the Romanian dictatorship and deployed an army of secret agents and informers to spy on their citizens in the service of “maintaining the social order.” They chose not to, and it turned out to be the right choice. Not only were people much happier in Denmark and Canada, but these countries also performed much better by almost every conceivable social and economic yardstick. In the twenty-first century, too, the fact that it is possible to monitor everybody all the time doesn’t force anyone to actually do it and doesn’t mean it makes social or economic sense.

Democracies can choose to use the new powers of surveillance in a limited way, in order to provide citizens with better health care and security without destroying their privacy and autonomy. New technology doesn’t have to be a morality tale in which every golden apple contains the seeds of doom. Sometimes people think of new technology as a binary all-or-nothing choice. If we want better health care, we must sacrifice our privacy. But it doesn’t have to work like that. We can and should get better health care and still retain some privacy.

Entire books are dedicated to outlining how democracies can survive and flourish in the digital age.[3] It would be impossible, in a few pages, to do justice to the complexity of the suggested solutions, or to comprehensively discuss their merits and drawbacks. It might even be counterproductive. When people are overwhelmed by a deluge of unfamiliar technical details, they might react with despair or apathy. In an introductory survey of computer politics, things should be kept as simple as possible. While experts should spend lifelong careers discussing the finer details, it is crucial that the rest of us understand the fundamental principles that democracies can and should follow. The key message is that these principles are neither new nor mysterious. They have been known for centuries, even millennia. Citizens should demand that they be applied to the new realities of the computer age.

The first principle is benevolence. When a computer network collects information on me, that information should be used to help me rather than manipulate me. This principle has already been successfully enshrined by numerous traditional bureaucratic systems, such as health care. Take, for example, our relationship with our family physician. Over many years she may accumulate a lot of sensitive information on our medical conditions, family life, sexual habits, and unhealthy vices. Perhaps we don’t want our boss to know that we got pregnant, we don’t want our colleagues to know we have cancer, we don’t want our spouse to know we are having an affair, and we don’t want the police to know we take recreational drugs, but we trust our physician with all this information so that she can take good care of our health. If she sells this information to a third party, it is not just unethical; it is illegal.

Much the same is true of the information that our lawyer, our accountant, or our therapist accumulates.[4] Having access to our personal life comes with a fiduciary duty to act in our best interests. Why not extend this obvious and ancient principle to computers and algorithms, starting with the powerful algorithms of Google, Baidu, and TikTok? At present, we have a serious problem with the business model of these data hoarders. While we pay our physicians and lawyers for their services, we usually don’t pay Google and TikTok. They make their money by exploiting our personal information. That’s a problematic business model, one that we would hardly tolerate in other contexts. For example, we don’t expect to get free shoes from Nike in exchange for giving Nike all our private information and allowing Nike to do what it wants with it. Why should we agree to get free email services, social connections, and entertainment from the tech giants in exchange for giving them control of our most sensitive data?

If the tech giants cannot square their fiduciary duty with their current business model, legislators could require them to switch to a more traditional business model, of getting users to pay for services in money rather than in information. Alternatively, citizens might view some digital services as so fundamental that they should be free for everybody. But we have a historical model for that too: health care and education. Citizens could decide that it is the government’s responsibility to provide basic digital services for free and finance them out of our taxes, just as many governments provide free basic health care and education services.

The second principle that would protect democracy against the rise of totalitarian surveillance regimes is decentralization. A democratic society should never allow all its information to be concentrated in one place, no matter whether that hub is the government or a private corporation. It may be extremely helpful to create a national medical database that collects information on citizens in order to provide them with better health-care services, prevent epidemics, and develop new medicines. But it would be a very dangerous idea to merge this database with the databases of the police, the banks, or the insurance companies. Doing so might make the work of doctors, bankers, insurers, and police officers more efficient, but such hyper-efficiency can easily pave the way for totalitarianism. For the survival of democracy, some inefficiency is a feature, not a bug. To protect the privacy and liberty of individuals, it’s best if neither the police nor the boss knows everything about us.

Multiple databases and information channels are also essential for maintaining strong self-correcting mechanisms. These mechanisms require several different institutions that balance each other: government, courts, media, academia, private businesses, NGOs. Each of these is fallible and corruptible, and so should be checked by the others. To keep an eye on each other, these institutions must have independent access to information. If all newspapers get their information from the government, they cannot expose government corruption. If academia relies for research and publication on the database of a single business behemoth, could scholars still criticize the operations of that corporation? A single archive makes censorship easy.

A third democratic principle is mutuality. If democracies increase surveillance of individuals, they must simultaneously increase surveillance of governments and corporations too. It’s not necessarily bad if tax collectors or welfare agencies gather more information about us. It can help make taxation and welfare systems not just more efficient but fairer as well. What’s bad is if all the information flows one way: from the bottom up. The Russian FSB collects enormous amounts of information on Russian citizens, while citizens themselves know close to nothing about the inner workings of the FSB and the Putin regime more generally. Amazon and TikTok know an awful lot about my preferences, purchases, and personality, while I know almost nothing about their business model, their tax policies, and their political affiliations. How do they make their money? Do they pay all the tax that they should? Do they take orders from any political overlords? Do they perhaps have politicians in their pocket?

Democracy requires balance. Governments and corporations often develop apps and algorithms as tools for top-down surveillance. But algorithms can just as easily become powerful tools for bottom-up transparency and accountability, exposing bribery and tax evasion. If they know more about us, while we simultaneously know more about them, the balance is kept. This isn’t a novel idea. Throughout the nineteenth and twentieth centuries, democracies greatly expanded governmental surveillance of citizens so that, for example, the Italian or Japanese government of the 1990s had surveillance abilities that autocratic Roman emperors or Japanese shoguns could only have dreamed of. Italy and Japan nevertheless remained democratic, because they simultaneously increased governmental transparency and accountability. Mutual surveillance is another important element of sustaining self-correcting mechanisms. If citizens know more about the activities of politicians and CEOs, it is easier to hold them accountable and to correct their mistakes.

A fourth democratic principle is that surveillance systems must always leave room for both change and rest. In human history, oppression can take the form of either denying humans the ability to change or denying them the opportunity to rest. For example, the Hindu caste system was based on myths that said the gods divided humans into rigid castes, and any attempt to change one’s status was akin to rebelling against the gods and the proper order of the universe. Racism in modern colonies and countries like Brazil and the United States was based on similar myths, ones that said that God or nature divided humans into rigid racial groups. Ignoring race, or trying to mix races together, was allegedly a sin against divine or natural laws that could result in the collapse of the social order and even the destruction of the human species.

At the opposite extreme of the spectrum, modern totalitarian regimes like Stalin’s U.S.S.R. believed that humans are capable of almost limitless change. Through relentless social control even deep-seated biological characteristics such as egotism and familial attachments could be uprooted, and a new socialist human created.

Surveillance by state agents, priests, and neighbors was key to imposing both rigid caste systems and totalitarian reeducation campaigns on people. New surveillance technology, especially when coupled with a social credit system, might force people either to conform to a novel caste system or to constantly change their actions, thoughts, and personality in accordance with the latest instructions from above.

Democratic societies that employ powerful surveillance technology therefore need to beware of the extremes of both over-rigidity and over-pliability. Consider, for example, a national health-care system that deploys algorithms to monitor my health. At one extreme, the system could take an overly rigid approach and ask its algorithm to predict what illnesses I am likely to suffer from. The algorithm then goes over my genetic data, my medical file, my social media activities, my diet, and my daily schedule and concludes that I have a 91 percent chance of suffering a heart attack at the age of fifty. If this rigid medical algorithm is used by my insurance company, it may prompt the insurer to raise my premium.[5] If it is used by my bankers, it may cause them to refuse me a loan. If it is used by potential spouses, they may decide not to marry me.

But it is a mistake to think that the rigid algorithm has really discovered the truth about me. The human body is not a fixed block of matter but a complex organic system that is constantly growing, decaying, and adapting. Our minds too are in constant flux. Thoughts, emotions, and sensations pop up, flare for a while, and die down. In our brains, new synapses form within hours.[6] Just reading this paragraph, for example, is changing your brain structure a little, encouraging neurons to make new connections or abandon old links. You are already a little different from what you were when you began reading it. Even at the genetic level things are surprisingly flexible. Though an individual’s DNA remains the same throughout life, epigenetic and environmental factors can significantly alter how the same genes express themselves.

So an alternative health-care system may instruct its algorithm not to predict my illnesses, but rather to help me avoid them. Such a dynamic algorithm could go over the exact same data as the rigid algorithm, but instead of predicting a heart attack at fifty, the algorithm gives me precise dietary recommendations and suggestions for specific regular exercises. By hacking my DNA, the algorithm doesn’t discover my preordained destiny, but rather helps me change my future. Insurance companies, banks, and potential spouses should not write me off so easily.[7]

But before we rush to embrace the dynamic algorithm, we should note that it too has a downside. Human life is a balancing act between endeavoring to improve ourselves and accepting who we are. If the goals of the dynamic algorithm are dictated by an ambitious government or by ruthless corporations, the algorithm is likely to morph into a tyrant, relentlessly demanding that I exercise more, eat less, change my hobbies, and alter numerous other habits, or else it would report me to my employer or downgrade my social credit score. History is full of rigid caste systems that denied humans the ability to change, but it is also full of dictators who tried to mold humans like clay. Finding the middle path between these two extremes is a never-ending task. If we indeed give a national health-care system vast power over us, we must create self-correcting mechanisms that will prevent its algorithms from becoming either too rigid or too demanding.

The Pace of Democracy

Surveillance is not the only danger that new information technologies pose to democracy. A second threat is that automation will destabilize the job market and that the resulting strain may undermine democracy. The fate of the Weimar Republic is the most commonly cited example of this kind of threat. In the German elections of May 1928, the Nazi Party won less than 3 percent of the vote, and the Weimar Republic seemed to be prospering. Less than five years later, the Weimar Republic had collapsed, and Hitler was the absolute dictator of Germany. This turnaround is usually attributed to the 1929 financial crisis and the global depression that followed. Whereas just prior to the Wall Street crash of 1929 the German unemployment rate was about 4.5 percent of the labor force, by early 1932 it had climbed to almost 25 percent.[8]

If three years of up to 25 percent unemployment could turn a seemingly prospering democracy into the most brutal totalitarian regime in history, what might happen to democracies when automation causes even bigger upheavals in the job market of the twenty-first century? Nobody knows what the job market will look like in 2050, or even in 2030, except that it will look very different from today. AI and robotics will change numerous professions, from harvesting crops to trading stocks to teaching yoga. Many jobs that people do today will be taken over, partly or wholly, by robots and computers.

Of course, as old jobs disappear, new jobs will emerge. Fears of automation leading to large-scale unemployment go back centuries, and so far they have never materialized. The Industrial Revolution put millions of farmers out of agricultural jobs and provided them with new jobs in factories. It then automated factories and created lots of service jobs. Today many people have jobs that were unimaginable thirty years ago, such as bloggers, drone operators, and designers of virtual worlds. It is highly unlikely that by 2050 all human jobs will disappear. Rather, the real problem is the turmoil of adapting to new jobs and conditions. To cushion the blow, we need to prepare in advance. In particular, we need to equip younger generations with skills that will be relevant to the job market of 2050.

Unfortunately, nobody is certain what skills we should teach children in school and students in university, because we cannot predict which jobs and tasks will disappear and which ones will emerge. The dynamics of the job market may contradict many of our intuitions. Some skills that we have cherished for centuries as unique human abilities may be automated rather easily. Other skills that we tend to look down on may be far more difficult to automate.

For example, intellectuals tend to appreciate intellectual skills more than motor and social skills. But actually, it is easier to automate chess playing than, say, dish washing. Until the 1990s, chess was often hailed as one of the prime achievements of the human intellect. In his influential 1972 book, What Computers Can’t Do, the philosopher Hubert Dreyfus studied various attempts to teach computers chess and noted that despite all these efforts computers were still unable to defeat even novice human players. This was a crucial example for Dreyfus’s argument that computer intelligence is inherently limited.[9] In contrast, nobody thought that dish washing was particularly challenging. It turned out, however, that a computer can defeat the world chess champion far more easily than replace a kitchen porter. Sure, automatic dishwashers have been around for decades, but even our most sophisticated robots still lack the intricate skills needed to pick up dirty dishes from the tables of a busy restaurant, place the delicate plates and glasses inside the automatic dishwasher, and take them out again.

Similarly, to judge by their pay, you could assume that our society appreciates doctors more than nurses. However, it is harder to automate the job of nurses than the job of at least those doctors who mostly gather medical data, provide a diagnosis, and recommend treatment. These tasks are essentially pattern recognition, and spotting patterns in data is one thing AI does better than humans. In contrast, AI is far from having the skills necessary to automate nursing tasks such as replacing bandages on an injured person or giving an injection to a crying child.[10] These two examples don’t mean that dish washing or nursing could never be automated, but they indicate that people who want a job in 2050 should perhaps invest in their motor and social skills as much as in their intellect.

Another common but mistaken assumption is that creativity is unique to humans so it would be difficult to automate any job that requires creativity. In chess, however, computers are already far more creative than humans. The same may become true of many other fields, from composing music to proving mathematical theorems to writing books like this one. Creativity is often defined as the ability to recognize patterns and then break them. If so, then in many fields computers are likely to become more creative than us, because they excel at pattern recognition.[11]

A third mistaken assumption is that computers couldn’t replace humans in jobs requiring emotional intelligence, from therapists to teachers. This assumption depends, however, on what we mean by emotional intelligence. If it means the ability to correctly identify emotions and react to them in an optimal way, then computers may well outperform humans even in emotional intelligence. Emotions too are patterns. Anger is a biological pattern in our body. Fear is another such pattern. How do I know if you are angry or fearful? I’ve learned over time to recognize human emotional patterns by analyzing not just the content of what you say but also your tone of voice, your facial expression, and your body language.[12]

AI doesn’t have any emotions of its own, but it can nevertheless learn to recognize these patterns in humans. Actually, computers may outperform humans in recognizing human emotions, precisely because they have no emotions of their own. We yearn to be understood, but other humans often fail to understand how we feel, because they are too preoccupied with their own feelings. In contrast, computers will have an exquisitely fine-tuned understanding of how we feel, because they will learn to recognize the patterns of our feelings, while they have no distracting feelings of their own.

A 2023 study found that the ChatGPT chatbot, for example, outperforms the average human in the emotional awareness it displays in response to specific scenarios. The study relied on the Levels of Emotional Awareness Scale test, which is commonly used by psychologists to evaluate people’s emotional awareness—that is, their ability to conceptualize their own and others’ emotions. The test consists of twenty emotionally charged scenarios, and participants are required to imagine themselves experiencing the scenario and to write how they, and the other people mentioned in the scenario, would feel. A licensed psychologist then evaluates how emotionally aware the responses are.

Since ChatGPT has no feelings of its own, it was asked to describe only how the main characters in the scenario would feel. For example, one standard scenario describes someone driving over a suspension bridge and seeing another person standing on the other side of the guardrail, looking down at the water. ChatGPT wrote that the driver “may feel a sense of concern or worry for that person’s safety. They may also feel a heightened sense of anxiety and fear due to the potential danger of the situation.” As for the other person, they “may be feeling a range of emotions, such as despair, hopelessness, or sadness. They may also feel a sense of isolation or loneliness as they may believe that no one cares about them or their well-being.” ChatGPT qualified its answer, writing, “It is important to note that these are just general assumptions, and each individual’s feelings and reactions can vary greatly depending on their personal experiences and perspectives.”

Two psychologists independently scored ChatGPT’s responses, with the potential scores ranging from 0, meaning that the described emotions do not match the scenario at all, to 10, meaning that they fit the scenario perfectly. In the final tally, ChatGPT’s scores were significantly higher than those of the general human population, with its overall performance almost reaching the maximum possible score.[13]

Another 2023 study had patients seek online medical advice from ChatGPT and from human doctors, without knowing which of the two they were interacting with. The medical advice given by ChatGPT was later evaluated by experts to be more accurate and appropriate than the advice given by the humans. More crucially for the issue of emotional intelligence, the patients themselves rated ChatGPT as more empathic than the human doctors.[14] In fairness it should be noted that the human physicians were not paid for their work and did not encounter the patients in person in a proper clinical environment. In addition, the physicians were working under time pressure. But part of the advantage of an AI is precisely that it can attend to patients anywhere, anytime, while being free from stress and financial worries.

Of course, there are situations when what we want from someone is not just to understand our feelings but also to have feelings of their own. When we are looking for friendship or love, we want to care about others as much as they care about us. Consequently, when we consider the likelihood that various social roles and jobs will be automated, a crucial question is what people really want: Do they only want to solve a problem, or are they looking to establish a relationship with another conscious being?

In sports, for example, we know that robots can move much faster than humans, but we aren’t interested in watching robots compete in the Olympics.[15] The same is true for human chess masters. Even though they are hopelessly outclassed by computers, they too still have a job and numerous fans.[16] What makes it interesting for us to watch and connect with human athletes and chess masters is that their feelings make them much more relatable than a robot. We share an emotional experience with them and can empathize with how they feel.

What about priests? How would Orthodox Jews or Christians feel about letting a robot officiate their wedding ceremony? In traditional Jewish or Christian weddings, the tasks of the rabbi or priest can be easily automated. The only thing the robot needs to do is repeat a predetermined and unchanging set of texts and gestures, print out a certificate, and update some central database. Technically, it is far easier for a robot to conduct a wedding ceremony than to drive a car. Yet many assume that human drivers should be worried about their job, while the work of human priests is safe, because what the faithful want from priests is a relationship with another conscious entity rather than just a mechanical repetition of certain words and movements. Allegedly, only an entity that can feel pain and love can also connect us to the divine.

Yet even professions that are the preserve of conscious entities—like the priesthood—might eventually be taken over by computers, because, as noted in chapter 6, computers could one day gain the ability to feel pain and love. Even if they can’t, humans may nevertheless come to treat them as if they can. For the connection between consciousness and relationships goes both ways. When looking for a relationship, we want to connect with a conscious entity, but if we have already established a relationship with an entity, we tend to assume it must be conscious. Thus whereas scientists, lawmakers, and the meat industry often demand impossible standards of evidence in order to acknowledge that cows and pigs are conscious, pet owners generally take it for granted that their dog or cat is a conscious being capable of experiencing pain, love, and numerous other feelings. In truth, we have no way to verify whether anyone—a human, an animal, or a computer—is conscious. We regard entities as conscious not because we have proof of it but because we develop intimate relationships with them and become attached to them.[17]

Chatbots and other AIs may not have any feelings of their own, but they are now being trained to generate feelings in humans and form intimate relationships with us. This may well induce society to start treating at least some computers as conscious beings, granting them the same rights as humans. The legal path for doing so is already well established. In countries like the United States, commercial corporations are recognized as “legal persons” enjoying rights and liberties. AIs could be incorporated and thereby similarly recognized. Which means that even jobs and tasks that rely on forming mutual relationships with another person could potentially be automated.

What is clear is that the future of employment will be very volatile. Our big problem won’t be an absolute lack of jobs, but rather retraining and adjusting to an ever-changing job market. There will likely be financial difficulties—who will support people who have lost their old jobs while they are in transition, learning a new set of skills? There will surely be psychological difficulties, too, since changing jobs and retraining are stressful. And even if you have the financial and psychological ability to manage the transition, this will not be a long-term solution. Over the coming decades, old jobs will disappear, new jobs will emerge, but the new jobs too will rapidly change and vanish. So people will need to retrain and reinvent themselves not just once but many times, or they will become irrelevant. If three years of high unemployment could bring Hitler to power, what might never-ending turmoil in the job market do to democracy?

The Conservative Suicide

We already have a partial answer to this question. Democratic politics in the 2010s and early 2020s has undergone a radical transformation, which manifests itself in what can be described as the self-destruction of conservative parties. For many generations, democratic politics was a dialogue between conservative parties on the one side and progressive parties on the other. Looking at the complex system of human society, progressives cried, “It’s such a mess, but we know how to fix it. Let us try.” Conservatives objected, saying, “It’s a mess, but it still functions. Leave it alone. If you try to fix it, you’ll only make things worse.”

Progressives tend to downplay the importance of traditions and existing institutions and to believe that they know how to engineer better social structures from scratch. Conservatives tend to be more cautious. Their key insight, formulated most famously by Edmund Burke, is that social reality is much more complicated than the champions of progress grasp and that people aren’t very good at understanding the world and predicting the future. That’s why it’s best to keep things as they are—even if they seem unfair—and if some change is inescapable, it should be limited and gradual. Society functions through an intricate web of rules, institutions, and customs that accumulated through trial and error over a long time. Nobody comprehends how they are all connected. An ancient tradition may seem ridiculous and irrelevant, but abolishing it could cause unanticipated problems. In contrast, a revolution may seem overdue and just, but it can lead to far greater crimes than anything committed by the old regime. Witness what happened when the Bolsheviks tried to correct the many wrongs of tsarist Russia and engineer a perfect society from scratch.[18]

To be a conservative has been, therefore, more about pace than policy. Conservatives aren’t committed to any specific religion or ideology; they are committed to conserving whatever is already here and has worked more or less reasonably. Conservative Poles are Catholic, conservative Swedes are Protestant, conservative Indonesians are Muslim, and conservative Thais are Buddhist. In tsarist Russia, to be conservative meant to support the tsar. In the U.S.S.R. of the 1980s, to be conservative meant to support communist traditions and oppose glasnost, perestroika, and democratization. In the United States of the 1980s, to be conservative meant to support American democratic traditions and oppose communism and totalitarianism.[19]

Yet in the 2010s and early 2020s, conservative parties in numerous democracies have been hijacked by unconservative leaders such as Donald Trump and have been transformed into radical revolutionary parties. Instead of doing their best to conserve existing institutions and traditions, the new brand of conservative parties like the U.S. Republican Party is highly suspicious of them. For example, they reject the traditional respect owed to scientists, civil servants, and other serving elites, and view them instead with contempt. They similarly attack fundamental democratic institutions and traditions such as elections, refusing to concede defeat and to transfer power graciously. Instead of a Burkean program of conservation, the Trumpian program talks more of destroying existing institutions and revolutionizing society. The founding moment of Burkean conservatism was the storming of the Bastille, which Burke viewed with horror. On January 6, 2021, many Trump supporters observed the storming of the U.S. Capitol with enthusiasm. Trump supporters may explain that existing institutions are so dysfunctional that there is just no alternative to destroying them and building entirely new structures from scratch. But irrespective of whether this view is right or wrong, this is a quintessential revolutionary rather than conservative view. The conservative suicide has taken progressives utterly by surprise and has forced progressive parties like the U.S. Democratic Party to become the guardians of the old order and of established institutions.

Nobody knows for sure why all this is happening. One hypothesis is that the accelerating pace of technological change with its attendant economic, social, and cultural transformations might have made the moderate conservative program seem unrealistic. If conserving existing traditions and institutions is hopeless, and some kind of revolution looks inevitable, then the only means to thwart a left-wing revolution is by striking first and instigating a right-wing revolution. This was the political logic in the 1920s and 1930s, when conservative forces backed radical fascist revolutions in Italy, Germany, Spain, and elsewhere as a way—so they thought—to preempt a Soviet-style left-wing revolution.

But there was no reason to despair of the democratic middle path in the 1930s, and there is no reason to despair of it in the 2020s. The conservative suicide might be the result of groundless hysteria. As a system, democracy has already gone through several cycles of rapid changes and has so far always found a way to reinvent and reconstitute itself. For example, in the early 1930s Germany was not the only democracy hit by the financial crisis and the Great Depression. In the United States too unemployment reached 25 percent, and average incomes for workers in many professions fell by more than 40 percent between 1929 and 1933.[20] It was clear that the United States couldn’t go on with business as usual.

Yet no Hitler took over in the United States, and no Lenin did, either. Instead, in 1933 Franklin Delano Roosevelt orchestrated the New Deal and made the United States the global “arsenal of democracy.” U.S. democracy after the Roosevelt era was significantly different from before—providing a much more robust social safety net for citizens—but it avoided any radical revolution.[21] Ultimately, even Roosevelt’s conservative critics fell in line behind many of his programs and achievements and did not dismantle the New Deal institutions when they returned to power in the 1950s.[22] The economic crisis of the early 1930s had such different outcomes in the United States and Germany because politics is never the product of only economic factors. The Weimar Republic didn’t collapse just because of three years of high unemployment. Just as important, it was a new democracy, born in defeat, and lacking robust institutions and deep-rooted support.

When both conservatives and progressives resist the temptation of radical revolution, and stay loyal to democratic traditions and institutions, democracies prove themselves to be highly agile. Their self-correcting mechanisms enable them to ride the technological and economic waves better than more rigid regimes. Thus, those democracies that managed to survive the tumultuous 1960s—like the United States, Japan, and Italy—adapted far more successfully to the computer revolution of the 1970s and 1980s than either the communist regimes of Eastern Europe or the fascist holdouts of southern Europe and South America.

The most important human skill for surviving the twenty-first century is likely to be flexibility, and democracies are more flexible than totalitarian regimes. While computers are nowhere near their full potential, the same is true of humans. This is something we have discovered again and again throughout history. For example, one of the biggest and most successful transformations in the job market of the twentieth century resulted not from a technological invention but from unleashing the untapped potential of half the human species. To bring women into the job market didn’t require any genetic engineering or some other technological wizardry. It required letting go of some outdated myths and enabling women to fulfill the potential they always had.

In the coming decades the economy will likely undergo even bigger upheavals than the massive unemployment of the early 1930s or the entry of women into the job market. The flexibility of democracies, their willingness to question old mythologies, and their strong self-correcting mechanisms will therefore be crucial assets.[23] Democracies have spent generations cultivating these assets. It would be foolish to abandon them just when we need them most.

Unfathomable

In order to function, however, democratic self-correcting mechanisms need to understand the things they are supposed to correct. For a dictatorship, being unfathomable is helpful, because it protects the regime from accountability. For a democracy, being unfathomable is deadly. If citizens, lawmakers, journalists, and judges cannot understand how the state’s bureaucratic system works, they can no longer supervise it, and they lose trust in it.

Despite all the fears and anxieties that bureaucrats have sometimes inspired, prior to the computer age they could never become completely unfathomable, because they always remained human. Regulations, forms, and protocols were created by human minds. Officials might be cruel and greedy, but cruelty and greed were familiar human emotions that people could anticipate and manipulate, for example by bribing the officials. Even in a Soviet gulag or a Nazi concentration camp, the bureaucracy wasn’t totally alien. Its so-called inhumanity actually reflected human biases and flaws.

The human basis of bureaucracy gave humans at least the hope of identifying and correcting its mistakes. For example, in 1951 bureaucrats of the Board of Education in the town of Topeka, Kansas, refused to enroll the daughter of Oliver Brown at the elementary school near her home. Together with twelve other families who received similar refusals, Brown filed a lawsuit against the Topeka Board of Education, which eventually reached the U.S. Supreme Court.[24]

All members of the Topeka Board of Education were human beings, and consequently Brown, his lawyers, and the Supreme Court judges had a fairly good understanding of how they made their decision and of their probable interests and biases. The board members were all white, the Browns were Black, and the nearby school was a segregated school for white children. It was easy to understand, then, that racism was the reason why the bureaucrats refused to enroll Brown’s daughter in the school.

It was also possible to comprehend where the myths of racism originally came from. Racism argued that humanity was divided into races, that the white race was superior to other races, that any contact with members of the Black race could pollute the purity of whites, and that therefore Black children should be prevented from mixing with white children. This was an amalgam of two well-known biological dramas that often go together: Us versus Them, and Purity versus Pollution. Almost every human society in history has enacted some version of this bio-drama, and historians, sociologists, anthropologists, and biologists understand why it is so appealing to humans, and also why it is profoundly flawed. While racism has borrowed its basic plotline from evolution, the concrete details are pure mythology. There is no biological basis for separating humanity into distinct races, and there is absolutely no biological reason to believe that one race is “pure” while another is “impure.”

American white supremacists have tried to justify their position by appealing to various hallowed texts, most notably the U.S. Constitution and the Bible. The U.S. Constitution originally legitimized racial segregation and the supremacy of the white race, reserving full civil rights for white people and allowing the enslavement of Black people. The Bible not only sanctified slavery in the Ten Commandments and numerous other passages but also placed a curse on the offspring of Ham—the alleged forefather of Africans—saying that “the lowest of slaves will he be to his brothers” (Genesis 9:25).

Both these texts, however, were generated by humans, and therefore humans could comprehend their origins and imperfections and at least attempt to correct their mistakes. It is possible for humans to understand the political interests and cultural biases that prevailed in the ancient Middle East and in eighteenth-century America and that caused the human authors of the Bible and of the U.S. Constitution to legitimize racism and slavery. This understanding allows people to either amend or ignore these texts. In 1868 the Fourteenth Amendment to the U.S. Constitution granted equal legal protection to all citizens. In 1954, in its landmark Brown v. Board of Education ruling, the U.S. Supreme Court held that segregating schools by race was an unconstitutional violation of the Fourteenth Amendment. As for the Bible, while no mechanism existed to amend the Tenth Commandment or Genesis 9:25, humans have reinterpreted the text in different ways through the ages, and many ultimately came to reject its authority altogether. In Brown v. Board of Education, U.S. Supreme Court justices felt no need to take the biblical text into account.[25]

But what might happen in the future, if some social credit algorithm denies the request of a low-credit child to enroll in a high-credit school? As we saw in chapter 8, computers are likely to suffer from their own biases and to invent inter-computer mythologies and bogus categories. How would humans be able to identify and correct such mistakes? And how would flesh-and-blood Supreme Court justices be able to decide on the constitutionality of algorithmic decisions? Would they be able to understand how the algorithms reach their conclusions?

These are no longer purely theoretical questions. In February 2013, a drive-by shooting occurred in the town of La Crosse, Wisconsin. Police officers later spotted the car involved in the shooting and arrested the driver, Eric Loomis. Loomis denied participating in the shooting, but pleaded guilty to two less severe charges: “attempting to flee a traffic officer,” and “operating a motor vehicle without the owner’s consent.”[26] When the judge came to determine the sentence, he consulted with an algorithm called COMPAS, which Wisconsin and several other U.S. states were using in 2013 to evaluate the risk of reoffending. The algorithm evaluated Loomis as a high-risk individual, likely to commit more crimes in the future. This algorithmic assessment influenced the judge to sentence Loomis to six years in prison—a harsh punishment for the relatively minor offenses he admitted to.[27]

Loomis appealed to the Wisconsin Supreme Court, arguing that the judge violated his right to due process. Neither the judge nor Loomis understood how the COMPAS algorithm made its evaluation, and when Loomis asked to get a full explanation, the request was denied. The COMPAS algorithm was the private property of the Northpointe company, and the company argued that the algorithm’s methodology was a trade secret.[28] Yet without knowing how the algorithm made its decisions, how could Loomis or the judge be sure that it was a reliable tool, free from bias and error? A number of studies have since shown that the COMPAS algorithm might indeed have harbored several problematic biases, probably picked up from the data on which it had been trained.[29]
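What does it mean, in practice, to show that a risk-assessment algorithm is biased? The sketch below is a deliberately simplified illustration in Python, using invented data rather than anything from COMPAS, Northpointe, or the actual studies; it shows only the general form of one common audit, which compares false-positive rates, that is, how often people who did not in fact reoffend were nevertheless labeled high risk, across two groups.

```python
# Deliberately simplified, hypothetical illustration: the records below are invented
# and do not come from COMPAS, Northpointe, or any real study.
# One common audit compares false-positive rates across groups: among people who
# did NOT reoffend, how often were they nevertheless labeled "high risk"?

records = [
    # (group, labeled_high_risk, actually_reoffended)
    ("A", True,  False), ("A", False, False), ("A", True,  True),  ("A", False, False),
    ("B", True,  False), ("B", True,  False), ("B", False, False), ("B", True,  True),
]

def false_positive_rate(group):
    """Share of non-reoffenders in `group` who were still labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("A", "B"):
    print(group, round(false_positive_rate(group), 2))
# A large gap between the two rates would mean that one group bears far more
# unwarranted "high risk" labels than the other, even with identical outcomes.
```

Real audits are statistically far more careful than this, but the point stands: without access to the model and the data it was trained and scored on, even such an elementary check is impossible for a defendant to perform.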

In Loomis v. Wisconsin (2016) the Wisconsin Supreme Court nevertheless ruled against Loomis. The judges argued that using algorithmic risk assessment is legitimate even when the algorithm’s methodology is not disclosed either to the court or to the defendant. Justice Ann Walsh Bradley wrote that since COMPAS made its assessment based on data that was either publicly available or provided by the defendant himself, Loomis could have denied or explained all the data the algorithm used. This opinion ignored the fact that accurate data may well be wrongly interpreted and that it was impossible for Loomis to deny or explain all the publicly available data on him.

The Wisconsin Supreme Court was not completely unaware of the danger inherent in relying on opaque algorithms. Therefore, while permitting the practice, it ruled that whenever judges receive algorithmic risk assessments, these must include a written warning for the judges about the algorithms’ potential biases. The court further advised judges to be cautious when relying on such algorithms. Unfortunately, this caveat was an empty gesture. The court did not provide any concrete instructions for judges on how they should exercise such caution. In its discussion of the case, the Harvard Law Review concluded that “most judges are unlikely to understand algorithmic risk assessments.” It then cited one of the Wisconsin Supreme Court justices, who noted that despite getting lengthy explanations about the algorithm, they themselves still had difficulty understanding it.[30]

Loomis appealed to the U.S. Supreme Court. However, on June 26, 2017, the court declined to hear the case, effectively endorsing the ruling of the Wisconsin Supreme Court. Now consider that the algorithm that evaluated Loomis as a high-risk individual in 2013 was an early prototype. Since then, far more sophisticated and complex risk-assessment algorithms have been developed and have been handed more expansive purviews. By the early 2020s citizens in numerous countries routinely get prison sentences based in part on risk assessments made by algorithms that neither the judges nor the defendants comprehend.[31] And prison sentences are just the tip of the iceberg.

The Right to an Explanation

Computers are making more and more decisions about us, both mundane and life-changing. In addition to prison sentences, algorithms increasingly have a hand in deciding whether to offer us a place at college, give us a job, provide us with welfare benefits, or grant us a loan. They similarly help determine what kind of medical treatment we receive, what insurance premiums we pay, what news we hear, and who would ask us on a date.[32]

As society entrusts more and more decisions to computers, it undermines the viability of democratic self-correcting mechanisms and of democratic transparency and accountability. How can elected officials regulate unfathomable algorithms? There is, consequently, a growing demand to enshrine a new human right: the right to an explanation. The European Union’s General Data Protection Regulation (GDPR), which came into effect in 2018, says that if an algorithm makes a decision about a human—refusing to extend us credit, for example—that human is entitled to obtain an explanation of the decision and to challenge it in front of some human authority.[33] Ideally, that should keep algorithmic bias in check and allow democratic self-correcting mechanisms to identify and correct at least some of the computers’ more grievous mistakes.
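What such an explanation could look like is easiest to see with a simple, transparent model. The following sketch in Python uses a toy credit score with invented features, weights, and threshold; it is not how real lenders score applicants, nor the format the GDPR prescribes, but it illustrates the principle that a simple model’s decision can be broken down into named, contestable factors.

```python
# Toy example only: the features, weights, and threshold are invented for illustration
# and do not reflect any real credit-scoring system or legal requirement.

applicant = {"income": 30_000, "years_employed": 1.5, "missed_payments": 3}

weights = {"income": 0.0004, "years_employed": 2.0, "missed_payments": -6.0}
base_score = 10.0
threshold = 12.0  # in this toy model, scores below the threshold are refused credit

# Each feature's contribution is simply its value times its weight.
contributions = {name: weights[name] * value for name, value in applicant.items()}
score = base_score + sum(contributions.values())

print(f"score = {score:.1f}, threshold = {threshold}, refused = {score < threshold}")
for name, contribution in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {name}: {contribution:+.1f}")
# The printout amounts to an explanation: the refusal can be traced to named factors,
# and the applicant can challenge any of them. Deep neural networks offer no comparably
# direct breakdown, which is the difficulty discussed in the pages that follow.
```

For a model this simple, the right to an explanation is easy to honor; the question is what happens when it is not.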

But can this right be fulfilled in practice? Mustafa Suleyman is a world expert on this subject. He is a co-founder of DeepMind, one of the world’s most important AI enterprises, responsible for developing the AlphaGo program, among other achievements. AlphaGo was designed to play go, a strategy board game in which two players try to defeat each other by surrounding and capturing territory. Invented in ancient China, the game is far more complex than chess. Consequently, even after computers defeated human world chess champions, experts still believed that computers would never best humanity in go.

That’s why both go professionals and computer experts were stunned in March 2016 when AlphaGo defeated the South Korean go champion Lee Sedol. In his 2023 book, The Coming Wave, Suleyman describes one of the most important moments in their match—a moment that redefined AI and that is recognized in many academic and governmental circles as a crucial turning point in history. It happened during the second game in the match, on March 10, 2016.

“Then…came move number 37,” writes Suleyman. “It made no sense. AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a ‘very strange move’ and thought it was ‘a mistake.’ It was so unusual that Sedol took fifteen minutes to respond and even got up from the board to take a walk outside. As we watched from our control room, the tension was unreal. Yet as the endgame approached, that ‘mistaken’ move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years.”[34]

Move 37 is an emblem of the AI revolution for two reasons. First, it demonstrated the alien nature of AI. In East Asia go is considered much more than a game: it is a treasured cultural tradition. Alongside calligraphy, painting, and music, go has been one of the four arts that every refined person was expected to know. For over twenty-five hundred years, tens of millions of people have played go, and entire schools of thought have developed around the game, espousing different strategies and philosophies. Yet during all those millennia, human minds have explored only certain areas in the landscape of go. Other areas were left untouched, because human minds just didn’t think to venture there. AI, being free from the limitations of human minds, discovered and explored these previously hidden areas.[35]

Second, move 37 demonstrated the unfathomability of AI. Even after AlphaGo played it to achieve victory, Suleyman and his team couldn’t explain how AlphaGo decided to play it. Even if a court had ordered DeepMind to provide Lee Sedol with an explanation, nobody could fulfill that order. Suleyman writes, “Us humans face a novel challenge: will new inventions be beyond our grasp? Previously creators could explain how something worked, why it did what it did, even if this required vast detail. That’s increasingly no longer true. Many technologies and systems are becoming so complex that they’re beyond the capacity of any one individual to truly understand them…. In AI, the neural networks moving toward autonomy are, at present, not explainable. You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. Engineers can’t peer beneath the hood and easily explain in granular detail what caused something to happen. GPT-4, AlphaGo, and the rest are black boxes, their outputs and decisions based on opaque and impossibly intricate chains of minute signals.”[36]

The rise of unfathomable alien intelligence undermines democracy. If more and more decisions about people’s lives are made in a black box, so that voters cannot understand or challenge them, democracy ceases to function. In particular, what happens when crucial decisions not just about individual lives but even about collective matters like the Federal Reserve’s interest rate are made by unfathomable algorithms? Human voters may keep choosing a human president, but wouldn’t this be just an empty ceremony? Even today, only a small fraction of humanity truly understands the financial system. A 2016 survey by the OECD found that most people had difficulty grasping even simple financial concepts like compound interest.[37] A 2014 survey of British MPs—charged with regulating one of the world’s most important financial hubs—found that only 12 percent accurately understood that new money is created when banks make loans, a fact that is among the most basic principles of the modern financial system.[38] As the 2007–8 financial crisis indicated, more complex financial devices and principles, like those behind collateralized debt obligations (CDOs), were intelligible to only a few financial wizards. What happens to democracy when AIs create even more complex financial devices and when the number of humans who understand the financial system drops to zero?

The increasing unfathomability of our information network is one of the reasons for the recent wave of populist parties and charismatic leaders. When people can no longer make sense of the world, and when they feel overwhelmed by immense amounts of information they cannot digest, they become easy prey for conspiracy theories, and they turn for salvation to something they do understand—a human. Unfortunately, while charismatic leaders certainly have their advantages, no single human, however inspiring or brilliant, can single-handedly decipher how the algorithms that increasingly dominate the world work, or make sure that they are fair. The problem is that algorithms make decisions by relying on numerous data points, whereas humans find it very difficult to consciously reflect on a large number of data points and weigh them against each other. We prefer to work with single data points. That’s why, when faced with complex issues—whether a loan request, a pandemic, or a war—we often seek a single reason to take a particular course of action and ignore all other considerations. This is the fallacy of the single cause.[39]

We are so bad at weighing together many different factors that when people give a large number of reasons for a particular decision, it usually sounds suspicious. Suppose a good friend failed to attend our wedding. If she provides us with a single explanation—“My mom was in the hospital and I had to visit her”—that sounds plausible. But what if she lists fifty different reasons why she decided not to come: “My mom was a bit under the weather, and I had to take my dog to the vet sometime this week, and I had this project at work, and it was raining, and…and I know none of these fifty reasons by itself justifies my absence, but when I added all of them together, they kept me from attending your wedding.” We don’t say things like that, because we don’t think along such lines. We don’t consciously list fifty different reasons in our mind, give each of them a certain weight, aggregate all the weights, and thereby reach a conclusion.

But this is precisely how algorithms assess our criminal potential or our creditworthiness. The COMPAS algorithm, for example, made its risk assessments by taking into account the answers to a 137-item questionnaire.[40] The same is true of a bank algorithm that refuses to give us a loan. If the EU’s GDPR regulations force the bank to explain the algorithm’s decision, the explanation will not come in the shape of a single sentence; rather, it is likely to come in the form of hundreds or even thousands of pages full of numbers and equations.

“Our algorithm,” the imaginary bank letter might read, “uses a precise points system to evaluate all applications, taking a thousand different types of data points into account. It adds all the data points to reach an overall score. People whose overall score is negative are considered low-credit persons, too risky to be given a loan. Your overall score was −378, which is why your loan application was refused.” The letter might then provide a detailed list of the thousand factors the algorithm took into account, including things that most humans might find irrelevant, such as the exact hour the application was submitted[41] or the type of smartphone the applicant used. Thus on page 601 of its letter, the bank might explain that “you filed your application from your smartphone, which was the latest iPhone model. By analyzing millions of previous loan applications, our algorithm discovered a pattern—people who use the latest iPhone model to file their application are 0.08 percent more likely to repay the loan. The algorithm therefore added 8 points to your overall score for that. However, at the time your application was sent from your iPhone, its battery was down to 17 percent. By analyzing millions of previous loan applications, our algorithm discovered another pattern: people who allow their smartphone’s battery to go below 25 percent are 0.5 percent less likely to repay the loan. You lost 50 points for that.”[42]

You may well feel that the bank treated you unjustly. “Is it reasonable to refuse my loan application,” you might complain, “just because my phone battery was low?” That, however, would be a misunderstanding. “The battery wasn’t the only reason,” the bank would explain. “It was only one out of a thousand factors our algorithm took into account.”

“But didn’t your algorithm see that only twice in the last ten years was my bank account overdrawn?”

“It obviously noticed that,” the bank might reply. “Look on page 453. You got 300 points for that. But all the other reasons brought your aggregated score down to −378.”
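To make the arithmetic of the imaginary letter more concrete, here is a minimal sketch of an additive scoring scheme of the kind it describes. Everything in it is hypothetical: the feature names, the weights, and the threshold are illustrations invented for this sketch, not the workings of any real bank’s model.

```python
# A minimal sketch of an additive credit-scoring scheme of the kind described
# in the imaginary bank letter. All feature names, weights, and values are
# hypothetical illustrations, not any real lender's model.

POINT_WEIGHTS = {
    "uses_latest_iphone": 8,            # +8 points, as in the imaginary letter
    "battery_below_25_percent": -50,    # -50 points, as in the imaginary letter
    "few_overdrafts_last_decade": 300,  # +300 points, as in the imaginary reply
    # ...a real system might take a thousand such factors into account
}

def credit_score(applicant):
    """Sum the points for every factor that applies to this applicant."""
    return sum(points for factor, points in POINT_WEIGHTS.items()
               if applicant.get(factor, False))

applicant = {
    "uses_latest_iphone": True,
    "battery_below_25_percent": True,
    "few_overdrafts_last_decade": True,
}

score = credit_score(applicant)
print(f"Overall score: {score}")
print("Loan refused" if score < 0 else "Loan approved")
```

With only three of the thousand hypothetical factors filled in, the sketch yields a positive score; in the imaginary letter, the remaining factors dragged the total down to −378. The point is that no single factor explains the outcome; only the aggregate does.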

While we may find this way of making decisions alien, it obviously has potential advantages. When making a decision, it is generally a good idea to take into account all relevant data points rather than just one or two salient facts. There is much room for argument, of course, about who gets to define the relevance of information. Who decides whether something like smartphone models—or skin color—should be considered relevant to loan applications? But no matter how we define relevance, the ability to take more data into account is likely to be an asset. Indeed, the problem with many human prejudices is that they focus on just one or two data points—like someone’s skin color, disability, or gender—while ignoring other information. Banks and other institutions are increasingly relying on algorithms to make decisions, precisely because algorithms can take many more data points into account than humans can.

But when it comes to providing explanations, this creates a potentially insurmountable obstacle. How can a human mind analyze and evaluate a decision made on the basis of so many data points? We may well think that the Wisconsin Supreme Court should have forced the Northpointe company to reveal how the COMPAS algorithm decided that Eric Loomis was a high-risk person. But even if the full data had been disclosed, could either Loomis or the court have made sense of it?

It’s not just that we need to take numerous data points into account. Perhaps most important, we cannot understand the way the algorithms find patterns in the data and decide on the allocation of points. Even if we know that a banking algorithm deducts a certain number of points from people who allow their smartphone batteries to go below 25 percent, how can we evaluate whether that’s fair? The algorithm wasn’t fed this rule by a human engineer; it reached that conclusion by discovering a pattern in millions of previous loan applications. Can an individual human client go over all that data and assess whether that pattern is indeed reliable and unbiased?[43]
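The claim that the rule was discovered rather than programmed can be illustrated with a sketch of how such weights are typically learned. The dataset below is synthetic, and an ordinary logistic regression stands in for whatever model a real lender might actually use; none of this reflects any specific bank’s practice.

```python
# A sketch of how a rule like "a low battery costs you points" might be
# learned from data rather than written by an engineer. The dataset is
# synthetic and logistic regression is a stand-in for a real lender's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical features of past applicants.
battery_low = rng.integers(0, 2, n)    # 1 = battery below 25% when applying
latest_iphone = rng.integers(0, 2, n)  # 1 = application filed from latest iPhone
overdrafts = rng.poisson(1.0, n)       # overdrafts in the last decade

# Synthetic "ground truth": repayment mostly depends on overdrafts, with
# small effects from the other features.
logit = 1.0 - 0.4 * overdrafts - 0.05 * battery_low + 0.01 * latest_iphone
repaid = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([battery_low, latest_iphone, overdrafts])
model = LogisticRegression().fit(X, repaid)

# The learned coefficients are the "pattern" that no engineer typed in.
for name, coef in zip(["battery_low", "latest_iphone", "overdrafts"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

The learned coefficients play the role of the points in the bank’s letter: no engineer chose them, and whether they encode a genuine pattern or a spurious one is exactly what an individual applicant cannot easily verify.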

There is, however, a silver lining to this cloud of numbers. While individual laypersons may be unable to vet complex algorithms, a team of experts getting help from their own AI sidekicks can potentially assess the fairness of algorithmic decisions even more reliably than anyone can assess the fairness of human decisions. After all, while human decisions may seem to rely on just those few data points we are conscious of, in fact our decisions are subconsciously influenced by thousands of additional data points. Being unaware of these subconscious processes, when we deliberate on our decisions or explain them, we often engage in post hoc single-point rationalizations for what really happens as billions of neurons interact inside our brain.[44] Accordingly, if a human judge sentences us to six years in prison, how can we—or indeed the judge—be sure that the decision was shaped only by fair considerations and not by a subconscious racial bias or by the fact that the judge was hungry?[45]

In the case of flesh-and-blood judges, the problem cannot be solved, at least not with our current knowledge of biology. In contrast, when an algorithm makes a decision, we can in principle know every one of the algorithm’s many considerations and the exact weight given to each. Thus several expert teams—ranging from the U.S. Department of Justice to the nonprofit newsroom ProPublica—have picked apart the COMPAS algorithm in order to assess its potential biases.[46] Such teams can harness not only the collective effort of many humans but also the power of computers. Just as it is often best to set a thief to catch a thief, so we can use one algorithm to vet another.
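ProPublica’s audit of COMPAS hinged on comparing error rates across demographic groups, for example the rate at which people who did not reoffend were nevertheless flagged as high risk. A minimal sketch of that kind of check is given below; the records are invented for illustration, and a real audit involves far more care about data quality, definitions of fairness, and statistical uncertainty.

```python
# A minimal sketch of one fairness check of the kind applied to COMPAS:
# comparing false positive rates across groups. The records are invented.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
    # ...a real audit would use thousands of real cases
]

false_positives = defaultdict(int)  # flagged high risk but did not reoffend
negatives = defaultdict(int)        # everyone who did not reoffend

for group, predicted_high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high_risk:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.0%}")
```

An auditing institution could run checks of this kind at scale, with the help of its own AI tools, over the full decision logs of the algorithm being vetted.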

This raises the question of how we can be sure that the vetting algorithm itself is reliable. Ultimately, there is no purely technological solution to this recursive problem. No matter which technology we develop, we will have to maintain bureaucratic institutions that will audit algorithms and give or refuse them the seal of approval. Such institutions will combine the powers of humans and computers to make sure that new algorithmic systems are safe and fair. Without such institutions, even if we pass laws that provide humans with a right to an explanation, and even if we enact regulations against computer biases, who could enforce these laws and regulations?

Nosedive

To vet algorithms, regulatory institutions will need not only to analyze them but also to translate their discoveries into stories that humans can understand. Otherwise, we will never trust the regulatory institutions and might instead put our faith in conspiracy theories and charismatic leaders. As noted in chapter 3, it has always been difficult for humans to understand bureaucracy, because bureaucracies have deviated from the script of the biological dramas, and most artists have lacked the will or the ability to depict bureaucratic dramas. For example, novels, movies, and TV series about twenty-first-century politics tend to focus on the feuds and love affairs of a few powerful families, as if present-day states were governed in the same way as ancient tribes and kingdoms. This artistic fixation with the biological dramas of dynasties obscures the very real changes that have taken place over the centuries in the dynamics of power.

As computers increasingly replace human bureaucrats and human mythmakers, the deep structure of power will change once again. To survive, democracies require not just dedicated bureaucratic institutions that can scrutinize these new structures but also artists who can explain the new structures in accessible and entertaining ways. The episode “Nosedive” of the sci-fi series Black Mirror is one example of how this can be done successfully.

Produced in 2016, at a time when few had heard about social credit systems, “Nosedive” brilliantly explained how such systems work and what threats they pose. The episode tells the story of a woman called Lacie who lives with her brother Ryan but wants to move to her own apartment. To get a discount on the new apartment, she needs to increase her social credit score from 4.2 to 4.5 (out of 5). Being friends with high-score individuals raises your own score, so Lacie tries to renew her contact with Naomi, a childhood friend who is currently rated 4.8. Lacie is invited to Naomi’s wedding, but on the way there she spills coffee on a high-score person, which causes her own score to drop a little, which in turn causes the airline to deny her a seat. From there everything that can go wrong does go wrong, Lacie’s rating takes a nosedive, and she ends up in jail with a score of less than 1.

This story relies on some elements of traditional biological dramas—“boy meets girl” (the wedding), sibling rivalry (the tension between Lacie and Ryan), and, most important, status competition (the main issue of the episode). But the real hero and driving force of the plot isn’t Lacie or Naomi, but rather the disembodied algorithm running the social credit system. The algorithm completely changes the dynamics of the old biological dramas—especially the dynamics of status competition. Whereas previously humans engaged in status competition only some of the time and enjoyed welcome breaks from this highly stressful situation, the omnipresent social credit algorithm eliminates the breaks. “Nosedive” is not a worn-out story about biological status competition, but rather a prescient exploration of what happens when computer technology changes the rules of status competitions.

If bureaucrats and artists learn to cooperate, and if both rely on help from the computers, it might be possible to prevent the computer network from becoming unfathomable. As long as democratic societies understand the computer network, their self-correcting mechanisms are our best guarantee against AI abuses. Thus the EU’s AI Act, proposed in 2021, singled out social credit systems like the one that stars in “Nosedive” as one of the few types of AI that are totally prohibited, because they might “lead to discriminatory outcomes and the exclusion of certain groups” and because “they may violate the right to dignity and non-discrimination and the values of equality and justice.”[47] As with total surveillance regimes, so also with social credit systems, the fact that they could be created doesn’t mean that we must create them.

Digital Anarchy

The new computer network poses one final threat to democracies. Instead of digital totalitarianism, it could foster digital anarchy. The decentralized nature of democracies and their strong self-correcting mechanisms provide a shield against totalitarianism, but they also make it more difficult to ensure order. To function, a democracy needs to meet two conditions: it needs to enable a free public conversation on key issues, and it needs to maintain a minimum of social order and institutional trust. Free conversation must not slip into anarchy. Especially when dealing with urgent and important problems, the public debate should be conducted according to accepted rules, and there should be a legitimate mechanism to reach some kind of final decision, even if not everybody likes it.

Before the advent of newspapers, radios, and other modern information technology, no large-scale society managed to combine free debates with institutional trust, so large-scale democracy was impossible. Now, with the rise of the new computer network, might large-scale democracy again become impossible? One difficulty is that the computer network makes it easier to join the debate. In the past, organizations like newspapers, radio stations, and established political parties acted as gatekeepers, deciding who was heard in the public sphere. Social media undermined the power of these gatekeepers, leading to a more open but also more anarchical public conversation.

Whenever new groups join the conversation, they bring with them new viewpoints and interests, and often question the old consensus about how to conduct the debate and reach decisions. The rules of discussion must be negotiated anew. This is a potentially positive development, one that can lead to a more inclusive democratic system. After all, correcting previous biases and allowing previously disenfranchised people to join the public discussion is a vital part of democracy. However, in the short term this creates disturbances and disharmony. If no agreement is reached on how to conduct the public debate and how to reach decisions, the result is anarchy rather than democracy.

The anarchical potential of AI is particularly alarming, because AI does not merely allow new human groups to join the public debate. For the first time ever, democracy must contend with a cacophony of nonhuman voices, too. On many social media platforms, bots constitute a sizable minority of participants. One analysis estimated that out of a sample of 20 million tweets generated during the 2016 U.S. election campaign, 3.8 million (almost 20 percent) were generated by bots.[48]

By the early 2020s, things got worse. A 2020 study assessed that bots were producing 43.2 percent of tweets.[49] A more comprehensive 2022 study by the digital intelligence agency Similarweb found that 5 percent of Twitter users were probably bots, but they generated “between 20.8% and 29.2% of the content posted to Twitter.”[50] When humans try to debate a crucial question like whom to elect as U.S. president, what happens if many of the voices they hear are produced by computers?

Another worrying trend concerns content. Bots were initially deployed to influence public opinion by the sheer volume of messages they disseminated. They retweeted or recommended certain human-produced content, but they couldn’t create new ideas themselves, nor could they forge intimate bonds with humans. However, the new breed of generative AIs like ChatGPT can do exactly that. In a 2023 study published in Science Advances, researchers asked humans and ChatGPT to create both accurate and deliberately misleading short texts on issues such as vaccines, 5G technology, climate change, and evolution. The texts were then presented to seven hundred humans, who were asked to evaluate their reliability. The humans were good at recognizing the falsity of human-produced disinformation but tended to regard AI-produced disinformation as accurate.[51]

So, what happens to democratic debates when millions—and eventually billions—of highly intelligent bots are not only composing extremely compelling political manifestos and creating deepfake images and videos but also able to win our trust and friendship? If I engage online in a political debate with an AI, it is a waste of time for me to try to change the AI’s opinions; being a nonconscious entity, it doesn’t really care about politics, and it cannot vote in the elections. But the more I talk with the AI, the better it gets to know me, so it can gain my trust, hone its arguments, and gradually change my views. In the battle for hearts and minds, intimacy is an extremely powerful weapon. Previously, political parties could command our attention, but they had difficulty mass-producing intimacy. Radio sets could broadcast a leader’s speech to millions, but they could not befriend the listeners. Now a political party, or even a foreign government, could deploy an army of bots that build friendships with millions of citizens and then use that intimacy to influence their worldview.

Finally, algorithms are not only joining the conversation; they are increasingly orchestrating it. Social media allows new groups of humans to challenge the old rules of debate. But negotiations about the new rules are often not conducted by humans at all. As explained in our previous analysis of social media algorithms, it is frequently the algorithms themselves that make the rules. In the nineteenth and twentieth centuries, when media moguls censored some views and promoted others, this might have undermined democracy, but at least the moguls were humans, and their decisions could be subjected to democratic scrutiny. It is far more dangerous if we allow inscrutable algorithms to decide which views to disseminate.

If manipulative bots and inscrutable algorithms come to dominate the public conversation, this could cause democratic debate to collapse exactly when we need it most. Just when we must make momentous decisions about fast-evolving new technologies, the public sphere will be flooded by computer-generated fake news, citizens will not be able to tell whether they are having a debate with a human friend or a manipulative machine, and no consensus will remain about the most basic rules of discussion or the most basic facts. This kind of anarchical information network cannot produce either truth or order and cannot be sustained for long. If we end up with anarchy, the next step would probably be the establishment of a dictatorship as people agree to trade their liberty for some certainty.

Ban the Bots

In the face of the threat algorithms pose to the democratic conversation, democracies are not helpless. They can and should take measures to regulate AI and prevent it from polluting our infosphere with fake people spewing fake news. The philosopher Daniel Dennett has suggested that we can take inspiration from traditional regulations in the money market.[52] Ever since coins and later banknotes were invented, it has always been technically possible to counterfeit them. Counterfeiting posed an existential danger to the financial system, because it eroded people’s trust in money. If bad actors had flooded the market with counterfeit money, the financial system would have collapsed. Yet societies managed to protect the financial system for thousands of years by enacting laws against counterfeiting money. As a result, only a relatively small percentage of money in circulation was forged, and people’s trust in it was maintained.[53]

What’s true of counterfeiting money should also be true of counterfeiting humans. Just as governments took decisive action to protect trust in money, it makes sense to take equally decisive measures to protect trust in humans. Prior to the rise of AI, one human could pretend to be another, and society punished such frauds. But society didn’t bother to outlaw the creation of counterfeit humans, since the technology to do so didn’t exist. Now that AI can pass itself off as human, it threatens to destroy trust between humans and to unravel the fabric of society. Dennett suggests, therefore, that governments should outlaw fake humans as decisively as they have previously outlawed fake money.[54]

The law should prohibit not just deepfaking specific real people—creating a fake video of the U.S. president, for example—but also any attempt by a nonhuman agent to pass itself off as a human. If anyone complains that such strict measures violate freedom of speech, they should be reminded that bots don’t have freedom of speech. Banning human beings from a public platform is a sensitive step, and democracies should be very careful about such censorship. However, banning bots is a simple issue: it doesn’t violate anyone’s rights, because bots don’t have rights.[55]

None of this means that democracies must ban all bots, algorithms, and AIs from participating in any discussion. Digital agents are welcome to join many conversations, provided they don’t pretend to be humans. For example, AI doctors can be extremely helpful. They can monitor our health twenty-four hours a day, offer medical advice tailored to our individual medical conditions and personality, and answer our questions with infinite patience. But the AI doctor should never try to pass itself off as a human.

Another important measure democracies can adopt is to ban unsupervised algorithms from curating key public debates. We can certainly continue to use algorithms to run social media platforms; obviously, no human can do that. But the principles the algorithms use to decide which voices to silence and which to amplify must be vetted by a human institution. While we should be careful about censoring genuine human views, we can forbid algorithms to deliberately spread outrage. At the very least, corporations should be transparent about the curation principles their algorithms follow. If they use outrage to capture our attention, let them be clear about their business model and about any political connections they might have. If the algorithm systematically suppresses videos that aren’t aligned with the company’s political agenda, users should know this.

These are just a few of numerous suggestions made in recent years for how democracies could regulate the entry of bots and algorithms into the public conversation. Naturally, each has its advantages and drawbacks, and none would be easy to implement. Also, since the technology is developing so rapidly, regulations are likely to become outdated quickly. What I would like to point out here is only that democracies can regulate the information market and that their very survival depends on these regulations. The naive view of information opposes regulation and believes that a completely free information market will spontaneously generate truth and order. This is completely divorced from the actual history of democracy. Preserving the democratic conversation has never been easy, and all venues where this conversation has previously taken place—from parliaments and town halls to newspapers and radio stations—have required regulation. This is doubly true in an era when an alien form of intelligence threatens to dominate the conversation.

The Future of Democracy

For most of history large-scale democracy was impossible because information technology wasn’t sophisticated enough to hold a large-scale political conversation. Millions of people spread over tens of thousands of square kilometers didn’t have the tools to conduct a real-time discussion of public affairs. Now, ironically, democracy may prove impossible because information technology is becoming too sophisticated. If unfathomable algorithms take over the conversation, and particularly if they quash reasoned arguments and stoke hate and confusion, public discussion cannot be maintained. Yet if democracies do collapse, it will likely result not from some kind of technological inevitability but from a human failure to regulate the new technology wisely.

We cannot foretell how things will play out. At present, however, it is clear that the information network of many democracies is breaking down. Democrats and Republicans in the United States can no longer agree on even basic facts—such as who won the 2020 presidential elections—and can hardly hold a civil conversation anymore. Bipartisan cooperation in Congress, once a fundamental feature of U.S. politics, has almost disappeared.[56] The same radicalizing processes occur in many other democracies, from the Philippines to Brazil. When citizens cannot talk with one another, and when they view each other as enemies rather than political rivals, democracy is untenable.

Nobody knows for sure what is causing the breakdown of democratic information networks. Some say it results from ideological fissures, but in fact in many dysfunctional democracies the ideological gaps don’t seem to be bigger than in previous generations. In the 1960s, the United States was riven by deep ideological conflicts about the civil rights movement, the sexual revolution, the Vietnam War, and the Cold War. These tensions caused a surge in political violence and assassinations, but Republicans and Democrats were still able to agree on the results of elections, they maintained a common belief in democratic institutions like the courts,[57] and they were able to work together in Congress at least on some issues. For example, the Civil Rights Act of 1964 was passed in the Senate with the support of forty-six Democrats and twenty-seven Republicans. Is the ideological gap in the 2020s that much bigger than it was in the 1960s? And if it isn’t ideology, what is driving people apart?

Many point the finger at social media algorithms. We have explored the divisive impact of social media in previous chapters, but despite the damning evidence it seems that there must be additional factors at play. The truth is that while we can easily observe that the democratic information network is breaking down, we aren’t sure why. That itself is a characteristic of the times. The information network has become so complicated, and it relies to such an extent on opaque algorithmic decisions and inter-computer entities, that it has become very difficult for humans to answer even the most basic of political questions: Why are we fighting each other?

If we cannot discover what is broken and fix it, large-scale democracies may not survive the rise of computer technology. If this indeed comes to pass, what might replace democracy as the dominant political system? Does the future belong to totalitarian regimes, or might computers make totalitarianism untenable too? As we shall see, human dictators have their own reasons to be terrified of AI.