Epilogue


In late 2016, a few months after AlphaGo defeated Lee Sedol and as Facebook algorithms were stoking dangerous racist sentiments in Myanmar, I published Homo Deus. Though my academic training had been in medieval and early modern military history, and though I have no background in the technical aspects of computer science, I suddenly found myself, post-publication, with the reputation of an AI expert. This opened the doors to the offices of scientists, entrepreneurs, and world leaders interested in AI and afforded me a fascinating, privileged look into the complex dynamics of the AI revolution.

It turned out that my previous experience researching topics such as English strategy in the Hundred Years’ War and studying paintings from the Thirty Years’ War[1] wasn’t entirely unrelated to this new field. In fact, it gave me an unusual historical perspective on the events unfolding rapidly in AI labs, corporate offices, military headquarters, and presidential palaces. Over the past eight years I have had numerous public and private discussions about AI, particularly about the dangers it poses, and with each passing year the tone has become more urgent. Conversations that in 2016 felt like idle philosophical speculations about a distant future had, by 2024, acquired the focused intensity of an emergency room.

I am neither a politician nor a businessperson and have little talent for what these vocations demand. But I do believe that an understanding of history can be useful in gaining a better grasp of present-day technological, economic, and cultural developments—and, more urgently, in changing our political priorities. Politics is largely a matter of priorities. Should we cut the health-care budget and spend more on defense? Is our more pressing security threat terrorism or climate change? Do we focus on regaining a lost patch of ancestral territory or concentrate on creating a common economic zone with the neighbors? Priorities determine how citizens vote, what businesspeople are concerned about, and how politicians try to make a name for themselves. And priorities are often shaped by our understanding of history.

While so-called realists dismiss historical narratives as propaganda ploys deployed to advance state interests, in fact it is these narratives that define state interests in the first place. As we saw in our discussion of Clausewitz’s theory of war, there is no rational way to define ultimate goals. The state interests of Russia, Israel, Myanmar, or any other country can never be deduced from some mathematical or physical equation; they are always the supposed moral of a historical narrative.

It is therefore hardly surprising that politicians all over the world spend a lot of time and effort recounting historical narratives. The above-mentioned example of Vladimir Putin is hardly exceptional in this respect. In 2005 the UN secretary-general, Kofi Annan, had his first meeting with General Than Shwe, then the dictator of Myanmar. Annan was advised to speak first, so as to prevent the general from monopolizing the conversation, which was meant to last only twenty minutes. But Than Shwe struck first and held forth for nearly an hour on the history of Myanmar, hardly giving the UN secretary-general any chance to speak.[2] In May 2011 the Israeli prime minister, Benjamin Netanyahu, did something similar in the White House, when he met the U.S. president, Barack Obama. After Obama’s brief introductory remarks, Netanyahu subjected the president to a long lecture about the history of Israel and the Jewish people, treating Obama as if he were his student.[3] Cynics might argue that Than Shwe and Netanyahu hardly cared about the facts of history and were deliberately distorting them in order to achieve some political goal. But these political goals were themselves the product of deeply held convictions about history.

In my own conversations on AI with politicians, as well as tech entrepreneurs, history has often emerged as a central theme. Some of my interlocutors painted a rosy picture of history and were accordingly enthusiastic about AI. They argued that more information has always meant more knowledge and that by increasing our knowledge, every previous information revolution has greatly benefited humankind. Didn’t the print revolution lead to the scientific revolution? Didn’t newspapers and radio lead to the rise of modern democracy? The same, they said, would happen with AI. Others had a dimmer perspective, but nevertheless expressed hope that humankind would somehow muddle through the AI revolution, just as we muddled through the Industrial Revolution.

Neither view offered me much solace. For reasons explained in previous chapters, I find such historical comparisons to the print revolution and the Industrial Revolution distressing, especially coming from people in positions of power, whose historical vision is informing the decisions that shape our future. These historical comparisons underestimate both the unprecedented nature of the AI revolution and the negative aspects of previous revolutions. The immediate results of the print revolution included witch hunts and religious wars alongside scientific discoveries, while newspapers and radio were exploited by totalitarian regimes as well as by democracies. As for the Industrial Revolution, adapting to it involved catastrophic experiments such as imperialism and Nazism. If the AI revolution leads us to similar kinds of experiments, can we really be certain we will muddle through again?

My goal with this book is to provide a more accurate historical perspective on the AI revolution. This revolution is still in its infancy, and it is notoriously difficult to understand momentous developments in real time. It is hard, even now, to assess the meaning of events in the 2010s like AlphaGo’s victory or Facebook’s involvement in the anti-Rohingya campaign. The meaning of events of the early 2020s is even more obscure. Yet by expanding our horizons to look at how information networks developed over thousands of years, I believe it is possible to gain some insight into what we’re living through today.

One lesson is that the invention of new information technology is always a catalyst for major historical changes, because the most important role of information is to weave new networks rather than represent preexisting realities. By recording tax payments, clay tablets in ancient Mesopotamia helped forge the first city-states. By canonizing prophetic visions, holy books spread new kinds of religions. By swiftly disseminating the words of presidents and citizens, newspapers and telegraphs opened the door to both large-scale democracy and large-scale totalitarianism. The information thus recorded and distributed was sometimes true, often false, but it invariably created new connections between larger numbers of people.

We are used to giving political, ideological, and economic interpretations to historical revolutions such as the rise of the first Mesopotamian city-states, the spread of Christianity, the American Revolution, and the Bolshevik Revolution. But to gain a deeper understanding, we should also view them as revolutions in the way information flows. Christianity was obviously different from Greek polytheism in many of its myths and rites, yet it was also different in the importance it gave to a single holy book and the institution entrusted with interpreting it. Consequently, whereas each temple of Zeus was a separate entity, each Christian church became a node in a unified network.[4] Information flowed differently among the followers of Christ than among the worshippers of Zeus. Similarly, Stalin’s U.S.S.R. was a different kind of information network from Peter the Great’s empire. Stalin enacted many unprecedented economic policies, but what enabled him to do it was that he headed a totalitarian network in which the center accumulated enough information to micromanage the lives of hundreds of millions of people. Technology is rarely deterministic, and the same technology can be used in very different ways. But without the invention of technologies like the book and the telegraph, the Christian Church and the Stalinist apparatus would never have been possible.

This historical lesson should strongly encourage us to pay more attention to the AI revolution in our current political debates. The invention of AI is potentially more momentous than the invention of the telegraph, the printing press, or even writing, because AI is the first technology that is capable of making decisions and generating ideas by itself. Whereas printing presses and parchment scrolls offered new means for connecting people, AIs are full-fledged members of our information networks, possessing their own agency. In coming years, all networks—from armies to religions—will gain millions of new AI members, which will process data differently than humans do. These new members will make alien decisions and generate alien ideas—that is, decisions and ideas that are unlikely to occur to humans. The addition of many alien agents is bound to change the shape of armies, religions, markets, and nations. Entire political, economic, and social systems might collapse, and new ones will take their place. That’s why AI should be an urgent matter even to people who don’t care about technology and who think the most important political questions concern the survival of democracy or the fair distribution of wealth.

This book has juxtaposed the discussion of AI with the discussion of sacred canons like the Bible, because we are now at the critical moment of AI canonization. When church fathers like Bishop Athanasius decided to include 1 Timothy in the biblical dataset while excluding the Acts of Paul and Thecla, they shaped the world for millennia. Billions of Christians down to the twenty-first century have formed their views of the world based on the misogynist ideas of 1 Timothy rather than on the more tolerant attitude of Thecla. Even today it is difficult to reverse course, because the church fathers chose not to include any self-correcting mechanisms in the Bible. The present-day equivalents of Bishop Athanasius are the engineers who write the initial code for AI, and who choose the dataset on which the baby AI is trained. As AI grows in power and authority, and perhaps becomes a self-interpreting holy book, so the decisions made by present-day engineers could reverberate down the ages.

Studying history does more than just emphasize the importance of the AI revolution and of our decisions regarding AI. It also cautions us against two common but misleading approaches to information networks and information revolutions. On the one hand, we should beware of an overly naive and optimistic view. Information isn’t truth. Its main task is to connect rather than represent, and information networks throughout history have often privileged order over truth. Tax records, holy books, political manifestos, and secret police files can be extremely efficient in creating powerful states and churches, which hold a distorted view of the world and are prone to abuse their power. More information, ironically, can sometimes result in more witch hunts.

There is no reason to expect that AI would necessarily break the pattern and privilege truth. AI is not infallible. What little historical perspective we have gained from the alarming events in Myanmar, Brazil, and elsewhere over the past decade indicates that in the absence of strong self-correcting mechanisms AIs are more than capable of promoting distorted worldviews, enabling egregious abuses of power, and instigating terrifying new witch hunts.

On the other hand, we should also beware of swinging too far in the other direction and adopting an overly cynical view. Populists tell us that power is the only reality, that all human interactions are power struggles, and that information is merely a weapon we use to vanquish our enemies. This has never been the case, and there is no reason to think that AI will make it so in the future. While many information networks do privilege order over truth, no network can survive if it ignores truth completely. As for individual humans, we tend to be genuinely interested in truth rather than only in power. Even institutions like the Spanish Inquisition have had conscientious truth-seeking members like Alonso de Salazar Frías, who, instead of sending innocent people to their deaths, risked his life to remind us that witches are just intersubjective fictions. Most people don’t view themselves as one-dimensional creatures obsessed solely with power. Why, then, hold such a view about everyone else?

Refusing to reduce all human interactions to a zero-sum power struggle is crucial not just for gaining a fuller, more nuanced understanding of the past but also for having a more hopeful and constructive attitude about our future. If power were the only reality, then the only way to resolve conflicts would be through violence. Both populists and Marxists believe that people’s views are determined by their privileges, and that to change people’s views it is necessary to first take away their privileges—which usually requires force. However, since humans are interested in truth, there is a chance to resolve at least some conflicts peacefully, by talking to one another, acknowledging mistakes, embracing new ideas, and revising the stories we believe. That is the basic assumption of democratic networks and of scientific institutions. It has also been the basic motivation behind writing this book.

Extinction of the Smartest

Let’s return now to the question I posed at the beginning of this book: If we are so wise, why are we so self-destructive? We are at one and the same time both the smartest and the stupidest animals on earth. We are so smart that we can produce nuclear missiles and superintelligent algorithms. And we are so stupid that we go on producing these things even though we’re not sure we can control them, and even though losing control could destroy us. Why do we do it? Does something in our nature compel us to go down the path of self-destruction?

This book has argued that the fault isn’t with our nature but with our information networks. Due to the privileging of order over truth, human information networks have often produced a lot of power but little wisdom. For example, Nazi Germany created a highly efficient military machine and placed it at the service of an insane mythology. The result was misery on an enormous scale, the death of tens of millions of people, and eventually the destruction of Nazi Germany, too.

Of course, power is not in itself bad. When used wisely, it can be an instrument of benevolence. Modern civilization, for example, has acquired the power to prevent famines, contain epidemics, and mitigate natural disasters such as hurricanes and earthquakes. In general, the acquisition of power allows a network to deal more effectively with threats coming from outside, but simultaneously increases the dangers that the network poses to itself. It is particularly noteworthy that as a network becomes more powerful, imaginary terrors that exist only in the stories the network itself invents become potentially more dangerous than natural disasters. A modern state faced with drought or excessive rains can usually prevent this natural disaster from causing mass starvation among its citizens. But a modern state gripped by a man-made fantasy is capable of instigating man-made famines on an enormous scale, as happened in the U.S.S.R. in the early 1930s.

Accordingly, as a network becomes more powerful, its self-correcting mechanisms become more vital. If a Stone Age tribe or a Bronze Age city-state was incapable of identifying and correcting its own mistakes, the potential damage was limited. At most, one city was destroyed, and the survivors tried again elsewhere. Even if the ruler of an Iron Age empire, such as Tiberius or Nero, was gripped by paranoia or psychosis, the consequences were seldom catastrophic. The Roman Empire endured for centuries despite its fair share of mad emperors, and its eventual collapse did not bring about the end of human civilization. But if a Silicon Age superpower has weak or nonexistent self-correcting mechanisms, it could very well endanger the survival of our species, and countless other life-forms, too. In the era of AI, the whole of humankind finds itself in a situation analogous to that of Tiberius in his Capri villa. We command immense power and enjoy rare luxuries, but we are easily manipulated by our own creations, and by the time we wake up to the danger, it might be too late.

Unfortunately, despite the importance of self-correcting mechanisms for the long-term welfare of humanity, politicians might be tempted to weaken them. As we have seen throughout the book, though neutralizing self-correcting mechanisms has many downsides, it can nevertheless be a winning political strategy. It could deliver immense power into the hands of a twenty-first-century Stalin, and it would be foolhardy to assume that an AI-enhanced totalitarian regime would necessarily self-destruct before it could wreak havoc on human civilization. Just as the law of the jungle is a myth, so also is the idea that the arc of history bends toward justice. History is a radically open arc, one that can bend in many directions and reach very different destinations. Even if Homo sapiens destroys itself, the universe will keep going about its business as usual. It took four billion years for terrestrial evolution to produce a civilization of highly intelligent apes. If we are gone, and it takes evolution another hundred million years to produce a civilization of highly intelligent rats, it will. The universe is patient.

There is, though, an even worse scenario. As far as we know today, apes, rats, and the other organic animals of planet Earth may be the only conscious entities in the entire universe. We have now created a nonconscious but very powerful alien intelligence. If we mishandle it, AI might extinguish not only the human dominion on Earth but the light of consciousness itself, turning the universe into a realm of utter darkness. It is our responsibility to prevent this.

The good news is that if we eschew complacency and despair, we are capable of creating balanced information networks that will keep their own power in check. Doing so is not a matter of inventing another miracle technology or landing upon some brilliant idea that has somehow escaped all previous generations. Rather, to create wiser networks, we must abandon both the naive and the populist views of information, put aside our fantasies of infallibility, and commit ourselves to the hard and rather mundane work of building institutions with strong self-correcting mechanisms. That is perhaps the most important takeaway this book has to offer.

This wisdom is much older than human history. It is elemental, the foundation of organic life. The first organisms weren’t created by some infallible genius or god. They emerged through an intricate process of trial and error. Over four billion years, ever more complex mechanisms of mutation and self-correction led to the evolution of trees, dinosaurs, jungles, and eventually humans. Now we have summoned an alien inorganic intelligence that could escape our control and put in danger not just our own species but countless other life-forms. The decisions we all make in the coming years will determine whether summoning this alien intelligence proves to be a terminal error or the beginning of a hopeful new chapter in the evolution of life.