Chapter 11

 

The Silicon Curtain: Global Empire or Global Split?

The previous two chapters explored how different human societies might react to the rise of the new computer network. But we live in an interconnected world, where the decisions of one country can have a profound impact on others. Some of the gravest dangers posed by AI do not result from the internal dynamics of a single human society. Rather, they arise from dynamics involving many societies, which might lead to new arms races, new wars, and new imperial expansions.

Computers are not yet powerful enough to completely escape our control or destroy human civilization by themselves. As long as humanity stands united, we can build institutions that will control AI and will identify and correct algorithmic errors. Unfortunately, humanity has never been united. We have always been plagued by bad actors, as well as by disagreements between good actors. The rise of AI, then, poses an existential danger to humankind not because of the malevolence of computers but because of our own shortcomings.

For example, a paranoid dictator might hand unlimited power to a fallible AI, including even the power to launch nuclear strikes. If the dictator trusts his AI more than his defense minister, wouldn’t it make sense to have the AI supervise the country’s most powerful weapons? If the AI then makes an error, or begins to pursue an alien goal, the result could be catastrophic, and not just for that country.

Similarly, terrorists focused on events in one corner of the world might use AI to instigate a global pandemic. The terrorists might be more versed in some apocalyptic mythology than in the science of epidemiology, but they just need to set the goal, and all else will be done by their AI. The AI could synthesize a new pathogen, order it from commercial laboratories or print it in biological 3-D printers, and devise the best strategy to spread it around the world, via airports or food supply chains. What if the AI synthesizes a virus that is as deadly as Ebola, as contagious as COVID-19, and as slow acting as AIDS? By the time the first victims begin to die, and the world is alerted to the danger, most people on earth might have already been infected.[1]

As we have seen in previous chapters, human civilization is threatened not only by physical and biological weapons of mass destruction like atom bombs and viruses. Human civilization could also be destroyed by weapons of social mass destruction, like stories that undermine our social bonds. An AI developed in one country could be used to unleash a deluge of fake news, fake money, and fake humans so that people in numerous other countries lose the ability to trust anything or anyone.

Many societies—both democracies and dictatorships—may act responsibly to regulate such usages of AI, clamp down on bad actors, and restrain the dangerous ambitions of their own rulers and fanatics. But if even a handful of societies fail to do so, this could be enough to endanger the whole of humankind. Climate change can devastate even countries that adopt excellent environmental regulations, because it is a global rather than a national problem. AI, too, is a global problem. Countries would be naive to imagine that as long as they regulate AI wisely within their own borders, these regulations will protect them from the worst outcomes of the AI revolution. Accordingly, to understand the new computer politics, it is not enough to examine how discrete societies might react to AI. We also need to consider how AI might change relations between societies on a global level.

At present, the world is divided into about two hundred nation-states, most of which gained their independence only after 1945. They are not all equal. The list contains two superpowers, a handful of major powers, several blocs and alliances, and a lot of smaller fish. Still, even the tiniest states enjoy some leverage, as evidenced by their ability to play the superpowers against each other. In the early 2020s, for example, China and the United States competed for influence in the strategically important South Pacific region. Both superpowers courted island nations like Tonga, Tuvalu, Kiribati, and the Solomon Islands. The governments of these small nations—whose populations range from 11,000 (Tuvalu) to 740,000 (Solomon Islands)—had substantial leeway to decide which way to tack and were able to extract considerable concessions and aid.[2]

Other small states, such as Qatar, have established themselves as important players in the geopolitical arena. With only 300,000 citizens, Qatar is nevertheless pursuing ambitious foreign policy aims in the Middle East, is playing an outsized role in the global economy, and is home to Al Jazeera, the Arab world’s most influential TV network. One might argue that Qatar is able to punch well above its weight because it is the third-largest exporter of natural gas in the world. Yet in a different international setting, that would have made Qatar not an independent actor but the first course on the menu of any imperial conqueror. It is telling that, as of 2024, Qatar’s much bigger neighbors, and the world’s hegemonic powers, are letting the tiny Gulf state hold on to its fabulous riches. Many people describe the international system as a jungle. If so, it is a jungle in which tigers allow fat chickens to live in relative safety.

Qatar, Tonga, Tuvalu, Kiribati, and the Solomon Islands all indicate that we are living in a postimperial era. They gained their independence from the British Empire in the 1970s, as part of the final demise of the European imperial order. The leverage they now have in the international arena testifies that in the first quarter of the twenty-first century power is distributed between a relatively large number of players, rather than monopolized by a few empires.

How might the rise of the new computer network change the shape of international politics? Aside from apocalyptic scenarios such as a dictatorial AI launching a nuclear war, or a terrorist AI instigating a lethal pandemic, computers pose two main challenges to the current international system. First, since computers make it easier to concentrate information and power in a central hub, humanity could enter a new imperial era. A few empires (or perhaps a single empire) might bring the whole world under a much tighter grip than that of the British Empire or the Soviet Empire. Tonga, Tuvalu, and Qatar would be transformed from independent states into colonial possessions—just as they were fifty years ago.

Second, humanity could split along a new Silicon Curtain that would pass between rival digital empires. As each regime chooses its own answer to the AI alignment problem, to the dictator’s dilemma, and to other technological quandaries, each might create a separate and very different computer network. The various networks might then find it ever more difficult to interact, and so would the humans they control. Qataris living as part of an Iranian or Russian network, Tongans living as part of a Chinese network, and Tuvaluans living as part of an American network could come to have such different life experiences and worldviews that they would hardly be able to communicate or to agree on much.

If these developments indeed materialize, they could easily lead to their own apocalyptic outcome. Perhaps each empire can keep its nuclear weapons under human control and its lunatics away from bioweapons. But a human species divided into hostile camps that cannot understand each other stands little chance of avoiding devastating wars or preventing catastrophic climate change. A world of rival empires separated by an opaque Silicon Curtain would also be incapable of regulating the explosive power of AI.

The Rise of Digital Empires

In chapter 9 we touched briefly on the link between the Industrial Revolution and modern imperialism. It was not evident, at the beginning, that industrial technology would have much of an impact on empire building. When the first steam engines were put to use to pump water in British coal mines in the eighteenth century, no one foresaw that they would eventually power the most ambitious imperial projects in human history. When the Industrial Revolution subsequently gathered steam in the early nineteenth century, it was driven by private businesses, because governments and armies were relatively slow to appreciate its potential geopolitical impact. The world’s first commercial railway, for example, which opened in 1830 between Liverpool and Manchester, was built and operated by the privately owned Liverpool and Manchester Railway Company. The same was true of most other early railway lines in the U.K., the United States, France, Germany, and elsewhere. At that point, it wasn’t at all clear why governments or armies should get involved in such commercial enterprises.

By the middle of the nineteenth century, however, the governments and armed forces of the leading industrial powers had fully recognized the immense geopolitical potential of modern industrial technology. The need for raw materials and markets justified imperialism, while industrial technologies made imperial conquests easier. Steamships were crucial, for example, to the British victory over the Chinese in the Opium Wars, and railroads played a decisive role in the American expansion west and the Russian expansion east and south. Indeed, entire imperial projects were shaped around the construction of railroads such as the Trans-Siberian and Trans-Caspian Russian lines, the German dream of a Berlin-Baghdad railway, and the British dream of building a railway from Cairo to the Cape.[3]

Nevertheless, most polities didn’t join the burgeoning industrial arms race in time. Some lacked the capacity to do so, like the Melanesian chiefdoms of the Solomon Islands and the Al Thani tribe of Qatar. Others, like the Burmese Empire, the Ashanti Empire, and the Chinese Empire, might have had the capacity but lacked the will and foresight. Their rulers and inhabitants either didn’t follow developments in places like northwest England or didn’t think they had much to do with them. Why should the rice farmers of the Irrawaddy basin in Burma or the Yangtze basin in China concern themselves with the Liverpool–Manchester Railway? By the end of the nineteenth century, however, these rice farmers found themselves either conquered or indirectly exploited by the British Empire. Most other stragglers in the industrial race also ended up dominated by one industrial power or another. Could something similar happen with AI?

When the race to develop AI gathered steam in the early years of the twenty-first century, it too was initially spearheaded by private entrepreneurs in a handful of countries. They set their sights on centralizing the world’s flow of information. Google wanted to organize all the world’s information in one place. Amazon sought to centralize all the world’s shopping. Facebook wished to connect all the world’s social spheres. But concentrating all the world’s information is neither practical nor helpful unless one can centrally process that information. And in 2000, when Google’s search engine was taking its baby steps, when Amazon was a modest online bookshop, and when Mark Zuckerberg was in high school, the AI necessary to centrally process oceans of data was nowhere at hand. But some people bet it was just around the corner.

Kevin Kelly, the founding editor of Wired magazine, recounted how in 2002 he attended a small party at Google and struck up a conversation with Larry Page. “Larry, I still don’t get it. There are so many search companies. Web search, for free? Where does that get you?” Page explained that Google wasn’t focused on search at all. “We’re really making an AI,” he said.[4] Having lots of data makes it easier to create an AI. And AI can turn lots of data into lots of power.

By the 2010s, the dream was becoming a reality. Like every major historical revolution, the rise of AI was a gradual process involving numerous steps. And as with every revolution, a few of these steps came to be seen as turning points, much like the opening of the Liverpool–Manchester Railway. In the prolific literature on the story of AI, two events pop up again and again. The first occurred on September 30, 2012, when a convolutional neural network called AlexNet won the ImageNet Large Scale Visual Recognition Challenge.

If you have no idea what a convolutional neural network is, and if you have never heard of the ImageNet challenge, you are not alone. More than 99 percent of us are in the same situation, which is why AlexNet’s victory was hardly front-page news in 2012. But some humans did hear about AlexNet’s victory and decoded the writing on the wall.

They knew, for example, that ImageNet is a database of millions of annotated digital images. Did a website ever ask you to prove that you are not a robot by looking at a set of images and indicating which ones contain a car or a cat? The images you clicked were perhaps added to the ImageNet database. The same thing might also have happened to tagged images of your pet cat that you uploaded online. The ImageNet Large Scale Visual Recognition Challenge tests various algorithms on how well they are able to identify the annotated images in the database. Can they correctly identify the cats? When humans are asked to do it, out of one hundred cat images we correctly identify ninety-five as cats. In 2010 the best algorithms had a success rate of only 72 percent. In 2011 the algorithmic success rate crawled up to 75 percent. In 2012 the AlexNet algorithm won the challenge and stunned the still minuscule community of AI experts by achieving a success rate of 85 percent. While this improvement may not sound like much to laypersons, it demonstrated to the experts the potential for rapid progress in certain AI domains. By 2015 a Microsoft algorithm achieved 96 percent accuracy, surpassing the human ability to identify cat images.
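
(For readers curious what such a test looks like in practice, the following is a minimal illustrative sketch in Python, not AlexNet’s actual code: a network pretrained on ImageNet ranks its guesses about what a single photograph shows. The choice of model (ResNet-50), the torchvision library, and the file name cat.jpg are assumptions made purely for the example.)

    # A minimal sketch of ImageNet-style image recognition.
    # Assumes the torch and torchvision libraries are installed
    # and that a local photo named "cat.jpg" exists.
    import torch
    from PIL import Image
    from torchvision import models

    # Load a network pretrained on the ImageNet database.
    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights).eval()

    # Prepare the photo in the format the network expects.
    preprocess = weights.transforms()
    image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)

    # Rank the network's guesses across ImageNet's 1,000 categories.
    with torch.no_grad():
        probabilities = model(image).softmax(dim=1)[0]
    top5 = probabilities.topk(5)
    for prob, idx in zip(top5.values, top5.indices):
        print(f"{weights.meta['categories'][int(idx)]}: {prob.item():.1%}")

Run on a typical cat photo, such a network will usually place a cat breed at the top of its list with high confidence; that is precisely the ability whose rapid improvement the percentages above describe.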

In 2016, The Economist published a piece titled “From Not Working to Neural Networking” that asked, “How has artificial intelligence, associated with hubris and disappointment since its earliest days, suddenly become the hottest field in technology?” It pointed to AlexNet’s victory as the moment when “people started to pay attention, not just within the AI community but across the technology industry as a whole.” The article was illustrated with an image of a robotic hand holding up a photo of a cat.[5]

All those cat images that tech giants had been harvesting from across the world, without paying a penny to either users or tax collectors, turned out to be incredibly valuable. The AI race was on, and the competitors were running on cat images. At the same time that AlexNet was preparing for the ImageNet challenge, Google too was training its AI on cat images, and even created a dedicated cat-image-generating AI called the Meow Generator.[6] The technology developed by recognizing cute kittens was later deployed for more predatory purposes. For example, Israel relied on it to create the Red Wolf, Blue Wolf, and Wolf Pack apps used by Israeli soldiers for facial recognition of Palestinians in the Occupied Territories.[7] The ability to recognize cat images also led to the algorithms Iran uses to automatically recognize unveiled women and enforce its hijab laws. As explained in chapter 8, massive amounts of data are required to train machine-learning algorithms. Without millions of cat images uploaded and annotated for free by people across the world, it would not have been possible to train the AlexNet algorithm or the Meow Generator, which in turn served as the template for subsequent AIs with far-reaching economic, political, and military potential.[8]

Just as in the early nineteenth century the effort to build railways was pioneered by private entrepreneurs, so in the early twenty-first century private corporations were the initial main competitors in the AI race. The executives of Google, Facebook, Alibaba, and Baidu saw the value of recognizing cat images before the presidents and generals did. The second eureka moment, when the presidents and generals caught on to what was happening, occurred in mid-March 2016. It was the aforementioned victory of Google’s AlphaGo over Lee Sedol. Whereas AlexNet’s achievement was largely ignored by politicians, AlphaGo’s triumph sent shock waves through government offices, especially in East Asia. In China and neighboring countries, go is a cultural treasure, considered ideal training for aspiring strategists and policy makers. In March 2016, or so the mythology of AI would have it, the Chinese government realized that the age of AI had begun.[9]

It is little wonder that the Chinese government was probably the first to understand the full importance of what was happening. In the nineteenth century, China was late to appreciate the potential of the Industrial Revolution and was slow to adopt inventions like railroads and steamships. It consequently suffered what the Chinese call “the century of humiliations.” After centuries as the world’s greatest superpower, China was brought to its knees by its failure to adopt modern industrial technology. It was repeatedly defeated in wars, partially conquered by foreigners, and thoroughly exploited by the powers that did understand railroads and steamships. The Chinese vowed never again to miss the train.

In 2017, China’s government released its “New Generation Artificial Intelligence Plan,” which announced that “by 2030, China’s AI theories, technologies, and application should achieve world-leading levels, making China the world’s primary AI innovation center.”[10] In the following years China poured enormous resources into AI so that by the early 2020s it was already leading the world in several AI-related fields and catching up with the United States in others.[11]

Of course, the Chinese government wasn’t the only one that woke up to the importance of AI. On September 1, 2017, President Putin of Russia declared, “Artificial intelligence is the future, not only for Russia, but for all humankind…. Whoever becomes the leader in this sphere will become the ruler of the world.” In January 2018, Prime Minister Modi of India concurred that “the one who control [sic] the data will control the world.”[12] In February 2019, President Trump signed an executive order on AI, saying that “the age of AI has arrived” and that “continued American leadership in Artificial Intelligence is of paramount importance to maintaining the economic and national security of the United States.”[13] The United States at the time was already the leader in the AI race, thanks largely to the efforts of visionary private entrepreneurs. But what began as a commercial competition between corporations was turning into a match between governments, or perhaps more accurately, into a race between competing teams, each made up of one government and several corporations. The prize for the winner? World domination.

Data Colonialism

In the sixteenth century, when Spanish, Portuguese, and Dutch conquistadors were building the first global empires in history, they came with sailing ships, horses, and gunpowder. When the British, Russians, and Japanese made their bids for hegemony in the nineteenth and twentieth centuries, they relied on steamships, locomotives, and machine guns. In the twenty-first century, to dominate a colony, you no longer need to send in the gunboats. You need to take out the data. A few corporations or governments harvesting the world’s data could transform the rest of the globe into data colonies—territories they control not with overt military force but with information.[14]

Imagine a situation—in twenty years, say—when somebody in Beijing or San Francisco possesses the entire personal history of every politician, journalist, colonel, and CEO in your country: every text they ever sent, every web search they ever made, every illness they suffered, every sexual encounter they enjoyed, every joke they told, every bribe they took. Would you still be living in an independent country, or would you now be living in a data colony? What happens when your country finds itself utterly dependent on digital infrastructures and AI-powered systems over which it has no effective control?

Such a situation can lead to a new kind of data colonialism in which control of data is used to dominate faraway colonies. Mastery of AI and data could also give the new empires control of people’s attention. As we have already discussed, in the 2010s American social media giants like Facebook and YouTube upended the politics of distant countries like Myanmar and Brazil in pursuit of profit. Future digital empires may do something similar in pursuit of political interests.

Fears of psychological warfare, data colonialism, and loss of control over their cyberspace have already led many countries to block what they see as dangerous apps. China has banned Facebook, YouTube, and many other Western social media apps and websites. Russia has banned almost all Western social media apps as well as some Chinese ones. In 2020, India banned TikTok, WeChat, and numerous other Chinese apps on the grounds that they were “prejudicial to sovereignty and integrity of India, defense of India, security of state and public order.”[15] The United States has been debating whether to ban TikTok—concerned that the app might be serving Chinese interests—and as of 2023 it is illegal to use it on the devices of almost all federal employees, state employees, and government contractors.[16] Lawmakers in the U.K., New Zealand, and other countries have also expressed concerns over TikTok.[17] Numerous other governments, from Iran to Ethiopia, have blocked various apps like Facebook, Twitter, YouTube, Telegram, and Instagram.

Data colonialism could also manifest itself in the spread of social credit systems. What might happen, for example, if a dominant player in the global digital economy decides to establish a social credit system that harvests data anywhere it can and scores not only its own nationals but people throughout the world? Foreigners couldn’t just shrug off their score, because it might affect them in numerous ways, from buying flight tickets to applying for visas, scholarships, and jobs. Just as tourists use the global scores given by foreign corporations like Tripadvisor and Airbnb to evaluate restaurants and vacation homes even in their own country, and just as people throughout the world use the U.S. dollar for commercial transactions, so people everywhere might begin to use a Chinese or an American social credit score for local social interactions.

Becoming a data colony will have economic as well as political and social consequences. In the nineteenth and twentieth centuries, if you were a colony of an industrial power like Belgium or Britain, it usually meant that you provided raw materials, while the cutting-edge industries that made the biggest profits remained in the imperial hub. Egypt exported cotton to Britain and imported high-end textiles. Malaya provided rubber for tires; Coventry made the cars.[18]

Something analogous is likely to happen with data colonialism. The raw material for the AI industry is data. To produce AI that recognizes images, you need cat photos. To produce the trendiest fashion, you need data on fashion trends. To produce autonomous vehicles, you need data about traffic patterns and car accidents. To produce health-care AI, you need data about genes and medical conditions. In a new imperial information economy, raw data will be harvested throughout the world and will flow to the imperial hub. There the cutting-edge technology will be developed, producing unbeatable algorithms that know how to identify cats, predict fashion trends, drive autonomous vehicles, and diagnose diseases. These algorithms will then be exported back to the data colonies. Data from Egypt and Malaysia might make a corporation in San Francisco or Beijing rich, while people in Cairo and Kuala Lumpur remain poor, because neither the profits nor the power is distributed back.

The nature of the new information economy might make the imbalance between imperial hub and exploited colony worse than ever. In ancient times land—rather than information—was the most important economic asset. This precluded the overconcentration of all wealth and power in a single hub. As long as land was paramount, considerable wealth and power always remained in the hands of provincial landowners. A Roman emperor, for example, could put down one provincial revolt after another, but on the day after decapitating the last rebel chief, he had no choice but to appoint a new set of provincial landowners who might again challenge the central power. In the Roman Empire, although Italy was the seat of political power, the richest provinces were in the eastern Mediterranean. It was impossible to transport the fertile fields of the Nile valley to the Italian Peninsula.[19] Eventually the emperors abandoned the city of Rome to the barbarians and moved the seat of political power to the rich east, to Constantinople.

During the Industrial Revolution machines became more important than land. Factories, mines, railroad lines, and electrical power stations became the most valuable assets. It was somewhat easier to concentrate these kinds of assets in one place. The British Empire could centralize industrial production in its home islands, extract raw materials from India, Egypt, and Iraq, and sell them finished goods made in Birmingham or Belfast. Unlike in the Roman Empire, Britain was the seat of both political and economic power. But physics and geology still put natural limits on this concentration of wealth and power. The British couldn’t move every cotton mill from Calcutta to Manchester, or shift the oil wells from Kirkuk to Yorkshire.

Information is different. Unlike cotton and oil, digital data can be sent from Malaysia or Egypt to Beijing or San Francisco at almost the speed of light. And unlike land, oil fields, or textile factories, algorithms don’t take up much space. Consequently, unlike industrial power, the world’s algorithmic power can be concentrated in a single hub. Engineers in a single country might write the code and control the keys for all the crucial algorithms that run the entire world.

Indeed, AI makes it possible to concentrate in one place even the decisive assets of some traditional industries, like textiles. In the nineteenth century, to control the textile industry meant to control sprawling cotton fields and huge mechanical production lines. In the twenty-first century, the most important asset of the textile industry is information rather than cotton or machinery. To beat the competitors, a garment producer needs information about the likes and dislikes of customers and the ability to predict or manufacture the next fashions. By controlling this type of information, high-tech giants like Amazon and Alibaba can monopolize even a very traditional industry like textiles. In 2021, Amazon became the United States’ biggest single clothing retailer.[20]

Moreover, as AI, robots, and 3-D printers automate textile production, millions of workers might lose their jobs, upending national economies and the global balance of power. What will happen to the economies and politics of Pakistan and Bangladesh, for example, when automation makes it cheaper to produce textiles in Europe? Consider that at present the textile sector provides employment to 40 percent of Pakistan’s total labor force and accounts for 84 percent of Bangladesh’s export earnings.[21] As noted in chapter 9, while automation might make millions of textile workers redundant, it will probably create many new jobs, too. For instance, there might be a huge demand for coders and data analysts. But turning an unemployed factory hand into a data analyst demands a substantial up-front investment in retraining. Where would Pakistan and Bangladesh get the money to do that?

AI and automation therefore pose a particular challenge to poorer developing countries. In an AI-driven economy, the digital leaders claim the bulk of the gains and could use their wealth to retrain their workforce and profit even more. Meanwhile, the value of unskilled laborers in left-behind countries will decline, and they will not have the resources to retrain their workforce, causing them to fall even further behind. The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin.[22] According to the global accounting firm PricewaterhouseCoopers, AI is expected to add $15.7 trillion to the global economy by 2030. But if current trends continue, it is projected that China and North America—the two leading AI superpowers—will together take home 70 percent of that money.[23]

From Web to Cocoon

These economic and geopolitical dynamics could divide the world between two digital empires. During the Cold War, the Iron Curtain was in many places literally made of metal: barbed wire separated one country from another. Now the world is increasingly divided by the Silicon Curtain. The Silicon Curtain is made of code, and it passes through every smartphone, computer, and server in the world. The code on your smartphone determines on which side of the Silicon Curtain you live, which algorithms run your life, who controls your attention, and where your data flows.

It is becoming difficult to access information across the Silicon Curtain, say between China and the United States, or between Russia and the EU. Moreover, the two sides are increasingly run on different digital networks, using different computer codes. Each sphere obeys different regulations and serves different purposes. In China, the most important aim of new digital technology is to strengthen the state and serve government policies. While private enterprises are given a certain amount of autonomy in developing and deploying AIs, their economic activities are ultimately subservient to the government’s political goals. These political goals also justify a relatively high level of surveillance, both online and offline. This means, for example, that though Chinese citizens and authorities do care about people’s privacy, China is already far ahead of the United States and other Western countries in developing and deploying social credit systems that encompass the whole of people’s lives.[24]

In the United States, the government plays a more limited role. Private enterprises lead the development and deployment of AI, and the ultimate goal of many new AI systems is to enrich the tech giants rather than to strengthen the American state or the current administration. Indeed, in many cases governmental policies are themselves shaped by powerful business interests. But the U.S. system does offer greater protection for citizens’ privacy. While American corporations aggressively gather information on people’s online activities, they are much more restricted in surveilling people’s offline lives. There is also widespread rejection of the ideas behind all-embracing social credit systems.[25]

These political, cultural, and regulatory differences mean that each sphere is using different software. In China you cannot use Google or Facebook, and you cannot access Wikipedia. In the United States few people use WeChat, Baidu, or Tencent. More important, the spheres aren’t mirror images of each other. It is not that the Chinese and Americans develop local versions of the same apps. Baidu isn’t the Chinese Google. Alibaba isn’t the Chinese Amazon. They have different goals, different digital architectures, and different impacts on people’s lives.[26] These differences influence much of the world, since most countries rely on Chinese and American software rather than on local technology.

Each sphere also uses different hardware like smartphones and computers. The United States pressures its allies and clients to avoid Chinese hardware, such as Huawei’s 5G infrastructure.[27] The Trump administration blocked an attempt by the Singaporean corporation Broadcom to buy the leading American producer of computer chips, Qualcomm. The administration feared that foreigners might insert back doors into the chips or prevent the U.S. government from inserting its own back doors there.[28] In 2022, the Biden administration placed strict limits on trade in high-performance computing chips necessary for the development of AI. U.S. companies were forbidden to export such chips to China, or to provide China with the means to manufacture or repair them. The restrictions have subsequently been tightened further, and the ban was expanded to include other nations such as Russia and Iran.[29] While in the short term this hampers China in the AI race, in the long term it will push China to develop a completely separate digital sphere that will be distinct from the American digital sphere even in its smallest building blocks.[30]

The two digital spheres may drift further and further apart. Chinese software would talk only with Chinese hardware and Chinese infrastructure, and the same would happen on the other side of the Silicon Curtain. Since digital code influences human behavior, and human behavior in turn shapes digital code, the two sides may well be moving along different trajectories that will make them more and more different not just in their technology but in their cultural values, social norms, and political structures. After generations of convergence, humanity could find itself at a crucial point of divergence.[31] For centuries, new information technologies fueled the process of globalization and brought people all over the world into closer contact. Paradoxically, information technology today is so powerful it can potentially split humanity by enclosing different people in separate information cocoons, ending the idea of a single shared human reality. While the web has been our main metaphor in recent decades, the future might belong to cocoons.

The Global Mind-Body Split

The division into separate information cocoons could lead not just to economic rivalries and international tensions but also to the development of very different cultures, ideologies, and identities. Guessing future cultural and ideological developments is usually a fool’s errand. It is far more difficult than predicting economic and geopolitical developments. How many Romans or Jews in the days of Tiberius could have anticipated that a splinter Jewish sect would eventually take over the Roman Empire and that the emperors would abandon Rome’s old gods to worship an executed Jewish rabbi?

It would have been even more difficult to foresee the directions in which various Christian sects would develop and the momentous impact of their ideas and conflicts on everything from politics to sexuality. When Jesus was asked about paying taxes to Tiberius’s government and answered, “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s” (Matthew 22:21), nobody could imagine the impact his response would have on the separation of church and state in the American republic two millennia later. And when Saint Paul wrote to the Christians in Rome, “I myself in my mind am a slave to God’s law, but in my sinful flesh a slave to the law of sin” (Romans 7:25), who could have foreseen the repercussions this would have on schools of thought ranging from Cartesian philosophy to queer theory?

Despite these difficulties, it is important to try to imagine future cultural developments, in order to alert ourselves to the fact that the AI revolution and the formation of rival digital spheres are likely to change more than just our jobs and political structures. The following paragraphs contain some admittedly ambitious speculation, so please bear in mind that my goal is not to accurately foretell cultural developments but merely to draw attention to the likelihood that profound cultural shifts and conflicts await us.

One possible development with far-reaching consequences is that different digital cocoons might adopt incompatible approaches to the most fundamental questions of human identity. For thousands of years, many religious and cultural conflicts—for example, between rival Christian sects, between Hindus and Buddhists, and between Platonists and Aristotelians—were fueled by disagreements about the mind-body problem. Are humans a physical body, or a nonphysical mind, or perhaps a mind trapped inside a body? In the twenty-first century, the computer network might supercharge the mind-body problem and turn it into a cause for major personal, ideological, and political conflicts.

To appreciate the political ramifications of the mind-body problem, let’s briefly revisit the history of Christianity. Many of the earliest Christian sects, influenced by Jewish thinking, believed in the Old Testament idea that humans are embodied beings and that the body plays a crucial role in human identity. The book of Genesis said God created humans as physical bodies, and almost all books of the Old Testament assume that humans can exist only as physical bodies. With a few possible exceptions, the Old Testament doesn’t mention the possibility of a bodiless existence after death, in heaven or hell. When the ancient Jews fantasized about salvation, they imagined it to mean an earthly kingdom of material bodies. In the time of Jesus, many Jews believed that when the Messiah finally comes, the bodies of the dead would come back to life, here on earth. The Kingdom of God, established by the Messiah, was supposed to be a material kingdom, with trees and stones and flesh-and-blood bodies.[32]

This was also the view of Jesus himself and the first Christians. Jesus promised his followers that soon the Kingdom of God would be built here on earth and they would inhabit it in their material bodies. When Jesus died without fulfilling his promise, his early followers came to believe that he was resurrected in the flesh and that when the Kingdom of God finally materialized on earth, they too would be resurrected in the flesh. The church father Tertullian (160–240 CE) wrote that “the flesh is the very condition on which salvation hinges,” and the catechism of the Catholic Church, citing the doctrines adopted at the Second Council of Lyon in 1274, states, “We believe in God who is creator of the flesh; we believe in the Word made flesh in order to redeem the flesh; we believe in the resurrection of the flesh, the fulfillment of both the creation and the redemption of the flesh…. We believe in the true resurrection of this flesh that we now possess.”[33]

Despite such seemingly unequivocal statements, we saw that Saint Paul already had his doubts about the flesh, and by the fourth century CE, under Greek, Manichaean, and Persian influences, some Christians had drifted toward a dualistic approach. They came to think of humans as consisting of a good immaterial soul trapped inside an evil material body. They didn’t fantasize about being resurrected in the flesh. Just the opposite. Having been released by death from its abominable material prison, why would the pure soul ever want to get back in? Christians accordingly began to believe that after death the soul is liberated from the body and exists forever in an immaterial place completely beyond the physical realm—which is the standard belief among Christians today, notwithstanding what Tertullian and the Second Council of Lyon said.[34]

But Christianity couldn’t completely abandon the old Jewish view that humans are embodied beings. After all, Christ appeared on earth in the flesh. His body was nailed to the cross, on which he experienced excruciating pain. For two thousand years, Christian sects therefore fought each other—sometimes with words, sometimes with swords—over the exact relations between soul and body. The fiercest arguments focused on Christ’s own body. Was he material? Was he purely spiritual? Did he perhaps have a nonbinary nature, being both human and divine at the same time?

The different approaches to the mind-body problem influenced how people treated their own bodies. Saints, hermits, and monks made breathtaking experiments in pushing the human body to its limits. Just as Christ allowed his body to be tortured on the cross, so these “athletes of Christ” allowed lions and bears to rip them apart while their souls rejoiced in divine ecstasy. They wore hair shirts, fasted for weeks, or stood for years on a pillar—like the famous Simeon who allegedly stood for about forty years on top of a pillar near Aleppo.[35]

Other Christians took the opposite approach, believing that the body didn’t matter at all. The only thing that mattered was faith. This idea was taken to extremes by Protestants like Martin Luther, who formulated the doctrine of sola fide: only faith. After living as a monk for about ten years, fasting and torturing his body in various ways, Luther despaired of these bodily exercises. He reasoned that no bodily self-torments could force God to redeem him. Indeed, thinking he could win his own salvation by torturing his body was the sin of pride. Luther therefore disrobed, married a former nun, and told his followers that to be good Christians, the only thing they needed was to have complete faith in Christ.[36]

These ancient theological debates about mind and body may seem utterly irrelevant to the AI revolution, but they have in fact been resurrected by twenty-first-century technologies. What is the relationship between our physical body and our online identities and avatars? What is the relation between the offline world and cyberspace? Suppose I spend most of my waking hours sitting in my room in front of a screen, playing online games, forming virtual relationships, and even working remotely. I hardly venture out even to eat. I just order takeout. If you are like ancient Jews and the first Christians, you would pity me and conclude that I must be living in a delusion, losing touch with the reality of physical spaces and flesh-and-blood bodies. But if your thinking is closer to that of Luther and many later Christians, you might think I am liberated. By shifting most of my activities and relationships online, I have released myself from the limited organic world of debilitating gravity and corrupt bodies and can enjoy the unlimited possibilities of a digital world, which is potentially liberated from the laws of biology and even physics. I am free to roam a much vaster and more exciting space and to explore new aspects of my identity.

An increasingly important question is, Can people adopt any virtual identity they like, or should their identity be constrained by their biological body? If we follow the Lutheran position of sola fide, the biological body isn’t of much importance. To adopt a certain online identity, the only thing that matters is what you believe. This debate can have far-reaching consequences not just for human identity but for our attitude to the world as a whole. A society that understands identities in terms of biological bodies should also care more about material infrastructure like sewage pipes and about the ecosystem that sustains our bodies. It will see the online world as an auxiliary of the offline world that can serve various useful purposes but can never become the central arena of our lives. Its aim would be to create an ideal physical and biological realm—the Kingdom of God on earth. In contrast, a society that downplays biological bodies and focuses on online identities may well seek to create an immersive Kingdom of God in cyberspace while discounting the fate of mere material things like sewage pipes and rain forests.

This debate could shape attitudes not only toward organisms but also toward digital entities. As long as society defines identity by focusing on physical bodies, it is unlikely to view AIs as persons. But if society gives less importance to physical bodies, then even AIs that lack any corporeal manifestations may be accepted as legal persons enjoying various rights.

Throughout history, diverse cultures have given diverse answers to the mind-body problem. A twenty-first-century controversy about the mind-body problem could result in cultural and political splits more consequential even than the split between Jews and Christians or between Catholics and Protestants. What happens, for example, if the American sphere discounts the body, defines humans by their online identity, recognizes AIs as persons, and downplays the importance of the ecosystem, whereas the Chinese sphere adopts opposite positions? Current disagreements about violations of human rights or adherence to ecological standards will look minuscule in comparison. The Thirty Years’ War—arguably the most devastating war in European history—was fought at least in part because Catholics and Protestants couldn’t agree on doctrines like sola fide and on whether Christ was divine, human, or nonbinary. Might future conflicts start because of an argument about AI rights and the nonbinary nature of avatars?

As noted, these are all wild speculations, and in all likelihood actual cultures and ideologies will develop in different—and perhaps even wilder—directions. But it is probable that within a few decades the computer network will cultivate new human and nonhuman identities that make little sense to us. And if the world is divided into two rival digital cocoons, the identities of entities in one cocoon might be unintelligible to the inhabitants of the other.

From Code War to Hot War

While China and the United States are currently the front-runners in the AI race, they are not alone. Other countries or blocs, such as the EU, India, Brazil, and Russia, may try to create their own digital spheres, each influenced by different political, cultural, and religious traditions.[37] Instead of being divided between just two global empires, the world might be divided among a dozen empires. It is unclear whether this will somewhat alleviate or only exacerbate the imperial competition.

The more the new empires compete against one another, the greater the danger of armed conflict. The Cold War between the United States and the U.S.S.R. never escalated into a direct military confrontation largely thanks to the doctrine of mutually assured destruction. But the danger of escalation in the age of AI is bigger, because cyber warfare is inherently different from nuclear warfare.

First, cyber weapons are much more versatile than nuclear bombs. Cyber weapons can bring down a country’s electric grid, but they can also be used to destroy a secret research facility, jam an enemy sensor, inflame a political scandal, manipulate elections, or hack a single smartphone. And they can do all that stealthily. They don’t announce their presence with a mushroom cloud and a storm of fire, nor do they leave a visible trail from launchpad to target. Consequently, at times it is hard to know if an attack even occurred or who launched it. If a database is hacked or sensitive equipment is destroyed, it’s hard to be sure whom to blame. The temptation to start a limited cyberwar is therefore big, and so is the temptation to escalate it. Rival countries like Israel and Iran or the United States and Russia have been trading cyber blows for years, in an undeclared but escalating war.[38] This is becoming the new global norm, amplifying international tensions and pushing countries to cross one red line after another.

A second crucial difference concerns predictability. The Cold War was like a hyperrational chess game, and the certainty of destruction in the event of nuclear conflict was so great that the desire to start a war was correspondingly small. Cyber warfare lacks this certainty. Nobody knows for sure where each side has planted its logic bombs, Trojan horses, and malware. Nobody can be certain whether their own weapons would actually work when called upon. Would Chinese missiles fire when the order is given, or might the Americans have hacked them or the chain of command? Would American aircraft carriers function as expected, or would they perhaps shut down mysteriously or sail around in circles?[39]

Such uncertainty undermines the doctrine of mutually assured destruction. One side might convince itself—rightly or wrongly—that it can launch a successful first strike and avoid massive retaliation. Even worse, if one side thinks it has such an opportunity, the temptation to launch a first strike could become irresistible, because one never knows how long the window of opportunity will remain open. Game theory posits that the most dangerous situation in an arms race is when one side feels it has an advantage but that this advantage is slipping away.[40]

Even if humanity avoids the worst-case scenario of global war, the rise of new digital empires could still endanger the freedom and prosperity of billions of people. The industrial empires of the nineteenth and twentieth centuries exploited and repressed their colonies, and it would be foolhardy to expect the new digital empires to behave much better. Moreover, as noted earlier, if the world is divided into rival empires, humanity is unlikely to cooperate effectively to overcome the ecological crisis or to regulate AI and other disruptive technologies like bioengineering.

The Global Bond

Of course, no matter whether the world is divided between a few digital empires, remains a more diverse community of two hundred nation-states, or is split along altogether different and unforeseen lines, cooperation is always an option. Among humans, the precondition for cooperation isn’t similarity; it is the ability to exchange information. As long as we are able to converse, we might find some shared story that can bring us closer. This, after all, is what made Homo sapiens the dominant species on the planet.

Just as different and even rival families can cooperate within a tribal network, and competing tribes can cooperate within a national network, so opposing nations and empires can cooperate within a global network. The stories that make such cooperation possible do not eliminate our differences; rather, they enable us to identify shared experiences and interests, which offer a common framework for thought and action.

A large part of what nevertheless makes global cooperation difficult is the misguided notion that it requires abolishing all cultural, social, and political differences. Populist politicians often argue that if the international community agrees on a common story and on universal norms and values, this will destroy the independence and unique traditions of their own nation.[41] This position was unabashedly distilled in 2015 by Marine Le Pen—leader of France’s National Front party—in an election speech in which she declared, “We have entered a new two-partyism. A two-partyism between two mutually exclusive conceptions that will from now on structure our political life. The cleavage no longer separates left and right, but globalists and patriots.”[42] In August 2020, President Trump described his guiding ethos thus: “We have rejected globalism and embraced patriotism.”[43]

Luckily, this binary position is mistaken in its basic assumption. Global cooperation and patriotism are not mutually exclusive. For patriotism isn’t about hating foreigners. It is about loving our compatriots. And there are many situations when, in order to take care of our compatriots, we need to cooperate with foreigners. COVID-19 provided us with one obvious example. Pandemics are global events, and without global cooperation it is hard to contain them, let alone prevent them. When a new virus or a mutant pathogen appears in one country, it puts all other countries in danger. Conversely, the biggest advantage of humans over pathogens is that we can cooperate in ways that pathogens cannot. Doctors in Germany and Brazil can alert one another to new dangers, give one another good advice, and work together to discover better treatments.

If German scientists invent a vaccine against some new disease, how should Brazilians react to this German achievement? One option is to reject the foreign vaccine and wait until Brazilian scientists develop a Brazilian vaccine. That, however, would be not just foolish; it would be anti-patriotic. Brazilian patriots should want to use any available vaccine to help their compatriots, no matter where the vaccine was developed. In this situation, cooperating with foreigners is the patriotic thing to do. The threat of losing control of AIs is an analogous situation in which patriotism and global cooperation must go together. An out-of-control AI, just like an out-of-control virus, puts in danger humans in every nation. No human collective—whether a tribe, a nation, or the entire species—stands to benefit from letting power shift from humans to algorithms.

Contrary to what populists argue, globalism doesn’t mean establishing a global empire, abandoning national loyalties, or opening borders to unlimited immigration. In fact, global cooperation means two far more modest things: first, a commitment to some global rules. These rules don’t deny the uniqueness of each nation and the loyalty people should owe their nation. They just regulate the relations between nations. A good model is the World Cup. The World Cup is a competition between nations, and people often show fierce loyalty to their national team. At the same time, the World Cup is an amazing display of global agreement. Brazil cannot play football against Germany unless Brazilians and Germans first agree on the same set of rules for the game. That’s globalism in action.

The second principle of globalism is that sometimes—not always, but sometimes—it is necessary to prioritize the long-term interests of all humans over the short-term interests of a few. For example, in the World Cup, all national teams agree not to use performance-enhancing drugs, because everybody realizes that if they go down that path, the World Cup would eventually devolve into a competition between biochemists. In other fields where technology is a game changer, we should similarly strive to balance national and global interests. Nations will obviously continue to compete in the development of new technology, but sometimes they should agree to limit the development and deployment of dangerous technologies like autonomous weapons and manipulative algorithms—not purely out of altruism, but for their own self-preservation.

The Human Choice

Forging and keeping international agreements on AI will require major changes in the way the international system functions. While we have experience in regulating dangerous technologies like nuclear and biological weapons, the regulation of AI will demand unprecedented levels of trust and self-discipline, for two reasons. First, it is easier to hide an illicit AI lab than an illicit nuclear reactor. Second, AIs have many more dual civilian-military uses than nuclear bombs. Consequently, despite signing an agreement that bans autonomous weapon systems, a country could build such weapons secretly, or camouflage them as civilian products. For example, it might develop fully autonomous drones for delivering mail and spraying fields with pesticides that, with a few minor modifications, could also deliver bombs and spray people with poison. Governments and corporations will therefore find it more difficult to trust that their rivals are really abiding by the agreed regulations—and to resist the temptation to break the rules themselves.[44] Can humans develop the necessary levels of trust and self-discipline? Do such changes have any precedent in history?

Many people are skeptical of the human capacity to change, and in particular of the human ability to renounce violence and forge stronger global bonds. For example, “realist” thinkers like Hans Morgenthau and John Mearsheimer have argued that an all-out competition for power is the inescapable condition of the international system. Mearsheimer explains that his theory “sees great powers as concerned mainly with figuring out how to survive in a world where there is no agency to protect them from each other” and that “they quickly realize that power is the key to their survival.” Mearsheimer then asks “how much power states want” and answers that all states want as much power as they can get, “because the international system creates powerful incentives for states to look for opportunities to gain power at the expense of rivals.” He concludes, “A state’s ultimate goal is to be the hegemon in the system.”[45]

This grim view of international relations is akin to the populist and Marxist views of human relations, in that they all see humans as interested only in power. And they are all founded upon a deeper philosophical theory of human nature, which the primatologist Frans de Waal termed “veneer theory.” It argues that at heart humans are Stone Age hunters who cannot but see the world as a jungle where the strong prey upon the weak and where might makes right. For millennia, the theory goes, humans have tried to camouflage this unchanging reality under a thin and mutable veneer of myths and rituals, but we have never really broken free from the law of the jungle. Indeed, our myths and rituals are themselves a weapon used by the jungle’s top dogs to deceive and trap their inferiors. Those who don’t realize this are dangerously naive and will fall prey to some ruthless predator.[46]

There are reasons to think, however, that “realists” like Mearsheimer have a selective view of historical reality and that the law of the jungle is itself a myth. As de Waal and many other biologists documented in numerous studies, real jungles—unlike the one in our imagination—are full of cooperation, symbiosis, and altruism displayed by countless animals, plants, fungi, and even bacteria. Eighty percent of all land plants, for example, rely on symbiotic relationships with fungi, and almost 90 percent of vascular plant families enjoy symbiotic relationships with microorganisms. If organisms in the rain forests of Amazonia, Africa, or India abandoned cooperation in favor of an all-out competition for hegemony, the rain forests and all their inhabitants would quickly die. That’s the law of the jungle.[47]

As for Stone Age humans, they were gatherers as well as hunters, and there is no firm evidence that they had irrepressible warlike tendencies. While there are plenty of speculations, the first unambiguous evidence for organized warfare appears in the archaeological record only about thirteen thousand years ago, at the site of Jebel Sahaba in the Nile valley.[48] Even after that date, the record of war is variable rather than constant. Some periods were exceptionally violent, whereas others were relatively peaceful. The clearest pattern we observe in the long-term history of humanity isn’t the constancy of conflict, but rather the increasing scale of cooperation. A hundred thousand years ago, Sapiens could cooperate only at the level of bands. Over the millennia, we have found ways to create communities of strangers, first on the level of tribes and eventually on the level of religions, trade networks, and states. Realists should note that states are not the fundamental particles of human reality, but rather the product of arduous processes of building trust and cooperation. If humans were interested only in power, they could never have created states in the first place. Sure, conflicts have always remained a possibility—both between and within states—but they have never been an inescapable destiny.

War’s intensity depends not on an immutable human nature but on shifting technological, economic, and cultural factors. As these factors change, so does war, as was clearly demonstrated in the post-1945 era. During that period, the development of nuclear technology greatly increased the potential price of war. From the 1950s onward it became clear to the superpowers that even if they could somehow win an all-out nuclear exchange, their victory would likely be a suicidal achievement, involving the sacrifice of most of their population.

Simultaneously, the ongoing shift from a material-based economy to a knowledge-based economy decreased the potential gains of war. While it has remained feasible to conquer rice paddies and gold mines, by the late twentieth century these were no longer the main sources of economic wealth. The new leading industries, like the semiconductor sector, came to be based on technical skills and organizational know-how that could not be acquired by military conquest. Accordingly, some of the greatest economic miracles of the post-1945 era were achieved by the defeated powers of Germany, Italy, and Japan, and by countries like Sweden and Singapore that eschewed military conflicts and imperial conquests.

Finally, the second half of the twentieth century also witnessed a profound cultural transformation, with the decline of age-old militaristic ideals. Artists increasingly focused on depicting the senseless horrors of combat rather than on glorifying its architects, and politicians came to power dreaming more of domestic reforms than of foreign conquests. Due to these technological, economic, and cultural changes, in the decades following the end of World War II most governments stopped seeing wars of aggression as an appealing tool to advance their interests, and most nations stopped fantasizing about conquering and destroying their neighbors. While civil wars and insurgencies have remained commonplace, the post-1945 world has seen a significant decline in full-scale wars between states, and most notably in direct armed conflicts between great powers.[49]

Numerous statistics attest to the decline of war in the post-1945 era, but perhaps the clearest evidence is found in state budgets. For most of recorded history, the military was the number one item on the budget of every empire, sultanate, kingdom, and republic. Governments spent little on health care and education, because most of their resources were consumed by paying soldiers, constructing walls, and building warships. When the bureaucrat Chen Xiang examined the annual budget of the Chinese Song dynasty for the year 1065, he found that out of sixty million minqian (a unit of currency), fifty million (83 percent) went to the military. Another official, Cai Xiang, wrote, “If [we] split [all the property] under Heaven into six shares, five shares are spent on the military, and one share is spent on temple offerings and state expenses. How can the country not be poor and the people not in difficulty?”[50]

The same situation prevailed in many other polities, from ancient times to the modern era. The Roman Empire spent about 50–75 percent of its budget on the military,[51] and the figure was about 60 percent in the late-seventeenth-century Ottoman Empire.[52] Between 1685 and 1813 the share of the military in British government expenditure averaged 75 percent.[53] In France, military expenditure between 1630 and 1659 varied between 89 percent and 93 percent of the budget, remained above 30 percent for much of the eighteenth century, and dropped to a low of 25 percent in 1788 only because of the financial crisis that led to the French Revolution. In Prussia, from 1711 to 1800 the military share of the budget never fell below 75 percent and occasionally reached as high as 91 percent.[54] During the relatively peaceful years of 1870–1913, the military ate up an average of 30 percent of the state budgets of the major powers of Europe, as well as Japan and the United States, while smaller powers like Sweden were spending even more.[55] When war broke out in 1914, military budgets skyrocketed. During its involvement in World War I, France devoted an average of 77 percent of its budget to the military; the figure was 91 percent in Germany, 48 percent in Russia, 49 percent in the U.K., and 47 percent in the United States. During World War II, the U.K. figure rose to 69 percent and the U.S. figure to 71 percent.[56] Even during the détente years of the 1970s, Soviet military expenditure still amounted to 32.5 percent of the budget.[57]

State budgets in more recent decades make for far more hopeful reading material than any pacifist tract ever composed. In the early twenty-first century, worldwide average government expenditure on the military has been only around 7 percent of the budget, and even the United States, the dominant superpower, has spent only around 13 percent of its annual budget to maintain its military hegemony.[58] Since most people no longer live in terror of external invasion, governments can invest far more money in welfare, education, and health care. Worldwide average expenditure on health care in the early twenty-first century has been about 10 percent of the government budget, or about 1.4 times the defense budget.[59] For many people in the 2010s, the fact that the health-care budget was bigger than the military budget was unremarkable. But it was the result of a major change in human behavior, one that would have sounded impossible to most previous generations.

The decline of war didn’t result from a divine miracle or from a metamorphosis in the laws of nature. It resulted from humans changing their own laws, myths, and institutions and making better decisions. Unfortunately, the fact that this change has stemmed from human choice also means that it is reversible. Technology, economics, and culture are ever changing. In the early 2020s, more leaders are again dreaming of martial glory, armed conflicts are on the rise,[60] and military budgets are increasing.[61]

A critical threshold was crossed in early 2022. Russia had already destabilized the global order by mounting a limited invasion of Ukraine in 2014 and occupying Crimea and parts of eastern Ukraine. But on February 24, 2022, Vladimir Putin launched an all-out assault aimed at conquering the whole of Ukraine and extinguishing Ukrainian nationhood. To prepare and sustain this attack, Russia increased its military spending far beyond the global average of 7 percent of the state budget. Exact figures are difficult to determine, because many aspects of the Russian military budget are shrouded in secrecy, but the best estimates put the share somewhere in the vicinity of 30 percent, and it may be even higher.[62] The Russian onslaught has in turn forced not only Ukraine but also many other European nations to increase their own military budgets.[63] The reemergence of militaristic cultures in places like Russia, and the development of unprecedented cyber weapons and autonomous armaments throughout the world, could result in a new era of war, worse than anything we have seen before.

The decisions leaders like Putin make on matters of war and peace are shaped by their understanding of history. This means that just as overly optimistic views of history could be dangerous illusions, overly pessimistic views could become destructive self-fulfilling prophecies. Prior to his all-out 2022 attack on Ukraine, Putin had often expressed his historical conviction that Russia is trapped in an endless struggle with foreign enemies, and that the Ukrainian nation is a fabrication by these enemies. In July 2021, he published a fifty-three-hundred-word essay titled “On the Historical Unity of Russians and Ukrainians” in which he denied the existence of Ukraine as a nation and argued that foreign powers have repeatedly tried to weaken Russia by fostering Ukrainian separatism. While professional historians reject these claims, Putin seems to genuinely believe in this historical narrative.[64] These convictions led him in 2022 to prioritize the conquest of Ukraine over other policy goals, such as providing Russian citizens with better health care or spearheading a global initiative to regulate AI.[65]

If leaders like Putin believe that humanity is trapped in an unforgiving dog-eat-dog world, that no profound change is possible in this sorry state of affairs, and that the relative peace of the late twentieth century and early twenty-first century was an illusion, then the only choice remaining is whether to play the part of predator or prey. Given such a choice, most leaders would prefer to go down in history as predators and add their names to the grim list of conquerors that unfortunate pupils are condemned to memorize for their history exams. These leaders should be reminded, however, that in the era of AI the alpha predator is likely to be AI.

Perhaps, though, we have more choices available to us. I cannot predict what decisions people will make in the coming years, but as a historian I do believe in the possibility of change. One of the chief lessons of history is that many of the things that we consider natural and eternal are, in fact, man-made and mutable. Accepting that conflict is not inevitable, however, should not make us complacent. Just the opposite. It places a heavy responsibility on all of us to make good choices. It implies that if human civilization is consumed by conflict, we cannot blame it on any law of nature or any alien technology. It also implies that if we make the effort, we can create a better world. This isn’t naïveté; it’s realism. Every old thing was once new. The only constant of history is change.