Alan Turing and Gordon Moore could never have predicted, let alone altered, the rise of social media, memes, Wikipedia, or cyberattacks. Decades after their invention, the architects of the atomic bomb could no more stop a nuclear war than Henry Ford could stop a car accident. Technology’s unavoidable challenge is that its makers quickly lose control over the path their inventions take once introduced to the world.
Technology exists in a complex, dynamic system (the real world), where second-, third-, and nth-order consequences ripple out unpredictably. What on paper looks flawless can behave differently out in the wild, especially when copied and further adapted downstream. What people actually do with your invention, however well intentioned, can never be guaranteed. Thomas Edison invented the phonograph so people could record their thoughts for posterity and to help the blind. He was horrified when most people just wanted to play music. Alfred Nobel intended his explosives to be used only in mining and railway construction.
Gutenberg just wanted to make money printing Bibles. Yet his press catalyzed the Scientific Revolution and the Reformation, and so became the greatest threat to the Catholic Church since its establishment. Fridge makers didn’t aim to create a hole in the ozone layer with chlorofluorocarbons (CFCs), just as the creators of the internal combustion and jet engines had no thought of melting the ice caps. In fact early enthusiasts for automobiles argued for their environmental benefits: engines would rid the streets of mountains of horse dung that spread dirt and disease across urban areas. They had no conception of global warming.
Understanding technology is, in part, about trying to understand its unintended consequences, to predict not just positive spillovers but “revenge effects.” Quite simply, any technology is capable of going wrong, often in ways that directly contradict its original purpose. Think of the way that prescription opioids have created dependence, or how the overuse of antibiotics renders them less effective, or how the proliferation of satellites and debris known as “space junk” imperils spaceflight.
As technology proliferates, more people can use it, adapt it, shape it however they like, in chains of causality beyond any individual’s comprehension. As the power of our tools grows exponentially and as access to them rapidly increases, so do the potential harms, an unfolding labyrinth of consequences that no one can fully predict or forestall. One day someone is writing equations on a blackboard or fiddling with a prototype in the garage, work seemingly irrelevant to the wider world. Within decades, it has produced existential questions for humanity. As we have built systems of increasing power, this aspect of technology has felt more and more pressing to me. How do we guarantee that this new wave of technologies does more good than harm?
Technology’s problem here is a containment problem. If the unpredictable spread of consequences cannot be eliminated, it might be curtailed. Containment is the overarching ability to control, limit, and, if need be, close down technologies at any stage of their development or deployment. It means, in some circumstances, the ability to stop a technology from proliferating in the first place, checking the ripple of unintended consequences (both good and bad).
The more powerful a technology, the more ingrained it is in every facet of life and society. Thus, technology’s problems have a tendency to escalate in parallel with its capabilities, and so the need for containment grows more acute over time.
Does any of this get technologists off the hook? Not at all; more than anyone else it is up to us to face it. We might not be able to control the final end points of our work or its long-term effects, but that is no reason to abdicate responsibility. Decisions technologists and societies make at the source can still shape outcomes. Just because consequences are difficult to predict doesn’t mean we shouldn’t try.
In most cases, containment is about meaningful control, the capability to stop a use case, change a research direction, or deny access to harmful actors. It means preserving the ability to steer waves to ensure their impact reflects our values, helps us flourish as a species, and does not introduce significant harms that outweigh their benefits.
This chapter shows just how challenging and rare that actually is.
For many, the word “containment” brings echoes of the Cold War. The American diplomat George F. Kennan argued that “the main element of any United States policy toward the Soviet Union must be that of a long-term, patient but firm and vigilant containment of Russian expansive tendencies.” Viewing the world as a constantly shifting field of struggle, Western nations, Kennan contended, must monitor and counter Soviet power wherever they found it, safely containing the Red menace and its ideological tentacles across all dimensions.
While this reading of containment offers some useful lessons, it’s inadequate for our purposes. Technology is not an adversary; it’s a basic property of human society. Containing technology needs to be a much more fundamental program, a balance of power not between competing actors but between humans and our tools. It’s a necessary prerequisite for the survival of our species over the next century. Containment encompasses regulation, better technical safety, new governance and ownership models, and new modes of accountability and transparency, all as necessary (but not sufficient) precursors to safer technology. It’s an overarching lock uniting cutting-edge engineering, ethical values, and government regulation. Containment shouldn’t be seen as the final answer to all technology’s problems; it is rather the first, critical step, a foundation on which the future is built.
Think of containment, then, as a set of interlinked and mutually reinforcing technical, cultural, legal, and political mechanisms for maintaining societal control of technology during a time of exponential change; an architecture up to the task of containing what would have once been centuries or millennia of technological change happening now in a matter of years or even months, where consequences ricochet around the world in seconds.
Technical containment refers to what happens in a lab or an R&D facility. In AI, for example, it means air gaps, sandboxes, simulations, off switches, hard built-in safety and security measures—protocols for verifying a system’s safety and integrity and taking it offline if needed. Then come the values and cultures around creation and dissemination that support boundaries, layers of governance, acceptance of limits, a vigilance for harms and unintended consequences. Last come national and international legal mechanisms: regulations passed by national legislatures and treaties operating through the UN and other global bodies. Technology is always deeply caught up in the laws and customs, the norms and habits, the structures of power and knowledge of any given society; each must be addressed. We’ll return to this in more detail in part 4.
For now, you may be wondering, have we ever really attempted this, tried to contain a wave?
As the printing press roared across Europe in the fifteenth century, the Ottoman Empire had a rather different response. It tried to ban it. Unhappy at the prospect of unregulated mass production of knowledge and culture, the sultan considered the press an alien, “Western” innovation. Despite rivaling cities like London, Paris, and Rome in population, Istanbul didn’t possess a sanctioned printing press until 1727, nearly three centuries after its invention. For a long time historians saw the Ottoman Empire’s resistance as a classic example of early techno-nationalism, a conscious, backward-looking rejection of modernity.
But it’s more complicated than that. Under the empire’s rules, only Arabic characters were banned, not printing altogether. More than some fundamentally antitechnology posture, the ban came down to the huge expense and complexity of running Arabic-language printers; only the sultan could afford to fund printing, and successive sultans had little interest in it. So the Ottoman press stalled; for a time the empire said no thank you. But eventually, just like everywhere else, printing became a fact of life in the Ottoman Empire, in its descendant countries, and indeed across the world. States, it seems, might say no, but as things get cheaper and more widely used, they can’t say no forever.
In hindsight, waves might appear smooth and inevitable. But there is an almost infinite array of small, local, and often arbitrary factors that affect a technology’s trajectory. Indeed, no one should imagine diffusion is easy. It can be costly, slow, and risky, or require wrenching changes in behavior feasible over only decades or lifetimes. It has to fight existing interests, established knowledge, and those who jealously hold both. Fear and suspicion of anything new and different are endemic. Everyone from guilds of skilled craftsmen to suspicious monarchs has reason to push back. Luddites, the groups that violently rejected industrial techniques, are not the exception to the arrival of new technologies; they are the norm.
In medieval times Pope Urban II wanted to ban the crossbow. Queen Elizabeth I nixed a new kind of knitting machine in the late sixteenth century on the grounds it might upset the guilds. Guilds harassed and smashed new kinds of looms and lathes in Nuremberg, Danzig, the Netherlands, and England. John Kay, the inventor of the flying shuttle, which made weaving more efficient and was one of the key technologies of the Industrial Revolution, was so scared of violent reprisals he fled from England to France.
People throughout history have attempted to resist new technologies because they felt threatened and worried their livelihoods and way of life would be destroyed. Fighting, as they saw it, for the future of their families, they would, if necessary, physically destroy what was coming. If peaceful measures failed, Luddites wanted to take apart the wave of industrial machinery.
Under the seventeenth-century Tokugawa shogunate, Japan shut out the world—and by extension its barbarous inventions—for nearly three hundred years. Like most societies throughout history, it was distrustful of the new, the different, and the disruptive. Similarly, China dismissed a British diplomatic mission and its offer of Western tech in the late eighteenth century, with the Qianlong emperor arguing, “Our Celestial Empire possesses all things in prolific abundance and lacks no product within its borders. There is therefore no need to import the manufactures of outside barbarians.”
None of it worked. The crossbow survived until it was usurped by guns. Queen Elizabeth’s knitting machine returned, centuries later, in the supercharged form of large-scale mechanical looms to spark the Industrial Revolution. China and Japan are today among the most technologically advanced and globally integrated places on earth. The Luddites were no more successful at stopping new industrial technologies than horse owners and carriage makers were at preventing cars. Where there is demand, technology always breaks out, finds traction, builds users.
Once established, waves are almost impossible to stop. As the Ottomans discovered when it came to printing, resistance tends to be ground down with the passage of time. Technology’s nature is to spread, no matter the barriers.
Plenty of technologies come and go. You don’t see too many penny-farthings or Segways, or listen to many cassettes or minidiscs. But that doesn’t mean personal mobility and music aren’t ubiquitous; older technologies have just been replaced by new, more efficient forms. We don’t ride on steam trains or write on typewriters, but their ghostly presence lives on in their successors, like Shinkansens and MacBooks.
Think of how, as parts of successive waves, fire, then candles and oil lamps, gave way to gas lamps and then to electric lightbulbs, and now LED lights, and the totality of artificial light increased even as the underlying technologies changed. New technologies supersede multiple predecessors. Just as electricity did the work of candles and steam engines alike, so smartphones replaced satnavs, cameras, PDAs, computers, and telephones (and invented entirely new classes of experience: apps). As technologies let you do more, for less, their appeal only grows, along with their adoption.
Imagine trying to build a contemporary society without electricity or running water or medicines. Even if you could, how would you convince anyone it was worthwhile, desirable, a decent trade? Few societies have ever successfully removed themselves from the technological frontier; doing so usually either is part of a collapse or precipitates one. There is no realistic way to pull back.
Inventions cannot be uninvented or blocked indefinitely, knowledge unlearned or stopped from spreading. Scattered historical examples give little reason to think it might happen again. The Library of Alexandria was left to wither, and it finally burned down, swaths of classical learning lost forever. But eventually the wisdom of antiquity was rediscovered and revalued. Aided by a lack of modern communications tools, China kept the secret of silk making under wraps for centuries, but it got out in the end thanks to two determined Nestorian monks in 552 CE. Technologies are ideas, and ideas cannot be eliminated.
Technology is an eternally dangling carrot, constantly promising more, better, easier, cheaper. Our appetite for invention is insatiable. The seeming inevitability of waves comes not from the absence of resistance but from demand overwhelming it. People have often said no, desired contained technology for a plethora of reasons. It’s just never been enough. It’s not that the containment problem hasn’t been recognized in history; it’s just that it has never been solved.
Are there exceptions? Or does the wave always break everywhere, in the end?
On September 11, 1933, the physicist Ernest Rutherford argued to the British Association for the Advancement of Science in Leicester that “anyone who says that with the means at present at our disposal and with our present knowledge we can utilize atomic energy is talking moonshine.” Reading an account of Rutherford’s argument at a hotel in London, the Hungarian émigré Leo Szilard mulled it over at breakfast. He went for a walk. The day after Rutherford called it moonshine, Szilard conceptualized a nuclear chain reaction.
The first nuclear explosion came just twelve years later. On July 16, 1945, under the auspices of the Manhattan Project, the U.S. Army detonated a device code-named Trinity in the New Mexico desert. Weeks later a Boeing B-29 Superfortress, the Enola Gay, dropped a device code-named Little Boy containing sixty-four kilograms of uranium-235 over the city of Hiroshima, killing 140,000 people. In an instant, the world had changed. Yet from there, against the wider pattern of history, nuclear weapons did not endlessly proliferate.
Nuclear weapons have been detonated only twice in wartime. To date only nine countries have acquired them. Indeed, South Africa relinquished the technology altogether in 1989. As far as we know, no non-state actors have acquired nuclear weapons, and today the total number of warheads stands at around ten thousand, frighteningly large, but lower than Cold War highs, when that figure hovered at more than sixty thousand.
So what happened? Nuclear weapons clearly confer a significant strategic advantage. At the end of World War II many unsurprisingly assumed they would proliferate widely. After the successful development of early nuclear bombs, the United States and the Soviet Union had been on a path of developing ever more destructive weapons, like thermonuclear hydrogen bombs. The biggest explosion ever recorded was a test of an H-bomb called the Tsar Bomba. Detonated over a remote archipelago in the Barents Sea in 1961, the explosion created a three-mile fireball and a mushroom cloud fifty-nine miles wide. The blast was ten times more powerful than the combined total of all the conventional explosives deployed in World War II. Its scale frightened everyone. In this respect it might have actually helped. Both the United States and the Soviet Union stepped back from ramping up their weapons in the face of their sheer, horrific power.
That nuclear technology remained contained was no accident; it was a conscious nonproliferation policy of the nuclear powers, helped by the fact that nuclear weapons are incredibly complex and expensive to produce.
Some of the early proposals for achieving containment were admirably high-minded. In 1946 the Acheson-Lilienthal Report suggested the UN create an “Atomic Development Authority” with explicit worldwide control of all nuclear activities. That of course didn’t happen, but a series of international treaties nonetheless followed. Although countries like China and France stood aside, the Partial Test Ban Treaty was signed in 1963, reducing the drumbeat of test explosions that spurred on competition.
A turning point came in 1968 with the Treaty on the Non-proliferation of Nuclear Weapons, a landmark moment when non-nuclear states explicitly agreed never to develop nuclear weapons. The world had come together to decisively arrest the proliferation of nuclear weapons to new states. From the first test, their destructive power was clear. Popular revulsion at the possibility of a thermonuclear apocalypse was a powerful motivator for signing the treaty. But these weapons have also been contained by cold calculation. Mutually assured destruction hemmed in possessors, since it soon became clear that using them in anger is a quick way of ensuring your own destruction.
They’re also eye-wateringly expensive and difficult to manufacture. Not only do they require rare and difficult-to-handle materials like enriched uranium-235, but maintaining and ultimately decommissioning them is also challenging. Lack of widespread demand has meant little pressure to reduce costs and grow access; they are not subject to the classic cost curves of modern consumer technology. These were never going to spread like transistors or flat-screen TVs; producing fissile material is not like rolling aluminum. Nonproliferation is in no small part a function of the fact that building a nuke is one of the largest, most expensive, and most complicated endeavors a state can embark on.
It would be wrong to say they have not proliferated, when even now so many nuclear weapons sit on submarines patrolling the seas or are on hair-trigger alert in great silos. But to a remarkable degree, and thanks to a huge spectrum of technical and political efforts over decades, they have avoided technology’s deep underlying pattern.
And yet, even though nuclear capability has been largely contained, a partial exception, it’s not a reassuring story. Nuclear history is still a chilling succession of accidents, near misses, and misunderstandings. Since the first tests in 1945, hundreds of incidents merit serious concern, from relatively minor process problems to terrifying escalations that could have (and still might) trigger destruction on a truly horrific scale.
Failure could come in a variety of guises. What if the software goes wrong? After all, it was only in 2019 that U.S. command and control systems were upgraded from 1970s hardware and eight-inch floppy disks. The world’s most sophisticated and destructive weapons arsenal ran on technology so antiquated it would be unrecognizable (and unusable) to most people alive today.
Accidents are legion. In 1961, for example, a B-52 in the skies above North Carolina developed a fuel leak. The crew ejected from the ailing aircraft, leaving it and its payload to plummet to the ground. In the process, a live hydrogen bomb’s safety switch flicked to “armed” as it crashed into a field. Of its four safety mechanisms, just one was left in place, and an explosion was miraculously avoided. In 2003 the British Ministry of Defence disclosed more than 110 near misses and accidents in the history of its nuclear weapons program. Even the Kremlin, hardly a model of openness, has admitted to 15 serious nuclear accidents between 2000 and 2010.
Tiny hardware malfunctions can produce outsized risks. In 1980 a single faulty computer chip costing forty-six cents almost triggered a major nuclear incident over the Pacific. And in perhaps the most well-known case, nuclear catastrophe was only avoided during the Cuban missile crisis when one man, the Soviet flotilla commander Vasili Arkhipov, refused to authorize the launch of a nuclear torpedo. The two other officers on the submarine, convinced they were under attack, had brought the world within a split second of full-scale nuclear war.
Worries remain abundant. Nuclear sabers rattled anew in the wake of Russia’s invasion of Ukraine. North Korea went to extraordinary lengths to acquire nuclear weapons and appears to have sold ballistic missiles to and co-developed nuclear technologies with countries like Iran and Syria. China, India, and Pakistan are ramping up arsenals and have opaque safety records. Everyone from Turkey and Saudi Arabia to Japan and South Korea has at least expressed interest in nuclear weapons. Brazil and Argentina even had uranium enrichment programs.
To date no terrorist group is known to have acquired either a conventional warhead or sufficient radiological material for a “dirty” bomb. But methods to construct such a device are hardly secret. A rogue insider could credibly produce one. The engineer A. Q. Khan helped Pakistan develop nuclear weapons by stealing centrifuge blueprints and fleeing the Netherlands.
Plenty of nuclear material is unaccounted for, from hospitals, businesses, militaries, even recently from Chernobyl. In 2018, plutonium and cesium were stolen from a Department of Energy official’s car in San Antonio, Texas, while they slept in a nearby hotel. The nightmare scenario is a loose warhead, stolen in transit or even somehow missed in an accounting exercise. It may sound fanciful, but the United States has in fact lost at least three nuclear weapons.
Nuclear is an exception to the unstoppable spread of technology, but only because of the tremendous costs and complexity involved, the decades of tough multilateral effort, the fear-inducing enormity of its lethal potential, and pure luck. To some extent it might, then, have bucked the wider trend, but it also shows how the game has changed. Given the potential consequences, given its looming existential reach, even partial, relative containment is woefully insufficient.
The worrying truth of this fearsome technology is that humanity has tried to say no and only partially succeeded. Nuclear weapons are among the most contained technologies in history, and yet the containment problem—in its hardest, most literal sense—even here remains acutely unsolved.
Glimmers of containment are rare and often flawed. They include moratoriums on biological and chemical weapons; the Montreal Protocol of 1987, which phased out substances damaging the atmosphere’s ozone layer, particularly CFCs; the EU’s ban on genetically modified organisms in foodstuffs; and a self-organized moratorium on human gene editing. Perhaps the most ambitious containment agenda is decarbonization, measures like the Paris Agreement, which aims to limit global temperature rise to two degrees Celsius. In essence, it represents a worldwide attempt to say no to a suite of foundational technologies.
We’ll take a closer look at these modern examples of containment in part 4. For now, though, it’s important to note that, while instructive, none of these achievements are particularly robust. Chemical weapons were recently used in Syria. Such weapons are only a relatively narrow application of constantly developing fields. Despite the moratoriums, the world’s chemical and biological capabilities grow every year; should anyone perceive the need to weaponize them, it would be easier than ever.
While the EU bans GMOs in the food supply, they’re ubiquitous in other parts of the world. As we will see, the science behind gene editing is charging forward. The call for a global moratorium on human gene editing has stalled. Luckily, cheaper and more effective alternatives were readily available to supplant CFCs, which in any case were hardly a general-purpose technology. Without them, modeling suggests the ozone layer might have collapsed by the 2040s, creating an additional 1.7 degrees Celsius of warming in the twenty-first century. In general these containment efforts are limited to highly specific technologies, some in narrow jurisdictions, all with only a shaky purchase.
While the Paris Agreement aims to go beyond these limitations, will it work? We have to hope so. But it’s worth pointing out that this containment comes only in the wake of significant damage and an existential-level threat growing more obvious by the day. It is coming late, and its success is far from guaranteed.
This is not containment proper. None of these efforts represent the full-scale arresting of a wave of general-purpose technology, although, as we will see later, they do offer important pointers for the future. But these examples do not remotely provide as much comfort as we’d hope—or need.
There are always good reasons to resist or curtail technology. Although its history is one of enabling people to do more, increasing capabilities, driving improvements in well-being, it’s not a one-sided story: Technology creates more lethal and destructive weapons as well as better tools. It produces losers, eliminates some jobs and ways of life, and creates harm up to the planetary, existential scale of climate change. New technologies can be unsettling and destabilizing, alien and invasive. Technology causes problems, and always has.
And yet none of that seems to matter. It might take time, but the pattern is unmistakable: proliferating, cheaper, and more efficient technologies, wave upon wave of them. As long as a technology is useful, desirable, affordable, accessible, and unsurpassed, it survives and spreads and those features compound. While technology doesn’t tell us when, or how, or whether to walk through the doors it opens, sooner or later we do seem to walk through them. There is no necessary relationship here, just a persistent empirical linkage throughout history.
Everything about a given technology is contingent, path dependent; it rests on a mind-bendingly intricate set of circumstances, chance happenings, myriad specific local, cultural, institutional, and economic factors. Zoom in and lucky meetings, random events, quirks of character, and tiny acts of creation—and sometimes pushback—loom large. But zoom out and what do we see? A more tectonic process, where it’s a question of not if these powers are harnessed but when, in what form, and by whom.
Given its extreme rarity, containment has unsurprisingly dropped out of the vocabularies of technologists and policy makers. We have collectively resigned ourselves to the story of this chapter because it is so ingrained. By and large we’ve let the waves wash over us, managing on an uncoordinated, ad hoc basis, accepting that capabilities spreading inevitably and uncontrollably is, whether welcomed or reviled, a fact of life.
In the space of around a hundred years, successive waves took humanity from an era of candles and horse carts to one of power stations and space stations. Something similar is going to occur in the next thirty years. In the coming decades, a new wave of technology will force us to confront the most foundational questions our species has ever faced. Do we want to edit our genomes so that some of us can have children with immunity to certain diseases, or with more intelligence, or with the potential to live longer? Are we committed to holding on to our place at the top of the evolutionary pyramid, or will we allow the emergence of AI systems that are smarter and more capable than we can ever be? What are the unintended consequences of exploring questions like these?
They illustrate a key truth about Homo technologicus in the twenty-first century. For most of history, the challenge of technology lay in creating and unleashing its power. That has now flipped: the challenge of technology today is about containing its unleashed power, ensuring it continues to serve us and our planet.
That challenge is about to decisively escalate.