CHAPTER 10

FRAGILITY AMPLIFIERS

NATIONAL EMERGENCY 2.0:
UNCONTAINED ASYMMETRY IN ACTION

 

On the morning of May 12, 2017, Britain’s National Health Service (NHS) ground to a halt. Thousands of its facilities nationwide suddenly saw their IT systems freeze up. In hospitals, staff were locked out of crucial medical equipment like MRI scanners and unable to access patient records. Thousands of scheduled procedures, ranging from cancer appointments to elective surgeries, had to be canceled. Panicked care teams reverted to manual stopgaps, using paper notes and personal phones. The Royal London Hospital shuttered its emergency department, with patients left lying on gurneys outside the operating theaters.

The NHS had been hit by a ransomware attack. It was called WannaCry, and its scale was immense. Ransomware works by compromising a system to encrypt and thus lock down access to key files and capabilities. Cyberattackers typically demand a ransom in exchange for liberating a captive system.

The NHS wasn’t WannaCry’s only target. Exploiting a vulnerability in older Microsoft systems, hackers had found a way to grind swaths of the digital world to a halt, including organizations like Deutsche Bahn, Telefónica, FedEx, Hitachi, even the Chinese Ministry of Public Security. WannaCry tricked some users into opening an email, which released a “worm” replicating and transporting itself to infect a quarter of a million computers across 150 countries in just one day. For a few hours after the attack much of the digital world teetered, held for ransom by a distant, faceless assailant. The ensuing damage cost up to $8 billion, but the implications were even graver. The WannaCry attack exposed just how vulnerable institutions whose operation we take for granted were to sophisticated cyberattacks.

In the end, the NHS—and the world—caught a lucky break. A twenty-two-year-old British hacker called Marcus Hutchins stumbled on a kill switch. Going through the malware’s code, he saw an odd-looking domain name. Guessing this might be part of the worm’s command and control structure, and seeing the domain was unregistered, Hutchins bought it for just $10.69, allowing him to control the virus while Microsoft pushed out updates closing the vulnerability.

Perhaps the most extraordinary thing about WannaCry is where it came from. WannaCry was built using technology created by the U.S. National Security Agency (NSA). An elite NSA unit called the Office of Tailored Access Operations had developed a cyberattack exploit called EternalBlue. In the words of one NSA staffer these were “the keys to the kingdom,” tools designed to “undermine the security of a lot of major government and corporate networks both here and abroad.”

How had this formidable technology, developed by one of the most technically sophisticated organizations on the planet, been obtained by a group of hackers? As Microsoft pointed out at the time, “An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen.” Unlike Tomahawk missiles, the NSA’s digital weapons could quietly slip onto a thumb drive. The hackers who stole the technology, a group known as the Shadow Brokers, put EternalBlue up for sale. From there it soon ended up in the hands of North Korean hackers, probably the state-sponsored Bureau 121 cyber unit. They then launched it on the world.

Despite speedy patches, the fallout from the EternalBlue leak wasn’t over. In June 2017 a new version of the weapon emerged, this time specifically designed to target Ukrainian national infrastructure in an attack quickly attributed to Russian military intelligence. The NotPetya cyberattack almost brought the country to its knees. Radiation monitoring systems at Chernobyl lost power. ATMs stopped dispensing money. Mobile phones went silent. Ten percent of the country’s computers were infected, and basic infrastructure from the electrical grid to the Ukrainian State Savings Bank went down. Major multinationals like the shipping giant Maersk were immobilized, collateral damage.

Here is a parable for technology in the twenty-first century. Software created by the security services of the world’s most technologically sophisticated state is leaked or stolen. From there it finds its way into the hands of digital terrorists working for one of the world’s most failed states and capricious nuclear powers. It is then weaponized, turned against the core fabric of the contemporary state: health services, transport and power infrastructures, essential businesses in global communications and logistics. In other words, thanks to a basic failure of containment, a global superpower became a victim of its own powerful and supposedly secure technology.

This is uncontained asymmetry in action.


Luckily, the ransomware attacks described above relied on conventional cyberweapons. Luckily, that is, in the sense that they did not rely on the features of the coming wave. Their power and potential were limited. The nation-state was scratched and bruised, but it wasn't fundamentally undermined. Yet it is a matter of when, not if, the next attack will occur, and next time we may not be so lucky.

It’s tempting to argue that cyberattacks are far less effective than we might have imagined, given the speed at which critical systems recovered from attacks like WannaCry. With the coming wave, that assumption is a serious mistake. Such attacks demonstrate that there are those who would use cutting-edge technologies to degrade and disable key state functions; they show that core institutions of modern life are vulnerable. It was a lone individual and a private company (Microsoft) that patched up the systemic weakness. The attack did not respect national boundaries, and government’s role in handling the crisis was limited.

Now imagine if, instead of accidentally leaving open a loophole, the hackers behind WannaCry had designed the program to systematically learn about its own vulnerabilities and repeatedly patch them. Imagine if, as it attacked, the program evolved to exploit further weaknesses. Imagine that it then started moving through every hospital, every office, every home, constantly mutating, learning. It could hit life-support systems, military infrastructure, transport signaling, the energy grid, financial databases. As it spread, imagine the program learning to detect and stop further attempts to shut it down. A weapon like this is on the horizon if not already in development.

WannaCry and NotPetya are limited compared with the kinds of increasingly general-purpose learning agents that will make up the next generation of cyberweapons, which threaten to bring on national emergency 2.0. Today’s cyberattacks are not the real threat; they are the canary in the coal mine of a new age of vulnerability and instability, degrading the nation-state’s role as the sole arbiter of security.

Here is a specific, near-term application of next-wave technology fraying the state’s fabric. In this chapter, we look at how this and other stressors chip away at the very edifice responsible for governing technology. These fragility amplifiers, system shocks, emergencies 2.0, will greatly exacerbate existing challenges, shaking the state’s foundation, upsetting our already precarious social balance. This is, in part, a story of who can do what, a story of power and where it lies.

THE PLUMMETING COST OF POWER

 

Power is “the ability or capacity to do something or act in a particular way;…to direct or influence the behavior of others or the course of events.” It’s the mechanical or electrical energy that underwrites civilization. The bedrock and central principle of the state. Power in one form or another shapes everything. And it, too, is about to be transformed.

Technology is ultimately political because technology is a form of power. And perhaps the single overriding characteristic of the coming wave is that it will democratize access to power. As we saw in part 2, it will enable people to do things in the real world. I think of it like this: just as the costs of processing and broadcasting information plummeted in the consumer internet era, the cost of actually doing something, taking action, projecting power, will plummet with the next wave. Knowing is great, but doing is much more impactful.

Instead of just consuming content, anyone can produce expert-quality video, images, and text. AI doesn’t just help you find information for that best man speech; it will write the speech, too. And all on a scale unseen before. Robots won’t just manufacture cars and organize warehouse floors; they’ll be available to every garage tinkerer with a little time and imagination. The past wave enabled us to sequence, or read, DNA. The coming wave will make DNA synthesis universally available.

Wherever power is today, it will be amplified. Anyone with goals—that is, everyone—will have huge help in realizing them. Overhauling a business strategy, putting on social events for a local community, or capturing enemy territory all get easier. Building an airline or grounding a fleet are both more achievable. Whether it’s commercial, religious, cultural, or military, democratic or authoritarian, every possible motivation you can think of can be dramatically enhanced by having cheaper power at your fingertips.

Today, no matter how wealthy you are, you simply cannot buy a more powerful smartphone than is available to billions of people. This phenomenal achievement of civilization is too often overlooked. In the next decade, access to ACIs will follow the same trend. Those same billions will soon have broadly equal access to the best lawyer, doctor, strategist, designer, coach, executive assistant, negotiator, and so on. Everyone will have a world-class team on their side and in their corner.

This will be the greatest, most rapid accelerant of wealth and prosperity in human history. It will also be one of the most chaotic. If everyone has access to more capability, that clearly also includes those who wish to cause harm. With technology evolving faster than defensive measures, bad actors, from Mexican drug cartels to North Korean hackers, are given a shot in the arm. Democratizing access necessarily means democratizing risk.

We are about to cross a critical threshold in the history of our species. This is what the nation-state will have to contend with over the next decade. In this chapter we run through some of the key examples of fragility amplification stemming from the coming wave. First let’s look more closely at this near-term risk: how bad actors will be able to launch new offensive operations. Such attacks could be lethal, broadly accessible, and carried out at scale with impunity.

ROBOTS WITH GUNS:
THE PRIMACY OF OFFENSE

 

In November 2020, Mohsen Fakhrizadeh was the head scientist and linchpin of Iran’s long effort to attain nuclear weapons. Patriotic, dedicated, highly experienced, he was a prime target for Iran’s adversaries. Cognizant of the risks, he kept his whereabouts and movements cloaked in secrecy with help from Iran’s security services.

As Fakhrizadeh’s heavily guarded motorcade drove down a dusty road to his country house near the Caspian Sea, it suddenly screeched to a halt. The scientist’s vehicle was hit by a barrage of bullets. Wounded, Fakhrizadeh stumbled out of his car, only to be killed by a second burst of machine-gun fire that tore through him. His bodyguards, members of Iran’s Revolutionary Guard, scrambled to make sense of what was happening. Where was the shooter? A few moments later there was an explosion, and a nearby pickup truck erupted into flames.

The truck, however, was empty save for a gun. There were no assassins on the ground that day. In the words of a New York Times investigation, this was a “debut test of a high-tech, computerized sharpshooter kitted out with artificial intelligence and multiple-camera eyes, operated via satellite and capable of firing 600 rounds a minute.” Mounted on a strategically parked but innocuous-looking pickup truck fitted with cameras, it was a kind of robot weapon assembled by Israeli agents. A human authorized the strike, but it was the AI that automatically adjusted the gun’s aim. Just fifteen bullets were fired and one of the most high-profile and well-guarded people in Iran was killed in under a minute. The explosion was merely a failed attempt to hide the evidence.

Fakhrizadeh’s assassination is a harbinger of what’s to come. More sophisticated armed robots will further reduce barriers to violence. Videos of the latest generation of robots, with names like Atlas and BigDog, are easy to find on the internet. Here you’ll see stocky, strange-looking humanoids and small doglike robots scamper over obstacle courses. They look curiously unbalanced yet navigate complex landscapes with an uncanny motion, their heavy-looking frames never toppling. They do backflips, jumps, spins, and tricks. Push them over, and they calmly, inexorably get up, ready to do it again and again. It’s spooky.

Now imagine robots equipped with facial recognition, DNA sequencing, and automatic weapons. Future robots may not take the form of scampering dogs. Miniaturized even further, they will be the size of a bird or a bee, armed with a small firearm or a vial of anthrax. They might soon be accessible to anyone who wants them. This is what bad actor empowerment looks like.


The cost of military-grade drones has fallen by three orders of magnitude over the last decade. By 2028, $26 billion a year will be spent on military drones, and at that point many are likely to be fully autonomous.

Live deployments of autonomous drones are becoming more plausible by the day. In May 2021, for example, an AI drone swarm was used in Gaza to find, identify, and attack Hamas militants. Start-ups like Anduril, Shield AI, and Rebellion Defense have raised hundreds of millions of dollars to build autonomous drone networks and other military applications of AI. Complementary technologies like 3-D printing and advanced mobile communications will reduce the cost of tactical drones to a few thousand dollars, putting them within reach of everyone from amateur enthusiasts to paramilitaries to lone psychopaths.

In addition to easier access, AI-enhanced weapons will improve themselves in real time. WannaCry’s impact ended up being far more limited than it could have been. Once the software patch was applied, the immediate issue was resolved. AI transforms this kind of attack. AI cyberweapons will continuously probe networks, adapting themselves autonomously to find and exploit weaknesses. Existing computer worms replicate themselves using a fixed set of preprogrammed heuristics.

But what if you had a worm that improved itself using reinforcement learning, experimentally updating its code with each network interaction, each time finding more and more efficient ways to take advantage of cyber vulnerabilities? Just as systems like AlphaGo learn unexpected strategies from millions of self-played games, so too will AI-enabled cyberattacks. However much you war-game every eventuality, there’s inevitably going to be a tiny vulnerability discoverable by a persistent AI.

Everything from cars and planes to fridges and data centers relies on vast code bases. The coming AIs make it easier than ever to identify and exploit weaknesses. They could even find legal or financial means of damaging corporations or other institutions, hidden points of failure in banking regulation or technical safety protocols. As the cybersecurity expert Bruce Schneier has pointed out, AIs could digest the world’s laws and regulations to find exploits, arbitraging legalities. Imagine a huge cache of documents leaked from a company. A legal AI might be able to parse this against multiple legal systems, figure out every possible infraction, and then hit that company with multiple crippling lawsuits around the world at the same time. AIs could develop automated trading strategies designed to destroy competitors’ positions or create disinformation campaigns (more on this in the next section) engineering a run on a bank or a product boycott, enabling a competitor to swoop in and buy the company—or simply watch it collapse.

AI adept at exploiting not just financial, legal, or communications systems but also human psychology, our weaknesses and biases, is on the way. Researchers at Meta created a program called CICERO. It became an expert at playing the board game Diplomacy, a game in which planning long, complex strategies built around deception and backstabbing is integral. It shows how AIs could help us plan and collaborate, but also hints at how they could develop psychological tricks to gain trust and influence, reading and manipulating our emotions and behaviors with a frightening level of depth, a skill useful in, say, winning at Diplomacy or in electioneering and building a political movement.

The space for possible attacks against key state functions grows even as the same premise that makes AI so powerful and exciting—its ability to learn and adapt—empowers bad actors.


For centuries, cutting-edge offensive capabilities, like massed artillery, naval broadsides, tanks, aircraft carriers, or ICBMs, were initially so costly that they remained the province of the nation-state. Today’s equivalents evolve so fast that they quickly proliferate into the hands of research labs, start-ups, and garage tinkerers. Just as social media’s one-to-many broadcast effect means a single person can suddenly broadcast globally, so the capacity for far-reaching consequential action is becoming available to everyone.

This new dynamic—where bad actors are emboldened to go on the offensive—opens up new vectors of attack thanks to the interlinked, vulnerable nature of modern systems: not just a single hospital but an entire health system can be hit; not just a warehouse but an entire supply chain. With lethal autonomous weapons the costs, in both material and above all human terms, of going to war, of attacking, are lower than ever. At the same time, all this introduces greater levels of deniability and ambiguity, degrading the logic of deterrence. If no one can be sure who initiated an assault, or what exactly has happened, why not go ahead?

When non-state and bad actors are empowered in this way, one of the core propositions of the state is undermined: the semblance of a security umbrella for citizens is deeply damaged. Provisions of safety and security are fundamental underpinnings of the nation-state system, not nice-to-have add-ons. States broadly know how to respond to questions of law and order, or direct attacks from hostile countries. But this is far murkier, more amorphous and asymmetric, blurring the lines of territoriality and easy attribution.

How does a state maintain the confidence of its citizens, uphold that grand bargain, if it fails to offer the basic promise of security? How can it ensure that hospitals will keep running, schools stay open, lights remain—literally—switched on in this world? If the state can’t protect you and your family, what’s the point of compliance and belonging? If we feel that the fundamentals—the electricity running our houses, the transport systems getting us around, the energy networks keeping us warm, our personal everyday security—are falling to pieces and there is nothing either we or the government can do, a foundation of the system is chipped away. If the state began in new forms of war, perhaps it will end in the same way.

Throughout history technology has produced a delicate dance of offensive and defensive advantage, the pendulum swinging between the two but a balance roughly holding: for every new projectile or cyberweapon, a potent countermeasure has quickly arisen. Cannons may wear down a castle’s walls, but they can also rip apart an invading army. Now powerful, asymmetric, omni-use technologies are certain to reach the hands of those who want to damage the state. While defensive operations will be strengthened in time, the nature of the four features favors offense: this proliferation of power is just too wide, fast, and open. An algorithm of world-changing significance can be stored on a laptop; soon it won’t even require the kind of vast, regulatable infrastructure of the last wave and the internet. Unlike an arrow or even a hypersonic missile, AI and bioagents will evolve more cheaply, more rapidly, and more autonomously than any technology we’ve ever seen. Consequently, without a dramatic set of interventions to alter the current course, millions will have access to these capabilities in just a few years.

Maintaining a decisive, indefinite strategic advantage across such a broad spectrum of general-use technologies is simply not possible. Eventually, the balance might be restored, but not before a wave of immensely destabilizing force is unleashed. And as we’ve seen, the nature of the threat is far more widespread than blunt forms of physical assault. Information and communication together form their own escalating vector of risk, another emerging fragility amplifier requiring attention.

Welcome to the deepfake era.

THE MISINFORMATION MACHINE

 

In the 2020 local elections in India, the Bharatiya Janata Party Delhi president, Manoj Tiwari, was filmed making a campaign speech—in both English and a local Hindi dialect. Both looked and sounded convincingly real. In the video he goes on the attack, accusing the head of a rival party of having “cheated us.” But the version in the local dialect was a deepfake, a new kind of AI-enabled synthetic media. Produced by a political communications firm, it exposed the candidate to new, hard-to-reach constituencies. Lacking awareness of the discourse around fake media, many assumed it was real. The company behind the deepfake argued it was a “positive” use of the technology, but to any sober observer this incident heralded a perilous new age in political communication. In another widely publicized incident, a clip of Nancy Pelosi was reedited to make her look ill and impaired, and then circulated rapidly on social media.

Ask yourself, what happens when anyone has the power to create and broadcast material with incredible levels of realism? These examples occurred before generating near-perfect deepfakes—whether text, images, video, or audio—became as easy as typing a query into Google. As we saw in chapter 4, large language models now show astounding results at generating synthetic media. A world of deepfakes indistinguishable from conventional media is here. These fakes will be so good our rational minds will find it hard to accept they aren’t real.

Deepfakes are spreading fast. If you want to watch a convincing fake of Tom Cruise preparing to wrestle an alligator, well, you can. More and more everyday people will be imitated as the required training data falls to just a handful of examples. It’s already happening. In 2021, a bank in Hong Kong transferred millions of dollars to fraudsters after one of its clients was impersonated by a deepfake. Sounding identical to the real client, the fraudsters phoned the bank manager and explained how the company needed to move money for an acquisition. All the documents seemed to check out, and the voice and character were flawlessly familiar, so the manager initiated the transfer.

Anyone motivated to sow instability now has an easier time of it. Say three days before an election the president is caught on camera using a racist slur. The campaign press office strenuously denies it, but everyone knows what they’ve seen. Outrage seethes around the country. Polls nose-dive. Swing states suddenly shift toward the opponent, who, against all expectations, wins. A new administration takes charge. But the video is a deepfake, one so sophisticated it evades even the best fake-detecting neural networks.

The threat here lies not so much in extreme cases as in subtle, nuanced, and highly plausible scenarios exaggerated and distorted. It’s not the president charging into a school screaming nonsensical rubbish while hurling grenades; it’s the president resignedly saying he has no choice but to institute a set of emergency laws or reintroduce the draft. It’s not Hollywood fireworks; it’s the purported surveillance camera footage of a group of white policemen caught on tape beating a Black man to death.

Sermons from the radical preacher Anwar al-Awlaki inspired the Boston Marathon bombers, the attackers of Charlie Hebdo in Paris, and the shooter who killed forty-nine people at an Orlando nightclub. Yet al-Awlaki died in 2011, the first U.S. citizen killed by a U.S. drone strike, before any of these events. His radicalizing messages were, though, still available on YouTube until 2017. Suppose that using deepfakes new videos of al-Awlaki could be “unearthed,” each commanding further targeted attacks with precision-honed rhetoric. Not everyone would buy it, but those who wanted to believe would find it utterly compelling.

Soon these videos will be fully and believably interactive. You are talking directly to him. He knows you and adapts to your dialect and style, plays on your history, your personal grievances, your bullying at school, your terrible, immoral Westernized parents. This is not disinformation as blanket carpet bombing; it’s disinformation as surgical strike.

Phishing attacks against politicians or businesspeople, disinformation with the aim of major financial-market disruption or manipulation, media designed to poison key fault lines like sectarian or racial divides, even low-level scams—trust is damaged and fragility again amplified.

Eventually entire and rich synthetic histories of seemingly real-world events will be easy to generate. Individual citizens won’t have the time or the tools to verify a fraction of the content coming their way. Fakes will easily pass sophisticated checks, let alone a two-second smell test.

STATE-SPONSORED INFO ASSAULTS

 

In the 1980s, the Soviet Union funded disinformation campaigns suggesting that the AIDS virus was the result of a U.S. bioweapons program. Years later, some communities were still dealing with the mistrust and fallout. The campaigns, meanwhile, have not stopped. According to Facebook, Russian agents created no fewer than eighty thousand pieces of organic content that reached 126 million Americans on its platforms during the 2016 election.

AI-enhanced digital tools will exacerbate information operations like these, meddling in elections, exploiting social divisions, and creating elaborate astroturfing campaigns to sow chaos. Unfortunately, it’s far from just Russia. More than seventy countries have been found running disinformation campaigns. China is quickly catching up with Russia; others from Turkey to Iran are developing their skills. (The CIA, too, is no stranger to info ops.)

Early in the COVID-19 pandemic a blizzard of disinformation had deadly consequences. A Carnegie Mellon study analyzed more than 200 million tweets discussing COVID-19 at the height of the first lockdown. Eighty-two percent of influential users advocating for “reopening America” were bots. This was a targeted “propaganda machine,” most likely Russian, designed to intensify the worst public health crisis in a century.

Deepfakes automate these information assaults. Until now effective disinformation campaigns have been labor-intensive. While bots and fakes aren’t difficult to make, most are of low quality, easily identifiable, and only moderately effective at actually changing targets’ behavior.

High-quality synthetic media changes this equation. Not all nations currently have the funds to build huge disinformation programs, with dedicated offices and legions of trained staff, but that’s less of a barrier when high-fidelity material can be generated at the click of a button. Much of the coming chaos will not be accidental. It will come as existing disinformation campaigns are turbocharged, expanded, and devolved out to a wide group of motivated actors.

The rise of synthetic media at scale and minimal cost amplifies both disinformation (malicious and intentionally misleading information) and misinformation (a wider and more unintentional pollution of the information space) at once. Cue an “Infocalypse,” the point at which society can no longer manage a torrent of sketchy material, where the information ecosystem grounding knowledge, trust, and social cohesion, the glue holding society together, falls apart. In the words of a Brookings Institution report, ubiquitous, perfect synthetic media means “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”

Not all stressors and harms come from bad actors, however. Some come from the best of intentions. Amplification of fragility is accidental as well as deliberate.

LEAKY LABS AND
UNINTENDED INSTABILITY

 

In one of the world’s most secure laboratories, a group of researchers were experimenting with a deadly pathogen. No one can be sure what happened next. Even with the benefit of hindsight, detail about the research is scant. What is certain is that, in a country famed for secrecy and government control, a strange new illness began appearing.

Soon it was found around the world, in the U.K., the United States, and beyond. Oddly, this didn’t seem like an entirely natural strain of the disease. Certain features raised alarm in the scientific community and suggested that something at the lab had gone horribly wrong, that this wasn’t a natural event. Soon the death toll started rising. That hyper-secure lab wasn’t looking so secure after all.

If this sounds like a familiar story, it probably isn’t the one you’re thinking about. This was 1977, and the disease was an influenza epidemic known as the Russian flu. First discovered in China, it was detected in the Soviet Union soon after, spreading from there and reportedly killing up to 700,000 people. What was unusual about the H1N1 flu strain was how closely it resembled one circulating in the 1950s. The disease hit young people hardest, a possible sign that they lacked the immunity of those who had been exposed to the similar strain decades earlier.

Theories abound over what happened. Had something escaped from the permafrost? Was it part of Russia’s extensive and shadowy bioweapons program? To date, though, the best explanation is a lab leak. A version of the earlier virus likely somehow escaped during lab experiments with a vaccine. The epidemic was itself caused by well-meaning research intended to prevent epidemics.

Biological labs are subject to global standards that should stop accidents. The most secure are known as biosafety level 4 (BSL-4) labs. They represent the highest standards of containment for working with the most dangerous pathogenic materials. Facilities are completely sealed. Entry is by air lock. Everything going in and out gets thoroughly checked. Everyone wears a pressurized suit. Anyone leaving needs to shower. All materials are disposed of, subject to the strictest protocols. Sharp edges of any kind, capable of puncturing gloves or suits, are banned. Researchers at BSL-4 labs are quite rightly trained to create the most bio-secure environments humanity has ever seen.

And yet accidents and leaks still happen. The 1977 Russian flu is just one example. Just two years later anthrax spores were accidentally released from a secret Soviet bioweapons facility, producing a fifty-kilometer trail of disease that killed at least sixty-six people.

In 2007 a leaking pipe at the U.K.’s Pirbright Institute, which includes BSL-4 labs, caused an outbreak of foot-and-mouth disease costing £147 million. In 2021, a pharmaceutical company researcher near Philadelphia left smallpox vials in an unmarked, unsecured freezer. They were discovered by someone cleaning out the freezer, who was fortunate to be wearing a mask and gloves. Had the virus escaped, the consequences would have been catastrophic. Before it was eradicated, smallpox killed an estimated 300 to 500 million people in the twentieth century alone, with a reproduction rate equivalent to more contagious strains of COVID-19, but with a mortality rate thirty times that of COVID.

SARS is supposed to be kept in BSL-3 conditions, but it has escaped from virology labs in Singapore, Taiwan, and China. Quite incredibly, it escaped four times from the same laboratory in Beijing. The errors were all too human and mundane. The Singapore case was down to a graduate student unaware of the presence of SARS. In Taiwan a research scientist mishandled biohazardous waste. In Beijing, the leaks were attributed to poor deactivation of the virus and handling in non-biosecure labs. And all that’s before you even mention Wuhan, home to the world’s largest BSL-4 lab and a center for coronavirus research.

Even as the number of BSL-4 labs booms, only a quarter of them score highly on safety, according to the Global Health Security Index. Between 1975 and 2016, researchers cataloged at least seventy-one either deliberate or accidental exposures to highly infectious and toxic pathogens. Most were tiny accidents that even the most highly trained human is surely bound to make sometimes—a slip with a needle, a spilled vial, an experiment prepared with a small error. Our picture is almost certainly incomplete. Few researchers report accidents publicly or promptly. A survey of biosafety officers found that most never reported accidents beyond their institution. A U.S. risk assessment from 2014 estimated that over a decade the chance of “a major lab leak” across ten labs was 91 percent; the risk of a resulting pandemic, 27 percent.

Nothing should get out. Yet pathogens do, time and again. Despite being some of the toughest around, the protocols, technologies, and regulations for containment fail. A shaking pipette. A punctured piece of plastic sheeting. A drop of solution spilled on a shoe. These are tangible failures of containment. Accidental. Incidental. Occurring with a grim, inevitable regularity. In the age of synthetic life, though, this pattern introduces the chance of accidents that could represent both an enormous stressor and something we’ll return to later in part 3—catastrophe.


Few areas of biology are as controversial as gain-of-function (GOF) research. Put simply, gain-of-function experiments deliberately engineer pathogens to be more lethal or infectious, or both. In nature, viruses usually trade off lethality for transmissibility. The more transmissible a virus, the less lethal it often is. But there is no absolute reason this must be so. One way of understanding how it might happen—that is, how viruses might become more lethal and transmissible at the same time—and how we might combat that is to, well, make it happen.

That’s where gain-of-function research comes in. Researchers investigate pathogens’ incubation times, how they evade vaccine-induced immunity, or how they can spread asymptomatically through a population. Work like this has been undertaken on diseases including Ebola, influenzas like H1N1, and measles.

Such research efforts are generally credible and well intentioned. Work with an avian flu in Holland and the United States around a decade ago is a good example. This disease had shockingly high mortality rates but luckily was very difficult to catch. Researchers wanted to understand how that picture might change, how this disease could morph into a more transmissible form, and used ferrets to see how this might occur. In other words they made a deadly disease in principle easier to catch.

It doesn’t take a wild imagination, however, to envisage how such research could go wrong. Deliberately engineering or evolving viruses like this was, some felt, myself included, a bit like playing with the nuclear trigger.

Gain-of-function research is, suffice it to say, controversial. For a time U.S. funding agencies imposed a moratorium on funding it. In a classic failure of containment, such work resumed in 2019. There is at least some indication that COVID-19 may have been genetically altered, and a growing body of (circumstantial) evidence, from the Wuhan Institute’s track record to the molecular biology of the virus itself, suggests a lab leak might have been the origin of the pandemic.

Both the FBI and the U.S. Department of Energy believe this to be the case, with the CIA undecided. Unlike in previous outbreaks, there is no smoking gun for zoonotic transmission. It’s eminently plausible that biological research has already killed millions, brought society worldwide to a standstill, and cost trillions of dollars. In late 2022, an NIH-funded study at Boston University combined the original, more deadly strain of COVID with the spike protein of the more transmissible omicron variant. Many felt the research shouldn’t have gone ahead, but there it was, funded by public money.

This is not about bad actors weaponizing technology; this is about unintended consequences from good people who want to improve health outcomes. It’s about what goes wrong when powerful tools proliferate, what mistakes get made, what “revenge effects” unfurl, what random, unforeseen mess results from technology’s collision with reality. Off the drawing board, away from the theory, that central problem of uncontained technology holds even with the best of intentions.

GOF research is meant to keep people safe. Yet it inevitably occurs in a flawed world, where labs leak, where pandemics happen. Whatever actually happened in Wuhan, it remains grimly plausible that such research on coronaviruses was taking place there and could have leaked. The historical record of lab leaks is hard to overlook.


Gain-of-function research and lab leaks are just two particularly sharp examples of how the coming wave will introduce a plethora of revenge effects and inadvertent failure modes. If every half-competent lab or even random biohacker can embark on this research, tragedy cannot be indefinitely postponed. It was this kind of scenario that was outlined to me in that seminar I mentioned in chapter 1.

As the power and spread of any technology grows, so its failure modes escalate. If a plane crashes, it’s a terrible tragedy. But if a whole fleet of planes crashes, it’s something altogether more frightening. To reiterate: these risks are not about malicious harm; they come from simply operating on the bleeding edge of the most capable technologies in history, widely embedded throughout core societal systems. A lab leak is just one good example of unintended consequences, the heart of the containment problem, a coming-wave equivalent of reactor meltdowns or lost warheads. Accidents like this create another unpredictable stressor, another splintering crack in the system.

Yet stressors might also be less discrete events, less a robot attack, lab leak, or deepfake video, and more a slow and diffuse process undermining foundations. Consider that throughout history, tools and technologies have been designed to help us do more with less. Each individual instance counts for almost nothing. But what happens if the ultimate side effect of these compounding efficiencies is that humans aren’t needed for much work at all?

THE AUTOMATION DEBATE

 

In the years since I co-founded DeepMind, no AI policy debate has been given more airtime than the future of work—to the point of oversaturation.

Here was the original thesis. In the past, new technologies put people out of work, producing what the economist John Maynard Keynes called “technological unemployment.” In Keynes’s view, this was a good thing, with increasing productivity freeing up time for further innovation and leisure. Examples of tech-related displacement are myriad. The introduction of power looms put old-fashioned weavers out of business; motorcars meant that carriage makers and horse stables were no longer needed; lightbulb factories did great as candlemakers went bust.

Broadly speaking, when technology damaged old jobs and industries, it also produced new ones. Over time these new jobs tended toward service industry roles and cognitive-based white-collar jobs. As factories closed in the Rust Belt, demand for lawyers, designers, and social media influencers boomed. So far at least, in economic terms, new technologies have not ultimately replaced labor; they have in the aggregate complemented it.

But what if new job-displacing systems scale the ladder of human cognitive ability itself, leaving nowhere new for labor to turn? If the coming wave really is as general and wide-ranging as it appears, how will humans compete? What if a large majority of white-collar tasks can be performed more efficiently by AI? In few areas will humans still be “better” than machines. I have long argued this is the more likely scenario. With the arrival of the latest generation of large language models, I am now more convinced than ever that this is how things will play out.

These tools will only temporarily augment human intelligence. They will make us smarter and more efficient for a time, and will unlock enormous amounts of economic growth, but they are fundamentally labor replacing. They will eventually do cognitive labor more efficiently and more cheaply than many people working in administration, data entry, customer service (including making and receiving phone calls), writing emails, drafting summaries, translating documents, creating content, copywriting, and so on. In the face of an abundance of ultra-low-cost equivalents, the days of this kind of “cognitive manual labor” are numbered.

We are only just now starting to see what impact this new wave is about to have. Early analysis of ChatGPT suggests it boosts the productivity of “mid-level college educated professionals” by 40 percent on many tasks. That in turn could affect hiring decisions: a McKinsey study estimated that more than half of all jobs could see many of their tasks automated by machines in the next seven years, and that fifty-two million Americans will be working in roles with a “medium exposure to automation” by 2030.

The economists Daron Acemoglu and Pascual Restrepo estimate that robots cause the wages of local workers to fall. With each additional robot per thousand workers there is a decline in the employment-to-population ratio, and consequently a fall in wages. Today algorithms perform the vast bulk of equity trades and increasingly act across financial institutions, and yet, even as Wall Street booms, it sheds jobs as technology encroaches on more and more tasks.

Many remain unconvinced. Economists like David Autor argue that new technology consistently raises incomes, creating demand for new labor. Technology makes companies more productive, it generates more money, which then flows back into the economy. Put simply, demand is insatiable, and this demand, stoked by the wealth technology has generated, gives rise to new jobs requiring human labor. After all, skeptics say, ten years of deep learning success has not unleashed a jobs automation meltdown. Buying into that fear was, some argue, just a repeat of the old “lump of labor” fallacy, which erroneously claims there is only a set amount of work to go around. Instead, the future looks more like billions of people working in high-end jobs still barely conceived of.

I believe this rosy vision is implausible over the next couple of decades; automation is unequivocally another fragility amplifier. As we saw in chapter 4, AI’s rate of improvement is well beyond exponential, and no obvious ceiling is in sight. Machines are rapidly imitating all kinds of human abilities, from vision to speech and language. Even without fundamental progress toward “deep understanding,” new language models can read, synthesize, and generate eye-wateringly accurate and highly useful text. There are literally hundreds of roles where this single skill alone is the core requirement, and yet there is so much more to come from AI.

Yes, it’s almost certain that many new job categories will be created. Who would have thought that “influencer” would become a highly sought-after role? Or imagined that in 2023 people would be working as “prompt engineers”—nontechnical programmers of large language models who become adept at coaxing out specific responses? Demand for masseurs, cellists, and baseball pitchers won’t go away. But my best guess is that new jobs won’t come in the numbers or on the timescale needed to truly help. The number of people who can get a PhD in machine learning will remain tiny in comparison to the scale of layoffs. And, sure, new demand will create new work, but that doesn’t mean it all gets done by human beings.

Labor markets also have immense friction in terms of skills, geography, and identity. Consider that in the last bout of deindustrialization the steelworker in Pittsburgh or the carmaker in Detroit could hardly just up sticks, retrain mid-career, and get a job as a derivatives trader in New York or a branding consultant in Seattle or a schoolteacher in Miami. If Silicon Valley or the City of London creates lots of new jobs, it doesn’t help people on the other side of the country if they don’t have the right skills or aren’t able to relocate. If your sense of self is wedded to a particular kind of work, it’s little consolation if you feel your new job demeans your dignity.

Working on a zero-hours contract in a distribution center doesn’t provide the sense of pride or social solidarity that came from working for a booming Detroit auto manufacturer in the 1960s. The Private Sector Job Quality Index, a measure of how many jobs provide above-average income, has plunged since 1990; it suggests that well-paying jobs as a proportion of the total have already started to fall.

Countries like India and the Philippines have seen a huge boom from business process outsourcing, creating comparatively high-paying jobs in places like call centers. It’s precisely this kind of work that will be targeted by automation. New jobs might be created in the long term, but for millions they won’t come quickly enough or in the right places.

At the same time, a jobs recession will crater tax receipts, damaging public services and calling into question welfare programs just as they are most needed. Even before jobs are decimated, governments will be stretched thin, struggling to meet all their commitments, finance themselves sustainably, and deliver services the public has come to expect. Moreover, all this disruption will happen globally, on multiple dimensions, affecting every rung of the development ladder from primarily agricultural economies to advanced service-based sectors. From Lagos to L.A., pathways to sustainable employment will be subject to immense, unpredictable, and fast-evolving dislocations.

Even those who don’t foresee the most severe outcomes of automation still accept that it is on course to cause significant medium-term disruptions. Whichever side of the jobs debate you fall on, it’s hard to deny that the ramifications will be hugely destabilizing for hundreds of millions who will, at the very least, need to re-skill and transition to new types of work. Optimistic scenarios still involve troubling political ramifications from broken government finances to underemployed, insecure, and angry populations.

It augurs trouble. Another stressor in a stressed world.


Labor market disruptions are, like social media, fragility amplifiers. They damage and undermine the nation-state. The first signs of this are coming into view, but, as with social media in the late 2000s, it’s not yet clear what the exact shape and extent of the implications will be. In any case, just because the consequences aren’t yet evident doesn’t mean they can be wished away.

The stressors outlined in this chapter (which are by no means exhaustive)—new forms of attack and vulnerability, the industrialization of misinformation, lethal autonomous weapons, accidents like lab leaks, and the consequences of automation—are all familiar to people in tech, policy, and security circles. Yet they are too often viewed in isolation. What gets lost in the analysis is that all these new pressures on our institutions stem from the same underlying general-purpose revolution, and that they will arrive together, simultaneous stressors intersecting, buttressing, and boosting one another. The full amplification of fragility is missed because it often appears as if these impacts were happening incrementally and in convenient silos. They are not. They stem from a single coherent and interrelated phenomenon manifesting itself in different ways. The reality is much more enmeshed, entwined, emergent, and chaotic than any sequential presentation can convey. Fragility, amplified. The nation-state, weakened.

The nation-state has weathered bouts of instability before. What’s different here is that a general-purpose revolution is not limited to specific niches, given problems, neatly demarcated sectors. It is, by definition, everywhere. The falling costs of power, of doing, aren’t just a matter of rogue bad actors or nimble start-ups, of cloistered and limited applications.

Instead, power is redistributed and reinforced across the entire sum and span of society. The fully omni-use nature of the coming wave means it is found at every level, in every sector, every business, subculture, group, and bureaucracy, in every corner of our world. It produces trillions of dollars in new economic value while also destroying certain existing sources of wealth. Some individuals are greatly enabled; others stand to lose everything. Militarily, it empowers nation-states and militias alike. This is not, then, confined to amplifying specific points of fragility; it is, in the slightly longer term, about a transformation of the very ground on which society is built. And in this great redistribution of power, the state, already fragile and growing more so, is shaken to its core, its grand bargain left tattered and precarious.