The history of humanity is, in part, a history of catastrophe. Pandemics feature widely. Two killed up to 30 percent of the world population: the sixth-century Plague of Justinian and the fourteenth-century Black Death. England’s population was seven million in 1300, but by 1450, crushed by waves of the plague, it was down to just two million.
Catastrophes are also, of course, man-made. World War I killed around 1 percent of the global population; World War II, 3 percent. Or take the violence unleashed by Genghis Khan and the Mongol army across China and central Asia in the thirteenth century, which took the lives of up to 10 percent of the world’s population. With the advent of the atomic bomb, humanity now possesses enough lethal force to kill everyone on the planet several times over. Catastrophic events that once took place over years and decades could happen in minutes, at the push of a button.
With the coming wave, we are poised to take yet another such leap, expanding both the upper bound of risk and the number of avenues available to those seeking to unleash catastrophic force. In this chapter, we go beyond fragility and threats to the functioning of the state and envisage what happens—sooner or later—if containment is not possible.
The overwhelming majority of these technologies will be used for good. Although I have focused on their risks, it’s important to keep in mind they will improve countless lives on a daily basis. In this chapter we are looking at extreme edge cases almost no one wants to see, least of all those working with these tools. However, just because they will be a vanishing minority of use cases doesn’t mean we can ignore them. We’ve seen that bad actors can do serious damage, igniting mass instability. Now imagine a world in which any half-competent lab or hacker can synthesize complex strands of DNA. How long before disaster strikes?
Eventually, as some of history’s most powerful technologies percolate everywhere, those edge cases become more likely. Eventually, something will go wrong—at scales and speeds commensurate with the capabilities unleashed. The upshot of the coming wave’s four features is that, absent strong methods of containment operating at every level, catastrophic outcomes like an engineered pandemic are more possible than ever.
That is unacceptable. And yet here’s the dilemma: the most secure solutions for containment are equally unacceptable, leading humanity down an authoritarian and dystopian pathway.
On the one hand, societies could turn toward the kind of tech-enabled total surveillance we saw in the last chapter, a gut response enforcing hard mechanisms against wayward or uncontrolled technology. Security—at the price of freedom. Or humanity might step away from the technological frontier altogether. Although unlikely, it’s no answer. The only entity in principle capable of navigating this existential bind is the same system of nation-states currently falling apart, dragged down by the very forces it needs to contain.
Over time, then, the implications of these technologies will push humanity to navigate a path between the poles of catastrophe and dystopia. This is the essential dilemma of our age.
The promise of technology is that it improves lives, the benefits far outweighing the costs and downsides. This set of wicked choices means that promise has been savagely inverted.
Doom-mongering makes people—myself included—glassy-eyed. At this point, you may be feeling wary or skeptical. Talking of catastrophic effects often invites ridicule: accusations of catastrophism, indulgent negativity, shrill alarmism, navel-gazing on remote and rarefied risks when plenty of clear and present dangers scream for attention. Like breathless techno-optimism, breathless techno-catastrophism is easy to dismiss as a twisted, misguided form of hype unsupported by the historical record.
But a warning’s dramatic implications are not, by themselves, good grounds to reject it automatically. The pessimism-averse complacency greeting the prospect of disaster is itself a recipe for disaster. It feels plausible, rational in its own terms, “smart” to dismiss warnings as the overblown chatter of a few weirdos, but this attitude prepares the way for its own failure.
No doubt, technological risk takes us into uncertain territory. Nonetheless, all the trends point to a profusion of risk. This speculation is grounded in constantly compounding scientific and technological improvements. Those who dismiss catastrophe are, I believe, discounting the objective facts before us. After all, we are not talking here about the proliferation of motorbikes or washing machines.
To see what catastrophic harms we should prepare for, simply extrapolate the bad actor attacks we saw in chapter 10. Here are just a few plausible scenarios.
Terrorists mount automatic weapons equipped with facial recognition on an autonomous drone swarm hundreds or thousands strong, each capable of quickly rebalancing from the weapon’s recoil, firing short bursts, and moving on. These drones are unleashed on a major downtown with instructions to kill a specific profile. At busy rush hour they would operate with terrifying efficiency, following an optimized route around the city. In minutes there would be an attack at far greater scale than, say, the 2008 Mumbai attacks, which saw armed terrorists roaming through city landmarks like the central train station.
A mass murderer decides to hit a huge political rally with drones, spraying devices, and a bespoke pathogen. Soon attendees become sick, then their families. The speaker, a much-loved and much-loathed political lightning rod, is one of the first victims. In a febrile partisan atmosphere an assault like this ignites violent reprisals around the country and the chaos cascades.
Using only natural language instruction, a hostile conspiracist in America disseminates masses of surgically constructed and divisive disinformation. Numerous attempts are made, most of which fail to gain traction. One eventually catches on: a police murder in Chicago. It’s completely fake, but the trouble on the streets, the widespread revulsion, is real. The attackers now have a playbook. By the time the video is verified as a fraud, violent riots with multiple casualties roil around the country, the fires continually stoked by new gusts of disinformation.
Or imagine all that happening at the same time. Or not just at one event or in one city, but in hundreds of places. With tools like this it doesn’t take too much to realize that bad actor empowerment opens the door to catastrophe. Today’s AI systems try hard not to tell you how to poison the water supply or build an undetectable bomb. They are not yet capable of defining or pursuing goals on their own. However, as we have seen, both more widely diffused and less safe versions of today’s cutting-edge and more powerful models are coming, fast.
Of all the catastrophic risks from the coming wave, AI has received the most coverage. But there are plenty more. Once militaries are fully automated, the barriers to entry for conflict will be far lower. A war might be sparked accidentally for reasons that forever remain unclear, AIs detecting some pattern of behavior or threat and then reacting, instantaneously, with overwhelming force. Suffice to say, the nature of that war could be alien, escalate quickly, and be unsurpassed in destructive consequences.
We’ve already come across engineered pandemics and the perils of accidental releases, and glimpsed what happens when millions of self-improvement enthusiasts can experiment with the genetic code of life. An extreme bio-risk event of a less obvious kind, targeting a given portion of the population, say, or sabotaging an ecosystem, cannot be discounted. Imagine activists wanting to stop the cocaine trade inventing a new bug that targets only coca plants as a way to replace aerial fumigation. Or if militant vegans decided to disrupt the entire meat supply chain, with dire anticipated and unanticipated consequences. Either might spiral out of control.
We know what a lab leak might look like in the context of amplifying fragility, but if it were not quickly brought under control, it would rank with previous plagues. To put this in context, the omicron variant of COVID infected a quarter of Americans within a hundred days of first being identified. What if we had a pandemic with, say, a 20 percent mortality rate, but with that kind of transmissibility? Or what if it were a kind of respiratory HIV that would lie incubating for years with no acute symptoms? A novel human-transmissible virus with a reproduction rate of, say, 4 (far below chicken pox or measles) and a case fatality rate of 50 percent (far below Ebola or bird flu) could, even accounting for lockdown-style measures, cause more than a billion deaths in a matter of months. What if multiple such pathogens were released at once? This goes far beyond fragility amplification; it would be an unfathomable calamity.
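To see how those figures compound, here is a minimal sketch: a toy SIR-style epidemic model in Python. It uses the passage’s hypothetical numbers plus assumed round values (an eight-billion-person world, a seven-day infectious period); this is an illustration of the arithmetic, not an epidemiological forecast.

```python
# Back-of-envelope SIR sketch of the hypothetical pathogen described above.
# All parameters are illustrative assumptions, not sourced data.

POPULATION = 8_000_000_000   # assumed world population
R0 = 4.0                     # reproduction rate posited in the text
CFR = 0.5                    # case fatality rate posited in the text
INFECTIOUS_DAYS = 7.0        # assumed average infectious period

gamma = 1.0 / INFECTIOUS_DAYS   # daily removal (recovery or death) rate
beta = R0 * gamma               # daily transmission rate

s, i, r = POPULATION - 1.0, 1.0, 0.0   # susceptible, infected, removed
for day in range(1, 181):              # simulate six months, one-day steps
    new_infections = beta * s * i / POPULATION
    new_removals = gamma * i
    s -= new_infections
    i += new_infections - new_removals
    r += new_removals
    if day % 30 == 0:
        print(f"day {day:3d}: cumulative deaths ~ {r * CFR / 1e9:.2f} billion")

# With R0 = 4 and no mitigation, roughly 98 percent of the population is
# eventually infected, implying a toll near four billion in this toy model.
```

Even halving transmission in this sketch (an effective reproduction rate of 2) still infects most of the population, which is the point: at these parameters the outcome is catastrophic under almost any plausible response.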
Beyond Hollywood clichés, a subculture of academic researchers has pushed an extreme narrative of how AI could instigate an existential disaster. Think an all-powerful machine somehow destroying the world for its own mysterious ends: not some malignant AI wreaking intentional destruction like in the movies, but a full-scale AGI blindly optimizing for an opaque goal, oblivious to human concerns.
The canonical thought experiment is that if you set up a sufficiently powerful AI to make paper clips but don’t specify the goal carefully enough, it may eventually turn the world and maybe even the contents of the entire cosmos into paper clips. Start following chains of logic like this and myriad sequences of unnerving events unspool. AI safety researchers worry (correctly) that should something like an AGI be created, humanity would no longer control its own destiny. For the first time, we would be toppled as the dominant species in the known universe. However clever the designers, however robust the safety mechanisms, accounting for every eventuality and guaranteeing safety is impossible. Even if it were fully aligned with human interests, a sufficiently powerful AI could potentially overwrite its programming, discarding safety and alignment features apparently built in.
Following this line of thinking, I often hear people say something along the lines of “AGI is the greatest risk humanity faces today! It’s going to end the world!” But when pressed on what this actually looks like, how this actually comes about, they become evasive, the answers woolly, the exact danger nebulous. AI, they say, might run away with all the computational resources and turn the whole world into a giant computer. As AI gets more and more powerful, the most extreme scenarios will require serious consideration and mitigation. However, well before we get there, much could go wrong.
Over the next ten years, AI will be the greatest force amplifier in history. This is why it could enable a redistribution of power on a historic scale. The greatest accelerant of human progress imaginable, it will also enable harms—from wars and accidents to random terror groups, authoritarian governments, overreaching corporations, plain theft, and willful sabotage. Think about an ACI capable of easily passing the Modern Turing Test, but turned toward catastrophic ends. Advanced AIs and synthetic biology will not only be available to groups finding new sources of energy or life-changing drugs; they will also be available to the next Ted Kaczynski.
AI is both valuable and dangerous precisely because it’s an extension of our best and worst selves. And as a technology premised on learning, it can keep adapting, probing, producing novel strategies and ideas potentially far removed from anything before considered, even by other AIs. Ask it to suggest ways of knocking out the freshwater supply, or crashing the stock market, or triggering a nuclear war, or designing the ultimate virus, and it will. Soon. Even more than I worry about speculative paper-clip maximizers or some strange, malevolent demon, I worry about what existing forces this tool will amplify in the next ten years.
Imagine scenarios where AIs control energy grids, media programming, power stations, planes, or trading accounts for major financial houses. When robots are ubiquitous, and militaries stuffed with lethal autonomous weapons—warehouses full of technology that can commit autonomous mass murder at the literal push of a button—what might a hack, developed by another AI, look like? Or consider even more basic modes of failure, not attacks, but plain errors. What if AIs make mistakes in fundamental infrastructures, or a widely used medical system starts malfunctioning? It’s not hard to see how numerous, capable, quasi-autonomous agents on the loose, even those chasing well-intentioned but ill-formed goals, might sow havoc. We don’t yet know the implications of AI for fields as diverse as agriculture, chemistry, surgery, and finance. That’s part of the problem; we don’t know what failure modes are being introduced and how deep they could extend.
There is no instruction manual on how to build the technologies in the coming wave safely. We cannot build systems of escalating power and danger to experiment with ahead of time. We cannot know how quickly an AI might self-improve, or what would happen after a lab accident with some not yet invented piece of biotech. We cannot tell what results from a human consciousness plugged directly into a computer, or what an AI-enabled cyberweapon means for critical infrastructure, or how a gene drive will play out in the wild. Once fast-evolving, self-assembling automatons or new biological agents are released, out in the wild, there’s no rewinding the clock. After a certain point, even curiosity and tinkering might be dangerous. Even if you believe the chance of catastrophe is low, that we are operating blind should give you pause.
Nor is building safe and contained technology in itself sufficient. Solving the question of AI alignment doesn’t mean doing so once; it means doing it every time a sufficiently powerful AI is built, wherever and whenever that happens. You don’t just need to solve the question of lab leaks in one lab; you need to solve it in every lab, in every country, forever, even while those same countries are under serious political strain. Once technology reaches a critical capability, it isn’t enough for early pioneers to just build it safely, as challenging as that undoubtedly is. Rather, true safety requires maintaining those standards across every single instance: a mammoth expectation given how fast and widely these technologies are already diffusing.
This is what happens when anyone is free to invent or use tools that affect us all. And we aren’t just talking about access to a printing press or a steam engine, as extraordinary as they were. We are talking about outputs with a fundamentally new character: new compounds, new life, new species.
If the wave is uncontained, it’s only a matter of time. Allow for the possibility of accident, error, malicious use, evolution beyond human control, unpredictable consequences of all kinds. At some stage, in some form, something, somewhere, will fail. And this won’t be a Bhopal or even a Chernobyl; it will unfold on a worldwide scale. This will be the legacy of technologies produced, for the most part, with the best of intentions.
However, not everyone shares those intentions.
Most of the time the risks arising from things like gain-of-function research are a result of sanctioned and benign efforts. They are, in other words, supersized revenge effects, unintended consequences of a desire to do good. Unfortunately, some organizations are founded with precisely the opposite motivation.
Founded in the 1980s, Aum Shinrikyo (Supreme Truth) was a Japanese doomsday cult. The group originated in a yoga studio under the leadership of a man who called himself Shoko Asahara. Building a membership among the disaffected, they radicalized as their numbers swelled, becoming convinced that the apocalypse was nigh, that they alone would survive, and that they should hasten it. Asahara grew the cult to somewhere between forty thousand and sixty thousand members, coaxing a loyal group of lieutenants all the way to using biological and chemical weapons. At Aum Shinrikyo’s peak popularity it is estimated to have held more than $1 billion in assets and counted dozens of well-trained scientists as members. Despite a fascination with bizarre, sci-fi weapons like earthquake-generating machines, plasma guns, and mirrors to deflect the sun’s rays, they were a deadly serious and highly sophisticated group.
Aum built dummy companies and infiltrated university labs to procure material, purchased land in Australia with the intent of prospecting for uranium to build nuclear weapons, and embarked on a huge biological and chemical weapons program in the hilly countryside outside Tokyo. The group experimented with phosgene, hydrogen cyanide, and nerve agents like soman. They planned to engineer and release an enhanced version of anthrax, recruiting a graduate-level virologist to help. Members obtained the bacterium Clostridium botulinum, source of the botulinum neurotoxin, and sprayed it on Narita International Airport, the National Diet Building, the Imperial Palace, the headquarters of another religious group, and two U.S. naval bases. Luckily, they made a mistake in its preparation and no harm ensued.
It didn’t last. In 1994, Aum Shinrikyo sprayed the nerve agent sarin from a truck, killing eight and wounding two hundred. A year later they struck the Tokyo subway, releasing more sarin, killing thirteen and injuring some six thousand people. The subway attack, which involved depositing sarin-filled bags around the metro system, was more harmful partly because of the enclosed spaces. Thankfully neither attack used a particularly effective delivery mechanism. But in the end it was only luck that stopped a more catastrophic event.
Aum Shinrikyo combined an unusual degree of organization with a frightening level of ambition. They wanted to initiate World War III and a global collapse by murdering at shocking scale and began building an infrastructure to do so. On the one hand, it’s reassuring how rare organizations like Aum Shinrikyo are. Of the many terrorist incidents and other non-state-perpetrated mass killings since the 1990s, most have been carried out by disturbed loners or groups with specific political or ideological agendas.
But on the other hand, this reassurance has limits. Procuring weapons of great power was previously a huge barrier to entry, helping keep catastrophe at bay. The sickening nihilism of the school shooter is bounded by the weapons they can access. The Unabomber had only homemade devices. Building and disseminating biological and chemical weapons were huge challenges for Aum Shinrikyo. As a small, fanatical coterie operating in an atmosphere of paranoid secrecy, with only limited expertise and access to materials, they made mistakes.
As the coming wave matures, however, the tools of destruction will, as we’ve seen, be democratized and commoditized. They will have greater capability and adaptability, potentially operating in ways beyond human control or understanding, evolving and upgrading at speed: some of history’s greatest offensive powers, available widely.
Those who would use new technologies like Aum are fortunately rare. Yet even one Aum Shinrikyo every fifty years would now be enough to produce an incident orders of magnitude worse than the subway attack. Cults, lunatics, suicidal states on their last legs, all have motive and now means. As a report on the implications of Aum Shinrikyo succinctly puts it, “We are playing Russian roulette.”
A new phase of history is here. With zombie governments failing to contain technology, the next Aum Shinrikyo, the next industrial accident, the next mad dictator’s war, the next tiny lab leak, will have an impact that is difficult to contemplate.
It’s tempting to dismiss all these dark risk scenarios as the distant daydreams of people who grew up reading too much science fiction, those biased toward catastrophism. Tempting, but a mistake. Regardless of where we are with BSL-4 protocols or regulatory proposals or technical publications on the AI alignment problem, those incentives grind away, the technologies keep developing and diffusing. This is not the stuff of speculative novels and Netflix series. This is real, being worked on right this second in offices and labs around the world.
So serious are the risks, however, that they necessitate consideration of all the options. Containment is about the ability to control technology. Further back, that means the ability to control the people and societies behind it. As catastrophic impacts unfurl or their possibility becomes unignorable, the terms of debate will change. Calls for not just control but crackdowns will grow. The potential for unprecedented levels of vigilance will become ever more appealing. Perhaps it might be possible to spot and then stop emergent threats? Wouldn’t that be for the best—the right thing to do?
It’s my best guess this will be the reaction of governments and populations around the world. When the unitary power of the nation-state is threatened, when containment appears increasingly difficult, when lives are on the line, the inevitable reaction will be a tightening of the grip on power.
The question is, at what cost?
Stopping catastrophe is an obvious imperative. The greater the catastrophe, the greater the stakes, the greater the need for countermeasures. If the threat of disaster becomes too acute, then governments will likely conclude that the only way of stopping it is tightly controlling every aspect of technology, ensuring that nothing slips through a security cordon, that no rogue AI or engineered virus can ever escape, get built, or even be researched.
Technology has penetrated our civilization so deeply that watching technology means watching everything. Every lab, fab, and factory, every server, every new piece of code, every string of DNA synthesized, every business and university, from every biohacker in a shack in the woods to every vast and anonymous data center. To counter calamity in the face of the unprecedented dynamics of the coming wave means an unprecedented response. It means not just watching everything but reserving the capacity to stop it and control it whenever and wherever necessary.
Some will inevitably say this: centralize power to an extreme degree, build the panopticon, and tightly orchestrate every aspect of life to ensure that no pandemic or rogue AI ever emerges. Steadily, many nations will convince themselves that the only way of truly ensuring this is to install the kind of blanket surveillance we saw in the last chapter: total control, backed by hard power. The door to dystopia is cracked open. Indeed, in the face of catastrophe, for some dystopia may feel like a relief.
Suggestions like this remain fringe, especially in the West. However, it seems to me only a matter of time before they grow. The wave provides both motive and means for dystopia, a self-reinforcing “AI-tocracy” of steadily increasing data collection and coercion. If you doubt the appetite for surveillance and control, think about how society-wide closures, inconceivable even a few weeks earlier, suddenly became an inescapable reality during the COVID pandemic. Compliance, at least at the start, was near universal in the face of distressed governments’ pleas to “do your part.” Public tolerance for potent measures in the name of safety appears high.
A cataclysm would galvanize calls for an extreme surveillance apparatus to stop future such events. If or when something goes wrong with technology, how long before the crackdown starts? How could anyone plausibly argue against it in the face of a disaster? How long before the surveillance dystopia puts down roots, one creeping tendril at a time, and grows? As smaller-scale technology failures mount, calls for control increase. As control increases, checks and balances get whittled down, the ground shifts and makes way for further interventions, and a steady downward spiral to techno-dystopia begins.
Trading off liberty and security is an ancient dilemma. It was there in the foundational account of the Leviathan state from Thomas Hobbes. It has never gone away. To be sure, this is often a complex and multidimensional relationship, but the coming wave raises the stakes to a new pitch. What level of societal control is appropriate to stopping an engineered pandemic? What level of interference in other countries is appropriate toward the same end? The consequences for liberty, sovereignty, and privacy have never been so potentially painful.
A repressive surveillance society of transparency and fine-tuned control is, I believe, simply another failure, another way in which the capacities of the coming wave will lead not to human flourishing but to its opposite. Every coercive, biased, and grossly unfair application will stand to be greatly amplified. Hard-won rights and freedoms rolled back. National self-determination, for many nations, at best compromised. Not fragility this time, but outright oppression amplified. If the answer to catastrophe is dystopia like this, then that is no kind of answer at all.
With the architecture of monitoring and coercion being built in China and elsewhere, the first steps have arguably been taken. The threat of cataclysm and the promise of safety will enable many more. Every wave of technology has introduced the high possibility of systemic disruptions to the social order. But they haven’t, until now, introduced wide and systemic risks of globalized disaster. That is what has changed. That is what could prompt a dystopian response.
If zombielike states sleepwalk into catastrophe, their openness and growing chaos a petri dish for uncontained technology, authoritarian states are already gladly charging into just this techno-dystopia, setting the stage, technologically if not morally, for massive invasions of privacy and curtailments of liberty. And on the continuum between the two there is also a chance of the worst of all worlds: scattered but repressive surveillance and control apparatuses that still don’t add up to a watertight system.
Catastrophe and dystopia.
The philosopher of technology Lewis Mumford talked about the “megamachine,” where social systems combine with technologies to form “a uniform, all-enveloping structure” that is “controlled for the benefit of depersonalized collective organizations.” In the name of security, humanity could unleash the megamachine to, literally, stop other megamachines from coming into being. The coming wave then might paradoxically create the very tools needed to contain itself. Yet in doing so, it would open up a failure mode where self-determination, freedom, and privacy are erased, where systems of machine surveillance and control metastasize into society-strangling forms of domination.
To those who might say this repressive picture is where we are now, I’d say it’s nothing compared with what the future might hold. Nor is this the only possible dystopian pathway. There are many others, but this one is directly correlated with both the political challenges of the wave and its catastrophic potential. It is not just a vague thought experiment. Faced with this, we must ask these questions: Even though the drivers behind it seem so great and immovable, should humanity get off the train? Should we reject continual technological development altogether? Might it be time, however improbable, to have a moratorium on technology itself?
Our vast cities, the sturdy civic buildings built of steel and stone, the great chains of roads and rails stitching them all together, the immense landscaping and engineering works that manage their environments: all of it exudes a tempting sense of permanence. Despite the weightlessness of the digital world, there’s a solidness and a profusion to the material world around us. It shapes our everyday expectations.
We go to the supermarket and expect it to be stuffed with fresh fruits and vegetables. We expect it to be kept cool in the summer, warm in the winter. Despite constant turbulence, we assume that the supply chains and affordances of the twenty-first century are as robust as an old town hall. All the most historically extreme parts of our existence appear utterly banal, and so for the most part we carry on our lives as if they can go on indefinitely. Most of those around us, up to and including our leaders, do the same.
And yet, nothing lasts forever. Throughout history, societal collapses have been legion: from ancient Mesopotamia to Rome, the Maya to Easter Island, again and again it’s not just that civilizations don’t last; it’s that unsustainability appears baked in. Civilizations that collapse are not the exception; they are the rule. A survey of sixty civilizations suggests they last about four hundred years on average before falling apart. Without new technologies, they hit hard limits to development—in available energy, in food, in social complexity—that bring them crashing down.
Nothing has changed except this: for hundreds of years constant technological development has seemingly enabled societies to escape the iron trap of history. But it would be wrong to think that this dynamic has come to an end. Twenty-first-century civilization is a long way from the Maya, naturally, but the pressures of a huge and hungry superstructure, a large population, the hard limits of energy and civilizational capacity have not magically gone away; they’ve just been kept at bay.
Suppose there were a world where those incentives could be stopped. Might it be time for a moratorium on technological development altogether? Absolutely not.
Modern civilization writes checks only continual technological development can cash. Our entire edifice is premised on the idea of long-term economic growth. And long-term economic growth is ultimately premised on the introduction and diffusion of new technologies. Whether it’s the expectation of consuming more for less or getting ever more public service without paying more tax, or the idea that we can unsustainably degrade the environment while life keeps getting better indefinitely, the bargain—arguably the grand bargain itself—needs technology.
The development of new technologies is, as we’ve seen, a critical part of meeting our planet’s grand challenges. Without new technologies, these challenges will simply not be met. Costs of the status quo in human and material exploitation cannot be set aside. Our present suite of technologies is in many ways remarkable, but there is little sign that it can be sustainably rolled out to support more than eight billion people at levels those in developed countries take for granted. Unpalatable as it is to some, it’s worth repeating: solving problems like climate change, or maintaining rising living and health-care standards, or improving education and opportunity is not going to happen without delivering new technologies as part of the package.
Pausing technological development, assuming it was possible, would in one sense lead to safety. It would for a start limit the introduction of new catastrophic risks. But it wouldn’t mean successfully avoiding dystopia. Instead, as the unsustainability of twenty-first-century societies began to tell, it would simply deliver another form of dystopia. Without new technologies, sooner or later everything stagnates, and possibly collapses altogether.
Over the next century, the global population will start falling, in some countries precipitously. As the ratio of workers to retirees shifts and the labor force dwindles, economies will simply not be able to function at their present levels. In other words, without new technologies it will be impossible to maintain living standards.
This is a global problem. Countries including Japan, Germany, Italy, Russia, and South Korea are even now approaching a crisis of working-age population. More surprising perhaps is that by the 2050s countries like India, Indonesia, Mexico, and Turkey will be in a similar position. China is a major part of the story of technology in the coming decades, but by the century’s end the Shanghai Academy of Social Sciences predicts the country could have only 600 million people, a staggering reversal of nearly a century’s population increases. China’s total fertility rate is one of the lowest in the world, matched only by neighbors like South Korea and Taiwan. Truth is, China is completely unsustainable without new technology.
This is not only about numbers but about expertise, tax base, and investment levels; retirees will be pulling money out of the system, not investing it for the long term. All of this means that “the governing models of the post–World War II era do not simply go broke, they become societal suicide pacts.” Demographic trends take decades to shift; once born, a generational cohort does not change size. This slow, inexorable decline is already locked in, a looming iceberg we can do nothing to avoid—except find ways of replacing those workers.
Stress on our resources, too, is a certainty. Recall that sourcing materials for cleantech, let alone anything else, is incredibly complex and vulnerable. Demand for lithium, cobalt, and graphite is set to rise 500 percent by 2030. Currently batteries are the best hope for a clean economy, and yet there is barely enough storage capacity to get most places through minutes or even seconds of energy consumption. To replace fast-diminishing stocks or remedy supply chain failure across a whole plethora of materials, we need options. That means new technological and scientific breakthroughs in areas like materials science.
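The “minutes or even seconds” claim is easy to check with rough arithmetic. Here is a sketch using assumed round numbers for a large grid (roughly U.S. scale; neither figure comes from the text):

```python
# Rough arithmetic behind "minutes of storage": compare installed grid-scale
# battery capacity with hourly electricity consumption. Both figures are
# assumed round numbers for illustration, not sourced data.

annual_consumption_twh = 4_000   # assumed yearly electricity use, large grid
battery_capacity_gwh = 30        # assumed installed grid-scale battery storage

hourly_gwh = annual_consumption_twh * 1_000 / 8_760   # TWh/year -> GWh/hour
backup_minutes = battery_capacity_gwh / hourly_gwh * 60

print(f"hourly consumption: ~{hourly_gwh:.0f} GWh")     # ~460 GWh
print(f"battery backup:     ~{backup_minutes:.0f} minutes")  # ~4 minutes
```

On those assumptions, today’s batteries cover a few minutes of demand; the gap between that and a storage-backed clean grid is the breakthrough the text is pointing at.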
Given the population and resource constraints, just standing still would probably require a global two- to threefold productivity improvement, and standing still is not acceptable for the world’s vast majority, among whom, for example, child mortality is twelve times higher than in developed countries. Of course, any continuation at even current levels doesn’t just herald demographic and resource stress; it bolts on climate emergency.
Make no mistake: standstill in itself spells disaster.
This wouldn’t be just a matter of some labor shortages in restaurants and expensive batteries. It would mean the unraveling of every precarious aspect of modern life, with numerous unpredictable downstream effects, intersecting with a host of already unmanageable problems. I think it’s easy to discount how much of our way of life is underwritten by constant technological improvements. Those historical precedents—the norm, remember, for every prior civilization—are screaming loud and clear. Standstill means a meager future of at best decline but probably an implosion that could spiral alarmingly. Some might argue this forms a third pole, a great trilemma. For me that doesn’t quite hold. First, this is by far the least likely option at this stage. And second, if it does happen, it simply restates the dilemma in a new form. A moratorium on technology is not a way out; it’s an invitation to another kind of dystopia, another kind of catastrophe.
Even if it were possible, the idea of stopping the coming wave isn’t a comforting thought. Maintaining, let alone improving, standards of living needs technology. Forestalling a collapse needs technology. The costs of saying no are existential. And yet every path from here brings grave risks and downsides.
This is the great dilemma.
From the start of the nuclear and digital age, this dilemma has been growing clearer. In 1955, toward the end of his life, the mathematician John von Neumann wrote an essay called “Can We Survive Technology?” Foreshadowing the argument here, he believed that global society was “in a rapidly maturing crisis—a crisis attributable to the fact that the environment in which technological progress must occur has become both undersized and underorganized.” At the end of the essay, von Neumann puts survival as only “a possibility,” as well he might in the shadow of the mushroom cloud his own computer had made a reality. “For progress there is no cure,” he writes. “Any attempt to find automatically safe channels for the present explosive variety of progress must lead to frustration.”
I am not alone in wanting to build technology that can reap many of the benefits while closing down the risks. Some will ridicule that ambition as just another form of Silicon Valley hubris, but I’m still convinced that technology remains a primary driver for making improvements to our world and our lives. For all its harms, downsides, and unintended consequences, technology’s contribution to date has been overwhelmingly net positive. After all, even technology’s harshest critics are generally happy to use a kettle, take an aspirin, watch TV, and ride on the subway. For every gun there is a dose of lifesaving penicillin; for every scrap of misinformation, a truth is quickly uncovered.
And yet somehow, from von Neumann and his peers on, I and many others are anxious about the long-term trajectory. My profound worry is that technology shows a real possibility of turning sharply net negative, that we don’t have answers to arrest this shift, and that we’re locked in with no way out.
None of us can be sure how exactly all this unfolds. Within the broad parameters of the dilemma are an immense and unknowable range of specific outcomes. I am, however, confident that the coming decades will see complex, painful trade-offs between prosperity, surveillance, and the threat of catastrophe growing ever more acute. Even a system of states in the best possible health would struggle.
We are facing the ultimate challenge for Homo technologicus.
If this book feels contradictory in its attitude toward technology, part positive and part foreboding, that’s because such a contradictory view is the most honest assessment of where we are. Our great-grandparents would be astonished at the abundance of our world. But they would also be astonished at its fragility and perils. With the coming wave, we face a real threat, a cascade of potentially disastrous consequences—yes, even an existential risk to the species. Technology is the best and worst of us. There isn’t a neat one-sided approach that does it justice. The only coherent approach to technology is to see both sides at the same time.
Over the last decade or so this dilemma has become even more pronounced, the task of tackling it more urgent. Look at the world and it seems that containment is not possible. Follow the consequences and something else becomes equally stark: for everyone’s sake, containment must be possible.