I once intended to write a book painting a rosier picture of the future of technology, and of the future in general. Although the world is far wiser and warier about “tech” these days, there’s still a huge amount to be positive about. But during the COVID-19 pandemic I had time to stop and reflect. I allowed myself to reconnect with a truth I had been, if not denying, then downplaying for too long. Exponential change is coming. It is inevitable. That fact needs to be addressed.
If you accept even a small part of this book’s central argument, the real question is what to actually do about it. Once we’ve acknowledged this reality, what will really make a difference? Faced with a dilemma like the one I’ve outlined in the first three parts of this book, what might containment, even in theory, look like?
In recent years I’ve had countless conversations about this question. I’ve discussed it with top AI researchers, with CEOs, with old friends, with policy makers in Washington, Beijing, and Brussels, with scientists and lawyers, with students in high schools, and with random people who’ll listen to me at the pub. Everyone immediately reaches for easy answers, and almost without exception the prescription is the same: regulation.
Here, it seems, is the answer, the way out of the dilemma, the key to containment, the savior of the nation-state and of civilization as we know it. Deft regulation, balancing the need for progress against sensible safety constraints, on national and supranational levels, spanning everything from tech giants and militaries to small university research groups and start-ups, tied up in a comprehensive, enforceable framework. We’ve done it before, so the argument goes; look at cars, planes, and medicines. Isn’t this how we manage and contain the coming wave?
If only it were that simple. Saying “Regulation!” in the face of awesome technological change is the easy part. It’s also the classic pessimism-averse answer, a simple way to shrug off the problem. On paper regulation looks enticing, even obvious and straightforward; suggesting it lets people sound smart, concerned, and even relieved. The unspoken implication is that the problem is solvable, and that it’s someone else’s to solve. Look deeper, though, and the fissures become evident.
In part 4 we’ll explore the many ways society can begin to face the dilemma, to shake off pessimism aversion and really grapple with the containment problem, to seek answers in a world where solving it must be possible. Before we do that, however, it’s vital to acknowledge a central truth: regulation alone is not enough. Convening a White House roundtable and delivering earnest speeches are easy; enacting effective legislation is a different proposition. As we’ve seen, governments face multiple crises independent of the coming wave—declining trust, entrenched inequality, polarized politics, to name a few. They’re overstretched, their workforces under-skilled and unprepared for the kinds of complex and fast-moving challenges that lie ahead.
While garage amateurs gain access to more powerful tools and tech companies spend billions on R&D, most politicians are trapped in a twenty-four-hour news cycle of sound bites and photo ops. When a government has devolved to the point of simply lurching from crisis to crisis, it has little breathing room for tackling tectonic forces requiring deep domain expertise and careful judgment on uncertain timescales. It’s easier to ignore these issues in favor of low-hanging fruit more likely to win votes in the next election.
Even technologists and researchers in areas like AI struggle with the pace of change. What chance, then, do regulators have, with fewer resources? How do they account for an age of hyper-evolution, for the pace and unpredictability of the coming wave?
Technology evolves week by week. Drafting and passing legislation takes years. Consider the arrival of a new product like the Ring doorbell, which put a camera on your front door and connected it to your phone. The product was adopted so quickly and is now so widespread that it has fundamentally changed the nature of what needs regulating; the average suburban street went from being a relatively private space to one that is surveilled and recorded. By the time the regulatory conversation caught up, Ring had already created an extensive network of cameras, amassing data and images from the front doors of people around the world. Twenty years on from the dawn of social media, there’s still no consistent approach to the emergence of a powerful new platform (and besides, is privacy, polarization, monopoly, foreign ownership, or mental health the core problem, or all of the above?). The coming wave will worsen this dynamic.
Discussions of technology sprawl across social media, blogs and newsletters, academic journals, countless conferences and seminars and workshops, their threads distant and increasingly lost in the noise. Everyone has a view, but it doesn’t add up to a coherent program. Talking about the ethics of machine learning systems is a world away from, say, the technical safety of synthetic bio. These discussions happen in isolated, echoey silos. They rarely break out.
Yet I believe these debates are all facets of the same phenomenon; each addresses a different aspect of the same wave. It’s not enough to have dozens of separate conversations about algorithmic bias or bio-risk or drone warfare or the economic impact of robotics or the privacy implications of quantum computing. Doing so completely underplays how interrelated both causes and effects are. We need an approach that unifies these disparate conversations, encapsulating all those different dimensions of risk, a general-purpose concept for this general-purpose revolution.
The price of scattered insights is failure, and we know what that looks like. Right now, scattered insights are all we’ve got: hundreds of distinct programs across distant parts of the technosphere, chipping away with well-meaning but ad hoc efforts, without an overarching plan or direction. At the highest level we need a clear and simple goal, a banner imperative integrating all the different efforts around technology into a coherent package. Not just tweaking this or that element, not just in this or that company or research group or even country, but everywhere, across all the fronts and risk zones and geographies at once. Whether it’s facing an emergent AGI or a strange but useful new life-form, the goal has to be unified: containment.
The central problem for humanity in the twenty-first century is how we can nurture sufficient legitimate political power and wisdom, adequate technical mastery, and robust norms to constrain technologies so that they continue to do far more good than harm. How, in other words, we can contain the seemingly uncontainable.
From the history of Homo technologicus to the reality of an era when technology pervades every aspect of life, the odds are stacked against us. But that doesn’t mean we shouldn’t try.
It’s not just governments, however: most organizations are ill suited to the complex challenges on the way. As we’ve seen, even wealthy nations can struggle in the face of an unfolding crisis. Going into 2020, the Global Health Security Index ranked the United States number one in the world and the U.K. not far behind in terms of pandemic readiness. Yet a catalog of disastrous decisions delivered mortality rates and financial costs materially worse than in peer countries like Canada and Germany. Despite what looked like excellent expertise, institutional depth, planning, and resources, even those best prepared on paper were sideswiped.
Governments should, on the face of it, be better primed for managing novel risks and technologies than ever before. National budgets for such things are generally at record levels. The truth, though, is that novel threats are exceptionally difficult for any government to navigate. That’s not a flaw in the idea of government; it’s an assessment of the scale of the challenge before us. Faced with something like an ACI that can pass my version of the Modern Turing Test, even the most thoughtful, farsighted bureaucracies will respond much as they did to COVID. Governments fight the last war, the last pandemic, regulate the last wave. Regulators regulate for things they can anticipate.
This, meanwhile, is an age of surprises.
Despite the headwinds, efforts to regulate frontier technologies are necessary and growing. The most ambitious legislation is probably the EU’s AI Act, first proposed in 2021. As of this writing in 2023, the act is going through the lengthy process of becoming European law. If it is enacted, AI research and deployment will be categorized on a risk-based scale. Technologies with “unacceptable risk” of causing direct harm will be prohibited. Where AI affects fundamental human rights or critical systems like basic infrastructure, public transport, health, or welfare, it will get classed as “high risk,” subjected to greater levels of oversight and accountability. High-risk AI must be “transparent, secure, subject to human control and properly documented.”
Yet the act, although one of the world’s most advanced, ambitious, and farsighted regulatory attempts to date, also demonstrates the inherent problems with regulation. It has been attacked from all sides, for going too far and not going far enough. Some argue it’s too focused on nascent, future-facing risks, trying to regulate something that doesn’t even exist; others that it’s not farsighted enough. Some believe it lets big tech companies off the hook, that they were instrumental in its drafting and watered down its provisions. Others think it overreaches and will chill research and innovation in the EU, hurting jobs and tax revenues.
Most regulation walks a tightrope of competing interests. But in few areas other than frontier technology must it tackle something so widely diffused, so critical to the economy, and yet so fast evolving. All the noise and confusion makes clear how hard and complex any form of regulation is, especially amid accelerating change, and how, because of that, it will almost certainly leave gaps, falling short of effective containment.
Regulating not just hyper-evolutionary but omni-use general-purpose technologies is incredibly challenging. Consider how motorized transport is regulated. There isn’t a single regulator, or even just a few laws. Instead, we have regulations around traffic, roads, parking, seatbelts, emissions, driver training, and so on. This comes not just from national legislatures but also from local governments, highway agencies, transport ministries issuing guidance, licensing bodies, offices of environmental standards. It relies not just on lawmakers but on police forces, traffic wardens, car companies, mechanics, city planners, and insurers.
Complex regulations refined over decades made roads and vehicles incrementally safer and more ordered, enabling their growth and spread. And yet 1.35 million people a year still die in traffic accidents. Regulation may lessen the negative effects, but it can’t erase bad outcomes like crashes, pollution, or sprawl. We have decided that this is an acceptable human cost, given the benefits. That “we” is crucial. Regulation doesn’t just rely on the passing of a new law. It is also about norms, structures of ownership, unwritten codes of compliance and honesty, arbitration procedures, contract enforcement, oversight mechanisms. All of this needs to be integrated and the public needs to buy in.
This takes time—time we don’t have. With the coming wave we don’t have half a century for numerous bodies to figure out what to do, for the right values and best practices to emerge. This time, regulation needs to get it right, and quickly. Nor is it clear how all this will be managed over such a broad spectrum of unprecedented technologies. When you regulate synthetic biology, are you regulating food, medicine, industrial tools, academic research, or all of them at once? Which bodies are responsible for what? How does it all fit together? Which actors are liable for which parts of the supply chain? The consequences of even one serious accident could be extreme, and yet even deciding which agency would be responsible is a minefield.
Above the cut and thrust of legislative debate, nations are also caught in a contradiction. On the one hand, they are in a strategic competition to accelerate the development of technologies like AI and synthetic biology. Every nation wants to be, and to be seen to be, at the technological frontier. It’s a matter of national pride, of national security, and an existential imperative. On the other hand, they’re desperate to regulate and manage these technologies—to contain them, not least for fear they will threaten the nation-state as the ultimate seat of power. The scary thing is that this assumes a best-case scenario of strong, reasonably competent, cohesive (liberal democratic) nation-states capable of working coherently as units internally and coordinating well internationally.
For containment to be possible, rules need to work well in places as diverse as the Netherlands and Nicaragua, New Zealand and Nigeria. Where someone slows down, others will rush forward. Every country already brings its distinct legal and cultural customs to the development of technology. The EU heavily restricts genetically modified organisms in the food supply. Yet in the United States genetically modified organisms are a routine part of agribusiness. China, on the face of it, is a regulatory leader of sorts. The government has issued multiple edicts on AI ethics, seeking to impose wide-ranging restrictions. It proactively banned various cryptocurrencies and DeFi initiatives, and limits the time children under eighteen can spend on games and social apps to ninety minutes a day during the week, three hours on the weekend. Draft regulation of recommendation algorithms and LLMs in China far exceeds anything we’ve yet seen in the West.
China is slamming on the brakes in some areas while also—as we’ve seen—charging ahead in others. Its regulation is matched by an unparalleled deployment of technology as a tool of authoritarian government power. Speak to Western defense and policy insiders and they’re adamant that although China talks a good game on AI ethics and limitations, when it comes to national security, there are no meaningful barriers. In effect, Chinese AI policy has two tracks: a regulated civilian path and a freewheeling military-industrial one.
Unless regulation can address the deep-seated nature of the incentives outlined in part 2, it won’t be enough to contain technology. It doesn’t stop motivated bad actors or accidents. It doesn’t cut to the heart of an open and unpredictable research system. It doesn’t provide alternatives given the immense financial rewards on offer. And above all, it doesn’t mitigate strategic necessity. It doesn’t describe how countries might coordinate on an enticing, hard-to-define transnational phenomenon, building a delicate critical mass of alliances, especially in a context where international treaties all too often fail. There is an unbridgeable gulf between the desire to rein in the coming wave and the desire to shape and own it, between the need for protections against technologies and the need for protections against others. Advantage and control point in opposing directions.
The reality is that containment is not something that a government, or even a group of governments, can do alone. It requires innovation and boldness in partnering between the public and the private sectors and a completely new set of incentives for all parties. Regulations like the EU AI Act do at least hint at a world where containment is on the map, one where leading governments take the risks of proliferation seriously, demonstrating new levels of commitment and willingness to make serious sacrifices.
Regulation is not enough, but at least it’s a start. Bold steps. A real understanding of the stakes involved in the coming wave. In a world where containment seems impossible, all of this gestures toward a future where it might be possible.
Does any entity have the power to prevent mass proliferation while capturing the immense power and benefits arising from the coming wave? To stop bad actors acquiring a technology, or shape the spread of nascent ideas around it? As autonomy increases, can anyone or anything really hope to have meaningful control at the macro level? Containment means answering yes to questions like these. In theory, contained technology gets us out of the dilemma. It means at once harnessing and controlling the wave, a vital tool for building sustainable and flourishing societies, while checking it in ways that avoid serious catastrophe, but not so invasively as to invite dystopia. It means writing a new kind of grand bargain.
Earlier in the book I described containment as a foundation for controlling and governing technology, spanning technical, cultural, and regulatory aspects. At root, I believe this means having the power to drastically curtail or outright stop technology’s negative impacts, from the local and small scale up to the planetary and existential. Encompassing hard enforcement against misuse of proliferated technologies, it also steers the development, direction, and governance of nascent technologies. Contained technology is technology whose modes of failure are known, managed, and mitigated, a situation where the means to shape and govern technology escalate in parallel with its capabilities.
It’s tempting to think of containment in an obvious, literal sense, a kind of magic box in which a given technology can be sealed away. At the outer limit—in the case of rogue malware or pathogens—such drastic steps might be needed. Generally, though, consider containment more as a set of guardrails, a way to keep humanity in the driver’s seat when a technology risks causing more harm than good. Picture those guardrails operating at different levels and with different modes of implementation. In the next chapter we’ll consider what they might look like at a more granular level, from AI alignment research to lab design, international treaties to best practice protocols. For now, the key point is that those guardrails need to be strong enough that, in theory, they could stop a runaway catastrophe.
Containment will need to respond to the nature of a technology, and channel it in directions that are easier to control. Recall the four features of the coming wave: asymmetry, hyper-evolution, omni-use, and autonomy. Each feature must be viewed through the lens of containability. Before outlining a strategy, it’s worth asking the following kinds of questions to surface promising avenues:
Is the technology omni-use and general-purpose or specific? A nuclear weapon is a highly specific technology with one purpose, whereas a computer is inherently multi-use. The more potential use cases, the more difficult to contain. Rather than general systems, then, those that are more narrowly scoped and domain specific should be encouraged.
Is the tech moving away from atoms toward bits? The more dematerialized a technology, the more it is subject to hard-to-control hyper-evolutionary effects. Areas like materials design or drug development are going to rapidly accelerate, making the pace of progress harder to track.
Are price and complexity coming down, and if so how fast? The price of fighter jets has not come down in the way the price of transistors or consumer hardware has. A threat originating in basic computing is broader in nature than one posed by fighter jets, despite the latter’s obvious destructive potential.
Are there viable alternatives ready to go? CFCs could be banned partly because cheaper and safer alternatives for refrigeration already existed. The more safe alternatives are available, the easier it is to phase out a harmful technology.
Does the technology enable asymmetric impact? Think of a drone swarm overwhelming a conventional military, or a tiny computer virus or biological virus damaging vital social systems. Some technologies carry a far greater risk of surprising us and exploiting vulnerabilities.
Does it have autonomous characteristics? Is there scope for self-learning, or operation without oversight? Think gene drives, viruses, malware, and of course robotics. The more a technology by design requires human intervention, the less chance there is of losing control.
Does it confer outsized geopolitical strategic advantage? Chemical weapons, for example, have limited advantages and lots of downsides, whereas getting ahead in AI or bio has enormous upsides, both economic and military. Saying no is consequently harder.
Does it favor offense or defense? In World War II the development of missiles like the V-2 helped offensive operations. But a technology like radar bolstered defense. Orienting development toward defense over offense tends toward containment.
Are there resource or engineering constraints on its invention, development, and deployment? Silicon chips require specialized and highly concentrated materials, machines, and knowledge. The talent available for a synthetic biology start-up is, in global terms, still quite small. Both help containment in the near term.
Where additional friction keeps things in the tangible world of atoms, or makes things expensive, or where safer alternatives are readily available, there is more chance of containment, because it is easier to slow the technologies down, limit access, or drop them altogether. Specific technologies are easier to regulate than omni-use technologies, but regulating omni-use is more important. Likewise, the more potential for offensive action or autonomy, the greater the requirement for containment. If price and complexity keep a technology out of reach for most, proliferation becomes more difficult. Ask questions like these, and a holistic vision of containment begins to emerge, as the rough sketch below illustrates.
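Purely as an illustration, here is a minimal sketch of how these questions might be composed into a crude containability checklist. The feature names, weights, and example scores are hypothetical assumptions for this sketch, not a method proposed in this book; the point is only that the answers add up to a single, comparable picture of how hard a given technology will be to contain.

```python
# Purely illustrative: a toy "containability checklist" scoring a technology
# against the questions above. All feature names and weights are hypothetical.
from dataclasses import dataclass


@dataclass
class TechProfile:
    omni_use: bool              # general-purpose rather than narrowly scoped?
    dematerialized: bool        # bits rather than atoms, so hyper-evolution applies?
    cost_falling_fast: bool     # price and complexity dropping rapidly?
    safe_alternatives: bool     # viable substitutes ready to go?
    asymmetric_impact: bool     # can small actors cause outsized harm?
    autonomous: bool            # can it learn or operate without human oversight?
    strategic_advantage: bool   # does getting ahead confer major geopolitical gains?
    favors_offense: bool        # offense-dominant rather than defense-dominant?
    resource_constrained: bool  # choke points in materials, machines, or talent?


def containment_difficulty(tech: TechProfile) -> int:
    """Crude 0-9 score: higher means harder to contain."""
    return sum([
        tech.omni_use,
        tech.dematerialized,
        tech.cost_falling_fast,
        not tech.safe_alternatives,     # lacking substitutes makes phase-out harder
        tech.asymmetric_impact,
        tech.autonomous,
        tech.strategic_advantage,
        tech.favors_offense,
        not tech.resource_constrained,  # choke points make containment easier
    ])


# Impressionistic example profiles, not definitive assessments.
nuclear_weapon = TechProfile(
    omni_use=False, dematerialized=False, cost_falling_fast=False,
    safe_alternatives=False, asymmetric_impact=True, autonomous=False,
    strategic_advantage=True, favors_offense=True, resource_constrained=True,
)
frontier_ai = TechProfile(
    omni_use=True, dematerialized=True, cost_falling_fast=True,
    safe_alternatives=False, asymmetric_impact=True, autonomous=True,
    strategic_advantage=True, favors_offense=True, resource_constrained=True,
)

print(containment_difficulty(nuclear_weapon))  # 4: specific, costly, tightly constrained
print(containment_difficulty(frontier_ai))     # 8: omni-use, fast-moving, autonomous
```

On this toy scale a nuclear weapon, narrowly scoped and resource constrained, scores lower than a hypothetical frontier AI system, which is omni-use, dematerialized, and increasingly autonomous; that gap is the intuition the questions above are meant to make explicit.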
I’ve worked on this issue for the best part of fifteen years. Over that time I have felt the sheer force of what’s described in this book, of those incentives, and of the urgent need for answers even as the contours of the dilemma became ever clearer. And yet even I have been taken aback at what technology has made possible in a few short years. I’ve struggled with these ideas, watching as the pace of development keeps picking up.
The reality is, we have often not controlled or contained technologies in the past. And if we want to do so now, it would take something dramatically new, an all-encompassing program of safety, ethics, regulation, and control that doesn’t yet even have a name and, at first glance, doesn’t seem possible at all.
The dilemma should be a pressing call to action. But over the years it’s become obvious that most people find this a lot to take in. I absolutely get it. It barely seems real on first encounter. In all those many discussions about AI and regulation, I’ve been struck by how hard it is, compared with a host of existing or looming challenges, to convey exactly why the risks in this book need to be taken seriously, why they aren’t just nearly irrelevant tail risks or the province of science fiction.
One challenge in even beginning to have this conversation is that technology, in the popular imagination, has become associated with a narrow band of often superfluous applications. “Technology” now mostly means social media platforms and wearable gadgets to measure our steps and heart rate. It’s easy to forget that technology includes the irrigation systems essential to feeding the planet and newborn life-support machines. Technology isn’t just a way to store your selfies; it represents access to the world’s accumulated culture and wisdom. Technology is not a niche; it is a hyper-object dominating human existence.
A useful comparison here is climate change. It too deals with risks that are often diffuse, uncertain, temporally distant, happening elsewhere, lacking the salience, adrenaline, and immediacy of an ambush on the savanna—the kind of risk we are well primed to respond to. Psychologically, none of this feels present. Our prehistoric brains are generally hopeless at dealing with amorphous threats like these.
However, over the last decade or so, the challenge of climate change has come into better focus. Although the world still spews out increasing amounts of CO2, scientists everywhere can measure CO2 parts per million (ppm) in the atmosphere. As recently as the 1970s, global atmospheric carbon was in the low 300s ppm. By 2022 it had reached 420 ppm. Whether in Beijing, Berlin, or Burundi, whether an oil major or a family farm, everyone can see, objectively, what is happening to the climate. Data brings clarity.
Pessimism aversion is much harder when the effects are so nakedly quantifiable. Like climate change, technological risk can only be addressed at planetary scale, but there is no equivalent clarity. There’s no handy metric of risk, no objective unit of threat shared in national capitals, boardrooms, and public sentiment, no parts per million for measuring what technology might do or where it is. There’s no commonly agreed on or obvious standard we can check year by year. No consensus among scientists and technologists on the cutting edge. No popular movement behind stopping it, no graphic images of melting icebergs and stranded polar bears or flooded villages to raise awareness. Obscure research published on arXiv, in cult Substack blogs, or in dry think tank white papers hardly cuts it here.
How do we find common ground amid competing agendas? China and the United States don’t share a vision of restricting development of AI; Meta wouldn’t share the view that social media is part of the problem; AI researchers and virologists believe their work is a critical part not of causing catastrophe but of understanding and averting it. “Technology” is not, on the face of it, a problem in the same sense as a heating planet.
And yet it might be.
The first step is recognition. We need to calmly acknowledge that the wave is coming and the dilemma is, absent a jarring change in course, unavoidable. Either we can grapple with the vast array of good and bad outcomes ignited by our continued openness and heedless chase, or we can confront the dystopian and authoritarian risks arising from our attempts to limit proliferation of powerful technologies, risks moreover inherent in concentrated ownership of those same technologies.
Pick your poison. Ultimately, this balance has to be struck in consultation with everyone. The more it’s on the public’s radar, the better. If this book prompts criticisms, arguments, proposals, and counterproposals, so much the better.
There will be no single, magic fix from a roomful of smart people in a bunker somewhere. Quite the opposite. Current elites are so invested in their pessimism aversion that they are afraid to be honest about the dangers we face. They’re happy to opine and debate in private, less so to come out and talk about it. They are used to a world of control and order: the control of a CEO over a company, of a central banker over interest rates, of a bureaucrat over military procurement, or of a town planner over which potholes to fix. Their levers of control are imperfect, sure, but they are known, tried, and tested and they generally work. Not so here.
This is a unique moment. The coming wave really is coming, but it hasn’t washed over us yet. While unstoppable incentives are locked in, the wave’s final form, the precise contours of the dilemma, are still to be decided. Let’s not waste decades waiting to find out. Let’s get started on managing it today.
In the next chapter, I outline ten areas of focus. This is not a complete map, not remotely a set of final answers, but necessary groundwork. My intent is to seed ideas in the hopes of taking the crucial first steps toward containment. What unifies these ideas is that they are all about marginal gains, the slow and constant aggregation of small efforts to produce a greater probability of good outcomes. They are about creating a different context for how technology is built and deployed: finding ways of buying time, slowing down, giving space for more work on the answers, bringing attention, building alliances, furthering technical work.
Containment of the coming wave is, I believe, not possible in our current world. What these steps might do, however, is change the underlying conditions. Nudge forward the status quo so containment has a chance. We should do all this with the knowledge that it might fail but that it is our best shot at building a world where containment—and human flourishing—are possible.
There are no guarantees here, no rabbits pulled out of hats. Anyone hoping for a quick fix, a smart answer, is going to be disappointed. Approaching the dilemma, we are left in the same all-too-human position as always: giving it everything and hoping it works out. Here’s how I think it might—just might—come together.