Soon after the Russian invasion of Ukraine began on February 24, 2022, residents of Kyiv knew they were in a fight for survival. Across the border in Belarus, a colossal massing of Russian troops, armor, and matériel had been building for months. Then, at the outset of the invasion, Russian forces readied for a major push on what was still, at this stage, their primary goal: capturing Ukraine’s capital and overthrowing its government.
The centerpiece of this concentration of force was a column of trucks, tanks, and heavy artillery some forty kilometers long—a ground offensive on a scale not seen in Europe since World War II. It began moving toward the city. On paper the Ukrainians were hopelessly outmatched. Kyiv seemed to be days, maybe hours, from falling.
But that didn’t happen. Instead, that evening a unit of about thirty Ukrainian soldiers wearing night-vision goggles rode quad bikes through the forests around the capital. They dismounted near the column’s head and launched jerry-rigged drones equipped with small explosives, taking out a handful of lead vehicles. The disabled vehicles clogged the central road, and the surrounding fields were muddy and impassable. The column, facing freezing weather and faltering supply lines, ground to a halt. The same small unit of drone operators then managed to blow up a critical supply base using the same tactics, depriving the Russian army of fuel and food.
From here the Battle of Kyiv turned. The greatest buildup of conventional military muscle in a generation was humbled, sent back to Belarus in embarrassing disarray. This semi-improvised Ukrainian militia was called Aerorozvidka. A ragtag volunteer band of drone hobbyists, software engineers, management consultants, and soldiers, they were amateurs, designing, building, and modifying their own drones in real time, much like a start-up. Much of their equipment was crowdsourced and crowdfunded.
The Ukrainian resistance made good use of coming-wave technologies and demonstrated how they can undermine a conventional military calculus. Cutting-edge satellite internet from SpaceX’s Starlink was integral to maintaining connectivity. A thousand-strong group of nonmilitary elite programmers and computer scientists banded together in an organization called Delta to bring advanced AI and robotics capabilities to the army, using machine learning to identify targets, monitor Russian tactics, and even suggest strategies.
In the early days of the war, the Ukrainian army was constantly short of ammunition. Every strike counted. Accuracy was a matter of survival. Delta’s ability to create machine learning systems to spot camouflaged targets and help guide munitions was critical. A precision missile in a conventional military costs hundreds of thousands of dollars; with AI, consumer-grade drones, custom software, and 3-D printed parts, something similar has now been battle-tested in Ukraine at a cost of around $15,000. Alongside the initial Aerorozvidka efforts, the United States supplied Ukraine with hundreds of Switchblade loitering munitions, drones that wait around a target until an optimal moment to strike.
Drones and AI played a small but important part in the early days of the conflict in Ukraine, new technologies with a pronounced asymmetric potential that closed some of the gap with a much larger aggressor. American, British, and European forces provided just under €100 billion of military aid in the first months, including a massive amount of conventional firepower, which, to be clear, undoubtedly had a decisive impact. However, this was still a landmark conflict because it demonstrated how quickly a relatively untrained fighting force could assemble and arm itself using relatively affordable technologies available in the consumer market. When technology confers a cost and tactical advantage like this, it will inevitably proliferate and be taken up by all sides.
Drones provide us with a glimpse of what’s in store for the future of warfare. They are a reality that planners and combatants deal with on a daily basis. The real question is what this means for conflict when production costs fall by another order of magnitude and capabilities multiply. Conventional militaries and governments are already struggling to contain them. What comes next will be much harder to contain.
As we saw in part 1, technologies from X-ray machines to AK-47s have always proliferated, with broad consequences. The coming wave is, however, characterized by a set of four intrinsic features compounding the problem of containment. First among them is the primary lesson of this section: hugely asymmetric impact. You don’t need to hit like with like, mass with mass; instead, new technologies create previously unthinkable vulnerabilities and pressure points against seemingly dominant powers.
Second, they are developing fast, a kind of hyper-evolution, iterating, improving, and branching into new areas at incredible speed. Third, they are often omni-use; that is, they can be used for many different purposes. And fourth, they increasingly have a degree of autonomy beyond any previous technology.
These features define the wave. Understanding them is vital to identifying the benefits and risks that arise from these technologies; together they escalate containment and control to a new plane of difficulty and danger.
Emerging technologies have always created new threats, redistributed power, and removed barriers to entry. Cannons meant a small force could destroy castles and level armies. A few colonial soldiers with advanced weapons could massacre thousands of indigenous people. The printing press meant a single workshop might produce thousands of pamphlets—spreading ideas with an ease that medieval monks copying books by hand could scarcely fathom. Steam power enabled single factories to do the work of entire towns. The internet took this capacity to a new peak: a single tweet or image might travel the world in minutes or seconds; a single algorithm could help a small start-up to grow into a vast, globe-spanning corporation.
Now this effect is again sharpened. This new wave of technology has unlocked powerful capabilities that are cheap, easy to access and use, targeted, and scalable. This clearly brings risks. It won’t just be Ukrainian soldiers using weaponized drones. It will be anyone who wants to. In the words of the security expert Audrey Kurth Cronin, “Never before have so many had access to such advanced technologies capable of inflicting death and mayhem.”
In the skirmishes outside Kyiv, the drones were hobbyist toys. The Shenzhen-based company DJI builds cheap and widely accessible products like its flagship $1,399 Phantom camera quadcopter, a drone so good it has been used by the U.S. military. If you combine advances in AI and autonomy, cheap but effective UAVs, and further progress in areas from robotics to computer vision, then you have potent, precise, and potentially untraceable weaponry. Combating attacks is difficult and expensive; both American and Israeli forces have used $3 million Patriot missiles to shoot down drones worth a couple hundred dollars. Jammers, interceptor missiles, and counter-drone systems are all still nascent and not always battle-tested.
These developments represent a colossal transfer of power away from traditional states and militaries toward anyone with the capacity, and motivation, to deploy these devices. There is no obvious reason why a single operator, with enough wherewithal, could not control a swarm of thousands of drones.
A single AI program can write as much text as all of humanity. A single two-gigabyte image-generation model running on your laptop can compress all the pictures on the open web into a tool that generates images with extraordinary creativity and precision. A single pathogenic experiment could spark a pandemic, a tiny molecular event with global ramifications. One viable quantum computer could render much of the world’s existing encryption infrastructure redundant. Prospects for asymmetric impact are growing all around, and in the positive sense, too: single systems can deliver huge benefits as well.
Asymmetry cuts both ways. The very scale and interconnectedness of the coming wave create new systemic vulnerabilities: one point of failure can quickly cascade around the world. The less localized a technology, the less easily it can be contained—and vice versa. Think about the risks involved with cars. Traffic accidents are as old as traffic, but over time the damage was minimized. Everything from road markings to seatbelts to traffic police helped. Although the motorcar was one of history’s fastest-proliferating and most globalized technologies, accidents were inherently local, discrete events whose ultimate damage was contained. But now a fleet of vehicles might be networked together, or a single system could control autonomous vehicles throughout a territory. However many safeguards and security protocols are in place, the scale of impact is far wider than anything we’ve seen before.
AI creates asymmetric risks beyond those of a bad batch of food, a plane accident, or a faulty product. Its risks extend to entire societies, making it not so much a blunt tool as a lever with global consequences. Just as globalized and highly connected markets transmit contagion in a financial crisis, so it is with technology. Network scale makes containing damage, if or when it comes, almost impossible. Interlinked global systems are containment nightmares. And we already live in an age of interlinked global systems. In the coming wave a single point—a given program, a genetic change—can alter everything.
If you want to contain technology, you might hope it develops at a manageable pace, giving society time and space to understand and adapt to it. Cars are again a good example. Their development over the last century was incredibly fast but also provided time for introducing all sorts of safety standards. There was always a lag, but the standards could still catch up. However, with the rate of change in the coming wave, that looks unlikely.
Over the last forty years, the internet grew into one of the most fruitful innovation platforms in history. The world digitized, and this dematerialized realm evolved at a bewildering pace. An explosion of development saw the world’s most widely used services and the largest commercial enterprises in history spring up in just a few years. All of this was underwritten by the ever-increasing power and ever-falling cost of computation we saw in chapter 2. Consider what Moore’s law alone will deliver over the next decade. Should it hold, in ten years a dollar will buy you a hundred times the compute of today. That fact alone suggests some extraordinary outcomes.
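To make the arithmetic behind that claim explicit: if price-performance doubles every eighteen months (an assumed doubling period, chosen here because it is the variant of Moore’s law consistent with the hundredfold figure; Moore’s original observation concerned transistor counts doubling roughly every two years), the gain over a decade compounds as follows.

```latex
% Assumed doubling period T = 18 months; horizon t = 120 months (10 years).
\[
\frac{\text{compute per dollar in ten years}}{\text{compute per dollar today}}
  \;=\; 2^{\,t/T} \;=\; 2^{120/18} \;\approx\; 2^{6.7} \;\approx\; 100
\]
```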
The flip side is that innovation beyond the digital realm was often less spectacular. Outside the weightless world of code, a growing chorus began to wonder what had happened to the kind of broad-based innovation seen, for example, in the late nineteenth century or the middle of the twentieth. During those periods, almost every aspect of the world—from transport to factories, powered flight to new materials—changed radically. But by the early years of the twenty-first century, innovation followed the path of least resistance, concentrating on bits rather than atoms.
That’s now shifting. Software’s hyper-evolution is spreading. The next forty years will see both the world of atoms rendered into bits at new levels of complexity and fidelity and, crucially, the world of bits rendered back into tangible atoms with a speed and ease unthinkable until recently.
Put simply, innovation in the “real world” could start moving at a digital pace, in near-real time, with reduced friction and fewer dependencies. You will be able to experiment in small, speedy, malleable domains, creating near-perfect simulations, and then translate them into concrete products. And then do it again, and again, learning, evolving, and improving at rates previously impossible in the expensive, static world of atoms.
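A minimal sketch of what that loop looks like in code, with a toy stand-in for the simulator and an entirely hypothetical “design” (nothing here corresponds to a real product or tool): propose a variant, score it in simulation, keep any improvement, and only fabricate the final candidate.

```python
import random

def simulate(design):
    """Toy stand-in for a high-fidelity simulator (hypothetical).

    Returns a score; a real system would run physics, chemistry,
    or biology models here."""
    target = {"length": 4.2, "mass": 1.3}
    return -sum((design[k] - target[k]) ** 2 for k in target)

def mutate(design):
    """Randomly perturb one design parameter."""
    new = dict(design)
    key = random.choice(list(new))
    new[key] += random.gauss(0, 0.1)
    return new

# The digital-pace loop: thousands of simulated iterations before
# a single physical prototype (e.g., a 3-D print) is ever made.
design = {"length": 1.0, "mass": 1.0}
best = simulate(design)
for _ in range(10_000):
    candidate = mutate(design)
    score = simulate(candidate)
    if score > best:
        design, best = candidate, score

print("design to fabricate:", design)
```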
The physicist César Hidalgo argues that configurations of matter are significant because of the information they contain. A Ferrari is valuable not because of its raw matter but rather for the complex information stored in its intricate construction and form; the information characterizing the arrangement of its atoms is what makes it a desirable car. The more powerful the computational base, the more tractable this becomes. Couple that with AI and manufacturing techniques like sophisticated robotics and 3-D printing, and we can design, manipulate, and manufacture real-world products with greater speed, precision, and inventiveness.
AI already helps find new materials and chemical compounds. For example, scientists have used neural networks to produce new configurations of lithium, with big implications for battery technology. AI has helped design and build a car using 3-D printers. In some cases the final outcome looks bizarrely different from anything designed by a human, resembling the undulating and efficient forms found in nature. Configurations of wiring and ducting are organically melded into the chassis for optimal use of space. Parts are too complex to build using conventional tooling and have to be 3-D printed.
In chapter 5, we saw what tools like AlphaFold are doing to catalyze biotech. Until recently biotech relied on endless manual lab work: measuring, pipetting, carefully preparing samples. Now simulations speed up the process of vaccine discovery. Computational tools help automate parts of the design process, re-creating the “biological circuits” that program complex functions into cells—bacteria engineered to produce a certain protein, for example. Software frameworks like Cello function almost as open-source programming languages for synthetic biology design. This could mesh with fast-moving improvements in laboratory robotics and automation and with faster biological techniques like the enzymatic synthesis we saw in chapter 5, expanding synthetic biology’s range and making it more accessible. Biological evolution is becoming subject to the same iterative cycles as software.
Just as today’s models produce detailed images based on a few words, so in decades to come similar models will produce a novel compound or indeed an entire organism from just a few natural language prompts. That compound’s design could be improved by countless self-run trials, just as AlphaZero became an expert chess or Go player through self-play. Quantum technologies, for certain problems many millions of times more powerful than the most powerful classical computers, could let this play out at a molecular level. This is what we mean by hyper-evolution—a fast, iterative platform for creation.
Nor will this evolution be limited to specific, predictable, and readily containable areas. It will be everywhere.
Defying conventional wisdom, health care was one of the areas where progress slowed amid the recent stagnation of innovation in the realm of atoms. Discovering new drugs became harder and more expensive. Life expectancy leveled off and even started to decline in some U.S. states. Progress on conditions like Alzheimer’s failed to live up to expectations.
One of the most promising areas of AI, and a way out of this grim picture, is automated drug discovery. AI techniques can search through the vast space of possible molecules for elusive but helpful treatments. In 2020 an AI system sifted through 100 million molecules to identify the first machine-learning-derived antibiotic—called halicin (yes, after HAL from 2001: A Space Odyssey)—which can potentially help fight tuberculosis. Start-ups like Exscientia, alongside traditional pharmaceutical giants like Sanofi, have made AI a driver of medical research. To date eighteen clinical assets have been derived with the help of AI tools.
There’s a flip side. Researchers looking for these helpful compounds raised an awkward question. What if you redirected the discovery process? What if, instead of looking for cures, you looked for killers? They ran a test, asking their molecule-generating AI to find poisons. In six hours it identified more than forty thousand molecules with toxicity comparable to the most dangerous chemical weapons, like Novichok. It turns out that in drug discovery, one of the areas where AI will undoubtedly make the clearest difference, the opportunities are very much “dual use.”
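In the abstract, the inversion is chillingly small. A schematic sketch (every function here is a hypothetical stand-in; the “molecules” are just random vectors, and no real chemistry is involved) shows that the same generate-and-screen pipeline serves both ends, and the only change is the sign of the objective:

```python
import random

def generate_candidates(n):
    """Stand-in for a generative molecular model (hypothetical):
    each 'molecule' is just a random feature vector."""
    return [[random.random() for _ in range(8)] for _ in range(n)]

def predicted_toxicity(molecule):
    """Stand-in for a learned toxicity predictor (hypothetical)."""
    return sum(molecule) / len(molecule)

def screen(candidates, objective):
    """Rank candidates by an objective; the pipeline is indifferent
    to what that objective rewards."""
    return sorted(candidates, key=objective)

candidates = generate_candidates(100_000)

# Therapeutic search: rank the least toxic candidates first.
safest = screen(candidates, predicted_toxicity)[:10]

# The dual-use inversion: one sign flip, same machinery.
most_toxic = screen(candidates, lambda m: -predicted_toxicity(m))[:10]
```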
Dual-use technologies are those with both civilian and military applications. Around the time of World War I, the process of synthesizing ammonia was seen as a way of feeding the world. But it also allowed for the creation of explosives and helped pave the way for chemical weapons. Complex electronics systems for passenger aircraft can be repurposed for precision missiles. Conversely, the Global Positioning System was originally a military system but now has countless everyday consumer uses. At launch, the PlayStation 2 was regarded by the U.S. Department of Defense as so powerful that it could potentially help hostile militaries usually denied access to such hardware. Dual-use technologies are both helpful and potentially destructive, tools and weapons. What the concept captures is how technologies tend toward the general, and how a certain class of technologies comes with heightened risk as a result. They can be put toward many ends—good, bad, everywhere in between—often with difficult-to-predict consequences.
But the real problem is that it’s not just frontier biology or nuclear reactors that are dual use. Most technologies have military and civilian applications or potential; most technologies are in some way dual use. And the more powerful the technology, the more concern there should be about how many uses it might have.
Technologies of the coming wave are highly powerful precisely because they are fundamentally general. If you’re building a nuclear warhead, it’s obvious what it’s for. But a deep learning system might be designed for playing games yet prove capable of flying a fleet of bombers. The difference is not a priori obvious.
A more appropriate term for the technologies of the coming wave is “omni-use,” a concept that grasps at the sheer levels of generality, the extreme versatility on display. Omni-use technologies like steam or electricity have wider societal effects and spillovers than narrower technologies. If AI is indeed the new electricity, then like electricity it will be an on-demand utility that permeates and powers almost every aspect of daily life, society, the economy: a general-purpose technology embedded everywhere. Containing something like this is always going to be much harder than containing a constrained, single-task technology, stuck in a tiny niche with few dependencies.
AI systems started out using general techniques like deep learning for specific purposes like managing energy use at a data center or playing Go. That is changing. Now single systems like DeepMind’s generalist Gato can capably perform more than six hundred different tasks. The same network can play Atari games, caption images, answer questions, and stack blocks with a real robot arm. Gato is trained not only with text but also with images, torques acting on robotic arms, button presses from computer game playing, and so on. It’s still very early days, and truly general systems remain some way off, but at some point these capabilities will expand to many thousands of activities.
Consider synthetic biology, too, through the omni-use prism. Engineering life is a completely general technique whose potential uses are near limitless; it might create material for construction, tackle disease, and store data. More is more, and there is a good reason for this. Omni-use technologies are more valuable than narrow ones. Nowadays, technologists don’t want to design limited, specific, mono-functional applications. Instead, the goal is to design things more like smartphones: phones, yes, but more important, devices for taking pictures, keeping fit, playing games, navigating cities, sending emails, and so on.
Over time, technology tends toward generality. What this means is that weaponizable or harmful uses of the coming wave will be possible regardless of whether this was intended. Simply creating civilian technologies has national security ramifications. Anticipating the full spectrum of use cases in history’s most omni-use wave is harder than ever.
The notion of a new technology being adapted for multiple uses isn’t new. A simple tool like a knife can chop onions or enable a deranged killing spree. Even seemingly specific technologies have dual-use implications: the microphone enabled both the Nuremberg rallies and the Beatles. What’s different about the coming wave is how quickly it is being embedded, how globally it spreads, how easily it can be componentized into swappable parts, and just how powerful and above all broad its applications could be. It unfurls complex implications for everything from media to mental health, markets to medicine. This is the containment problem supersized. After all, we’re talking about fundamentals like intelligence and life. But both those properties have a feature even more interesting than their generality.
Technological evolution has been speeding up for centuries. Omni-use features and asymmetric impacts are magnified in the coming wave, but to some extent they’re inherent properties of all technology. That isn’t the case for autonomy. For all of history technology has been “just” a tool, but what if the tool comes to life?
Autonomous systems are able to interact with their surroundings and take actions without the immediate approval of humans. For centuries the idea that technology is somehow running out of control, a self-directed and self-propelling force beyond the realms of human agency, remained a fiction.
Not anymore.
Technology has always been about allowing us to do more, but crucially with humans still doing the doing. It has leveraged our existing abilities and automated precisely codified tasks. Until now, constant oversight and management have been the default. Technology remained to greater or lesser degrees under meaningful human control. Full autonomy is qualitatively different.
Take autonomous vehicles. In certain conditions today, they can drive on roads with minimal or no direct input from the driver. Researchers in the field categorize autonomy from level 0, no autonomy whatsoever, to level 5, where a vehicle can drive itself under all conditions and the driver simply inputs a destination and then can fall happily asleep. You won’t find level 5 vehicles on the roads anytime soon, not least for legal and insurance reasons.
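The taxonomy the text describes is the SAE’s standard six-level scale; a compact sketch, with the level definitions paraphrased in comments:

```python
from enum import IntEnum

class DrivingAutonomy(IntEnum):
    """The six SAE J3016 driving-autonomy levels, paraphrased."""
    NO_AUTOMATION = 0   # a human does all the driving
    ASSISTANCE = 1      # one function assisted, e.g. adaptive cruise
    PARTIAL = 2         # steering and speed automated; human monitors
    CONDITIONAL = 3     # self-driving in defined conditions; the human
                        # must take over when asked
    HIGH = 4            # no human fallback needed within its domain
    FULL = 5            # drives anywhere, in any conditions

def human_must_supervise(level: DrivingAutonomy) -> bool:
    """Below level 3, a human must monitor continuously."""
    return level < DrivingAutonomy.CONDITIONAL
```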
The new wave of autonomy heralds a world where constant intervention and oversight are increasingly unnecessary. What’s more, with every interaction we are teaching machines to be successfully autonomous. In this paradigm, there is no need for a human to laboriously define the manner in which a task should take place. Instead, we just specify a high-level goal and rely on a machine to figure out the optimal way of getting there. Keeping humans “in the loop,” as the saying goes, is desirable, but optional.
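Reinforcement learning makes this shift concrete: the programmer specifies only a reward, never the route. A minimal, self-contained sketch (a toy one-dimensional world with tabular Q-learning; every detail is illustrative):

```python
import random

# Toy world: states 0..9, goal at state 9. The human specifies only
# the goal (a reward for reaching state 9), not how to get there.
N_STATES, GOAL = 10, 9
ACTIONS = (-1, +1)  # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
ALPHA, GAMMA = 0.5, 0.9

for episode in range(300):
    s = 0
    epsilon = max(0.05, 1.0 - episode / 100)  # explore a lot at first
    for _ in range(200):
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda x: Q[(s, x)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # The update: the system improves its own value estimates
        # from experience; no route was ever hand-coded.
        target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

# The learned policy: step right (+1) from every non-goal state.
print([max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)])
```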
Nobody told AlphaGo that move 37 was a good idea. It discovered this insight largely on its own. It was precisely this feature that struck me so forcefully watching DQN play Breakout. Given some clearly specified objective, systems now exist that can find their own strategies to be effective. AlphaGo and DQN were not in themselves autonomous, but they hint at what a self-improving system might look like. Nobody hand codes GPT-4 to write like Jane Austen, or produce an original haiku, or generate marketing copy for a website selling bicycles. These features are emergent effects of a wider architecture whose outputs are never decided in advance by its designers. This is the first step on the ladder toward greater and greater autonomy. Internal research on GPT-4 concluded that it was “probably” not capable of acting autonomously or self-replicating, but within days of launch users had found ways of getting the system to ask for its own documentation and to write scripts for copying itself and taking over other machines. Early research even claimed to find “sparks of AGI” in the model, adding that it was “strikingly close to human-level performance.” Greater degrees of autonomy, in other words, are now coming into view.
New forms of autonomy have the potential to produce a set of novel, hard-to-predict effects. Forecasting how bespoke genomes will behave is incredibly difficult. Moreover, once researchers make germ-line gene changes to a species, those changes could be out in live beings for millennia, reverberating down countless generations. How they go on to evolve or interact with other changes at these distances is inevitably unclear—and beyond control or prediction. Synthetic organisms are literally taking on a life of their own.
We humans face a singular challenge: Will new inventions be beyond our grasp? Previously creators could explain how something worked and why it did what it did, even if this required vast detail. That’s increasingly no longer true. Many technologies and systems are becoming so complex that they’re beyond the capacity of any one individual to truly understand: quantum computing and other technologies operate toward the limits of what can be known.
A paradox of the coming wave is that its technologies are largely beyond our ability to comprehend at a granular level yet still within our ability to create and use. In AI, the neural networks moving toward autonomy are, at present, not explainable. You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. Engineers can’t peer beneath the hood and easily explain what caused something to happen. GPT-4, AlphaGo, and the rest are black boxes, their outputs and decisions based on opaque and intricate chains of minute signals. Autonomous systems may yet become explainable, but the fact that so much of the coming wave operates at the edge of what we can understand should give us pause. We won’t always be able to predict what these autonomous systems will do next; that’s the nature of autonomy.
Right at the cutting edge, however, some AI researchers want to automate every aspect of building AI systems, feeding that hyper-evolution, but potentially with radical degrees of independence through self-improvement. AIs are already finding ways to improve their own algorithms. What happens when they couple this with autonomous actions on the web, as in the Modern Turing Test and ACI, conducting their own R&D cycles?
I’ve often felt there’s been too much focus on distant AGI scenarios, given the obvious near-term challenges present in so much of the coming wave. However, any discussion of containment has to acknowledge that if or when AGI-like technologies do emerge, they will present containment problems beyond anything else we’ve ever encountered. Humans dominate our environment because of our intelligence. A more intelligent entity could, it follows, dominate us. The AI researcher Stuart Russell calls it the “gorilla problem”: gorillas are physically stronger and tougher than any human being, but it is they who are endangered or living in zoos; they who are contained. We, with our puny muscles but big brains, do the containment.
By creating something smarter than us, we could put ourselves in the position of our primate cousins. With a long-term view in mind, those focusing on AGI scenarios are right to be concerned. Indeed, there is a strong case that, by definition, a superintelligence would be impossible to control or contain. An “intelligence explosion” is the point at which an AI can improve itself again and again, recursively making itself better in ever faster and more effective ways. Here is the definitive uncontained and uncontainable technology. The blunt truth is that nobody knows when, if, or exactly how AIs might slip beyond us and what happens next; nobody knows when or if they will become fully autonomous or how to make them behave with awareness of and alignment with our values, assuming we can settle on those values in the first place.
Nobody really knows how we can contain the very features being researched so intently in the coming wave. There comes a point where technology can fully direct its own evolution; where it is subject to recursive processes of improvement; where it passes beyond explanation; where it is consequently impossible to predict how it will behave in the wild; where, in short, we reach the limits of human agency and control.
Ultimately, in its most dramatic forms, the coming wave could mean humanity will no longer be at the top of the food chain. Homo technologicus may end up being threatened by its own creation. The real question is not whether the wave is coming. It clearly is; just look and you can see it forming already. Given risks like these, the real question is why it’s so hard to see it as anything other than inevitable.