AlphaGo’s significance was partly a matter of timing: the breakthrough surprised experts by arriving more quickly than most in the AI community had thought possible. Even days before its first public competition in March 2016, prominent researchers thought an AI simply couldn’t win at this level of Go. At DeepMind, we were still uncertain our program would prevail in a matchup with a master human competitor.
We saw the contest as a grand technical challenge, a waypoint on a wider research mission. Within the AI community, it represented a first high-profile public test of deep reinforcement learning and one of the first research uses of a very large GPU cluster. In the press the matchup between AlphaGo and Lee Sedol was presented as an epic battle: human versus machine; humanity’s best and brightest against the cold, lifeless force of a computer. Cue all the tired tropes of Terminators and robot overlords.
But under the surface, another, more important dimension was becoming clear, a tension I’d dimly worried about ahead of the contest, but the contours of which emerged more starkly as the event unfolded. AlphaGo wasn’t just human versus machine. As Lee Sedol squared up against AlphaGo, DeepMind was represented by the Union Jack, while the Sedol camp flew the taegeukgi, South Korea’s unmistakable flag. West versus East. This implication of national rivalry was an aspect of the contest I soon came to regret.
It’s hard to overstate how popular the competition was in Asia. In the West the proceedings were followed by hard-core AI enthusiasts and attracted some newspaper attention. It was a significant moment in tech history—for those who care about such things. Across Asia, however, the event was bigger than the Super Bowl. More than 280 million people watched live. We’d taken over an entire hotel in Seoul’s downtown, mobbed by ever-present members of the local and international media. You could hardly move for hundreds of photographers and TV cameras. The intensity was unlike anything I’d experienced before, a level of scrutiny and hype that seemed alien in what was, to Western observers, an obscure game for math enthusiasts. AI developers, suffice to say, were not used to this.
In Asia it wasn’t just the geeks watching. It was everyone. And it soon became clear that the observers included tech companies, governments, and militaries. The result sent a shock wave through them all. The significance was lost on no one. The challenger, a Western firm, London based, American owned, had just marched into an ancient, iconic, cherished game, planted its flag, and obliterated the home team. It was as if a group of Korean robots had shown up at Yankee Stadium and beaten America’s all-star baseball team.
For us the event was a scientific experiment. It was a powerful—and, yes, cool—demonstration of cutting-edge techniques we’d spent years trying to perfect. It was exciting from an engineering perspective, exhilarating for its competition, and bewildering to be at the center of a media circus. For many in Asia it was something more painful, an instance of wounded regional and national pride.
Seoul wasn’t the end for AlphaGo. A year later, in May 2017, we took part in a second tournament, this time against the number-one-ranked player in the world: Ke Jie. This matchup took place in Wuzhen, China, at the Future of Go Summit. Our reception in Wuzhen was strikingly different. Livestreaming the matches was barred in the People’s Republic. No mention of Google was allowed. The environment was stricter, more controlled; the narrative closely curated by the authorities. No more media circus. The subtext was clear: this wasn’t just a game anymore. AlphaGo won again, but it did so amid an unmistakably tense atmosphere.
Something had changed. If Seoul offered a hint, Wuzhen brought it home. As the dust settled, it became clear AlphaGo was part of a much bigger story than one trophy, system, or company: a story of great powers engaging in a new and dangerous game of technological competition, and of a series of overwhelmingly powerful and interlocking incentives that ensure the coming wave really is coming.
Technology is pushed on by all too rudimentary and fundamentally human drivers. From curiosity to crisis, fortune to fear, at its heart technology emerges to fill human needs. If people have powerful reasons to build and use it, it will get built and used. Yet in most discussions of technology people still get stuck on what it is, forgetting why it was created in the first place. This is not about some innate techno-determinism. This is about what it means to be human.
Earlier we saw that no wave of technology has, so far, been contained. In this chapter we look at why history is likely to repeat itself; why, thanks to a series of macro-drivers behind technologies’ development and spread, the fruit will not be left on the tree; why the wave will break. As long as these incentives are in place, the important question of “should we?” is moot.
The first driver has to do with what I experienced with AlphaGo: great power competition. Technological rivalry is a geopolitical reality. Indeed it always has been. Nations feel the existential need to keep up with their peers. Innovation is power. Second comes a global research ecosystem with its ingrained rituals rewarding open publication, curiosity, and the pursuit of new ideas at all costs. Then come the immense financial gains from technology and the urgent need to tackle our global social challenges. And the final driver is perhaps the most human of all: ego.
Before that, back to geopolitics, where the recent past offers a potent lesson.
Postwar America took its technological supremacy for granted. Sputnik woke it up. In the fall of 1957, the Soviets launched Sputnik, the world’s first artificial satellite, humanity’s first foray into space. About the size of a beach ball, it was still impossibly futuristic. Sputnik was up there for the world to see, or rather hear, its extraterrestrial beeps broadcasting around the planet. Pulling it off was an undeniable feat.
This was a crisis for America, a technological Pearl Harbor. Policy reacted. Science and technology, from high schools to advanced laboratories, became national priorities, with new funding and new agencies like NASA and DARPA. Massive resources were plowed into major technology projects, not least the Apollo missions. These spurred many important advances in rocketry, microelectronics, and computer programming. Nascent alliances like NATO were strengthened. Twelve years later, it was the United States, not the USSR, that succeeded in putting a human on the moon. The Soviets almost bankrupted themselves trying to keep up. With Sputnik, the Soviet Union had blown past the United States, a historic technical achievement with enormous geopolitical ramifications. But when America needed to step up, it did.
Just as Sputnik eventually put the United States on course to be a superpower in rocketry, space technology, computing, and all their military and civilian applications, so something similar is now taking place in China. AlphaGo was quickly labeled China’s Sputnik moment for AI. The Americans and the West, just as they had done in the early days of the internet, were threatening to steal a march on China in an epoch-making technology. Here was the clearest possible reminder that China, beaten at a national pastime, could once again find itself far behind the frontier.
In China, Go wasn’t just a game. It represented a wider nexus of history, emotion, and strategic calculation. China was already committed to investing heavily in science and technology, but AlphaGo helped focus government minds even more acutely on AI. China, with its thousands of years of history, had once been the crucible of world technological innovation; it was now painfully aware of how it had fallen behind, losing the technological race to Europeans and Americans on various fronts from medicines to aircraft carriers. It had endured a “century of humiliation,” as the Chinese Communist Party (CCP) calls it. One that, the party believes, must never happen again.
Time, argued the CCP, to reclaim its rightful place. In the words of Xi Jinping, speaking to the Twentieth CCP Congress in 2022, “to meet strategic needs” the country “must adhere to science and technology as the number-one productive force, talent as the number-one resource, [and] innovation as the number-one driving force.”
China’s top-down model means it can marshal the state’s full resources behind technological ends. Today, China has an explicit national strategy to be the world leader in AI by 2030. The New Generation Artificial Intelligence Development Plan, announced just two months after Ke Jie was beaten by AlphaGo, was intended to harness government, the military, research organizations, and industry in a collective mission. “By 2030, China’s AI theories, technologies, and applications should achieve world-leading levels,” the plan declares, “making China the world’s primary AI innovation center.” From defense to smart cities, fundamental theory to new applications, China should occupy AI’s “commanding heights.”
These bold declarations are not just empty posturing. As I write this, just six years after China released the plan, the United States and other Western nations no longer have an outsized lead in AI research. Universities like Tsinghua and Peking are competitive with Western institutions like Stanford, MIT, and Oxford. Indeed, Tsinghua publishes more AI research than any other academic institution on the planet. China has a growing and impressive share of the most highly cited papers in AI. In terms of volume of AI research, Chinese institutions have published a whopping four and a half times more AI papers than U.S. counterparts since 2010, and comfortably more than the United States, the U.K., India, and Germany combined.
It’s not just AI either. From cleantech to bioscience, China surges across the spectrum of fundamental technologies, investing at an epic scale, a burgeoning IP behemoth with “Chinese characteristics.” China overtook the United States in the number of PhDs produced in 2007, and its investment in and expansion of programs since then have been so significant that it now produces nearly twice as many STEM PhDs as the United States every year. More than four hundred “key state laboratories” anchor a lavishly funded public-private research system covering everything from molecular biology to chip design. In the early years of the twenty-first century, China’s R&D spending was just 12 percent of America’s. By 2020, it was 90 percent. On current trends it will be significantly ahead by the mid-2020s, as it already is on patent applications.
China was the first country to land a probe on the far side of the moon. No other country had even attempted this. It has more of the world’s top five hundred supercomputers than anywhere else. The BGI Group, a Shenzhen-based genetics giant, has extraordinary DNA sequencing capacity, both private and state backing, thousands of scientists, and vast reserves of DNA data and computing capacity alike. Xi Jinping has explicitly called for a “robot revolution”: China installs as many robots as the rest of the world combined. It built hypersonic missiles thought years away by the United States, is a world leader in fields from 6G communications to photovoltaics, and is home to major tech companies like Tencent, Alibaba, DJI, Huawei, and ByteDance.
Quantum computing is an area of notable Chinese expertise. In the wake of Edward Snowden’s leak of classified information from U.S. intelligence programs, China became particularly paranoid and keen to build a secure communications platform. Another Sputnik moment. In 2014, China filed the same number of quantum technology patents as the United States; by 2018 it had filed twice as many.
In 2016, China sent the world’s first “quantum satellite,” Micius, into space, part of a new, supposedly secure communications infrastructure. But Micius was only the start in China’s quest for an unhackable quantum internet. A year later the Chinese built a two-thousand-kilometer quantum link between Shanghai and Beijing for transmitting secure financial and military information. They’re investing more than $10 billion in creating the National Laboratory for Quantum Information Sciences in Hefei, the world’s biggest such facility. They hold records for linking qubits together via quantum entanglement, an important step on the road to fully fledged quantum computers. Hefei scientists even claimed to have built a quantum computer 10¹⁴ times faster than Google’s breakthrough Sycamore.
Micius’s lead researcher and one of the world’s top quantum scientists, Pan Jianwei, made clear what this means. “I think we have started a worldwide quantum space race,” he said. “With modern information science, China has been a learner and a follower. Now, with quantum technology, if we try our best we can be one of the main players.”
For decades the West dismissed China’s capabilities: the Chinese weren’t creative, the story went; they could only imitate; they were too restricted and unfree; their state-owned enterprises were hopeless. In hindsight, most of these assessments were plain wrong, and where they had merit, they did not stop China from emerging as a modern-day titan in science and engineering, not least because legal transfers of IP, like buying companies and translating journals, were backed by outright theft, forced transfers, reverse engineering, and espionage operations.
Meanwhile, the United States is losing its strategic lead. For years it was obvious that America held supremacy in everything from semiconductor design to pharmaceuticals, the invention of the internet to the world’s most sophisticated military technology. It’s not gone, but it’s going. A report by Harvard’s Graham Allison argues that the situation is far more serious than most in the West appreciate. China is already ahead of the United States in green energy, 5G, and AI and is on a trajectory to overtake it in quantum and biotech in the next few years. The Pentagon’s first chief software officer resigned in protest in 2021 because he was so dismayed by the situation. “We have no competing fighting chance against China in 15 to 20 years. Right now, it’s already a done deal; it is already over in my opinion,” he told the Financial Times.
Shortly after becoming president in 2013, Xi Jinping made a speech with lasting consequences for China—and for the rest of the world. “Advanced technology is the sharp weapon of the modern state,” he declared. “Our technology still generally lags that of developed countries and we must adopt an asymmetric strategy of catching up and overtaking.”
It was a powerful analysis and, as we have seen, a statement of China’s policy priorities. But unlike much of what Xi says, this is a point any world leader could credibly make. Any U.S. or Brazilian president, German chancellor, or Indian prime minister would subscribe to the central thesis—that technology is a “sharp weapon” enabling countries to “hold sway.” Xi was stating a bald truth, the self-declared mantra of not just China but virtually every state, from superpower leaders at the frontier to isolated pariahs: who builds, owns, and deploys technology matters.
Technology has become the world’s most important strategic asset, not so much the instrument of foreign policy as the driver of it. The great power struggles of the twenty-first century are predicated on technological superiority—a race to control the coming wave. Tech companies and universities are no longer seen as neutral but as major national champions.
Political will could disrupt or cancel the other incentives discussed in this chapter. A government could—in theory—rein in research incentives, clamp down on private business, curtail ego-driven initiatives. But it cannot wave away hard-edged competition from its geopolitical rivals. Choosing to limit technological development when perceived adversaries pile forward is, in the logic of an arms race, choosing to lose.
For a long time I objected, resisting the framing of technological progress as a zero-sum international arms race. At DeepMind, I always pushed back on references to us as a Manhattan Project for AI, not just because of the nuclear comparison, but because even the framing might initiate a series of other Manhattan Projects, feeding an arms race dynamic when close global coordination, break points, and slowdowns were needed. But the reality is that the logic of nation-states is at times painfully simple and yet utterly inevitable. In the context of a state’s national security, merely floating an idea becomes dangerous. Once the words are out, the starting gun is fired, the rhetoric itself producing a drastic national response. And then it spirals.
Countless friends and colleagues in Washington and Brussels, in government, in think tanks, and in academia would all trot out the same infuriating line: “Even if we are not actually in an arms race, we must assume ‘they’ think we are, and therefore we must ourselves race to achieve a decisive strategic advantage since this new technological wave might completely rebalance global power.” This attitude becomes a self-fulfilling prophecy.
There’s no use in pretending. Great power competition with China is one of the few areas enjoying bipartisan agreement in Washington. The debate now isn’t whether we are in a technological and AI arms race; it’s where it will lead.
The arms race is usually presented as a Sino-American duopoly. This is myopic. While it’s true these countries are the most advanced and well resourced, many others are significant participants. This new era of arms races heralds the rise of widespread techno-nationalism, in which multiple countries will be locked in an ever-escalating competition to gain a decisive geopolitical advantage.
Almost every country now has a detailed AI strategy. Vladimir Putin believes the leader in AI “will become the ruler of the world.” French president Emmanuel Macron declares that “we will fight to build a European metaverse.” His wider point is that Europe has failed to build the tech giants of the United States and China, produces fewer breakthroughs, and lacks both IP and manufacturing capacity in critical portions of the tech ecosystem. Security, wealth, prestige—all rest, for Europe, in his view and that of many others, on becoming a third power.
Countries have different strengths, from bioscience and AI (like the U.K.) to robotics (Germany, Japan, and South Korea) to cybersecurity (Israel). Each has major R&D programs across portions of the coming wave, with burgeoning civilian start-up ecosystems increasingly backed by the hard force of perceived military necessity.
India is an obvious fourth pillar to a new global order of giants, alongside the United States, China, and the EU. Its population is young and entrepreneurial, increasingly urbanized, and ever more connected and tech savvy. By 2030 its economy will have passed those of countries like the U.K., Germany, and Japan to be the third largest in the world; by 2050, it will be worth $30 trillion.
Its government is determined to make Indian tech a reality. Through its Atmanirbhar Bharat (Self-Reliant India) program, India’s government is working to ensure the world’s most populous country achieves ownership of core technology systems competitive with the United States and China. Under it, India established partnerships with, for example, Japan on AI and robotics, as well as Israel for drones and unmanned aerial vehicles. Prepare for an Indian wave.
In World War II the Manhattan Project, which consumed 0.4 percent of U.S. GDP, was seen as a race against time to get the bomb before the Germans. But the Nazis had initially ruled out pursuit of nuclear weapons, considering them too expensive and speculative. The Soviets were far behind and eventually relied on extensive leaks from the United States. America had conducted an arms race against phantoms, bringing nuclear weapons into the world far earlier than would otherwise have happened.
Something similar occurred in the late 1950s, when, in the wake of a Soviet ICBM test and Sputnik, Pentagon decision-makers became convinced of an alarming “missile gap” with the Russians. It later emerged that the United States had a ten-to-one advantage at the time of the key report. Khrushchev was following a tried-and-tested Soviet strategy: bluffing. Misreading the other side meant nuclear weapons and ICBMs were both brought forward by decades.
Could this same mistaken dynamic be playing out in the current technological arms races? Actually, no. First, the coming wave’s proliferation risk is acute. Because these technologies are getting cheaper and simpler to use even as they get more powerful, more nations can engage at the frontier. Large language models are still seen as cutting-edge, yet there is no great magic or hidden state secret to them. Access to computation is likely the biggest bottleneck, but plenty of services exist to make it happen. The same goes for CRISPR or DNA synthesis.
We can already see achievements like China’s moon landing or India’s billion-strong biometric identification system, Aadhaar, happening in real time. It’s no mystery that China has enormous LLMs, Taiwan is the leader in semiconductors, South Korea has world-class expertise in robots, and governments everywhere are announcing and implementing detailed technology strategies. This is happening out in the open, shared in patents and at academic conferences, reported in Wired and the Financial Times, broadcast live on Bloomberg.
Declaring an arms race is no longer a conjuring act, a self-fulfilling prophecy. The prophecy has been fulfilled. It’s here, it’s happening. It is a point so obvious it doesn’t often get mentioned: there is no central authority controlling what technologies get developed, who does it, and for what purpose; technology is an orchestra with no conductor. Yet this single fact could end up being the most significant of the twenty-first century.
And if the phrase “arms race” triggers worry, that’s with good reason. There could hardly be a more precarious foundation for a set of escalating technologies than the perception (and reality) of a zero-sum competition built on fear. There are, however, other, more positive drivers of technology to consider.
Raw curiosity, the quest for truth, the importance of openness, evidence-based peer review—these are core values for scientific and technological research. Since the Scientific Revolution and its industrial equivalents in the eighteenth and nineteenth centuries, scientific discoveries have not been hoarded like secret jewels but shared openly in journals, books, salons, and public lectures. The patent system created a mechanism for sharing knowledge while rewarding risk-taking. Broad access to information became an engine of our civilization.
Openness is science and technology’s cardinal ideology. What is known must be shared; what is discovered must be published. Science and technology live and breathe on free debate and the open sharing of information, to the extent that openness has itself grown into a powerful (and amazingly beneficial) incentive.
We live in an age of what Audrey Kurth Cronin calls “open technological innovation.” A global system of developing knowledge and technology is now so sprawling and open that it’s almost impossible to steer, govern, or, if need be, shut down. The ability to understand, create, build on, and adapt technology is highly distributed as a result. Obscure work done by a computer science grad student one year might be in the hands of hundreds of millions of users the next. That makes it hard to predict or control. Sure, tech companies want to keep their secrets, but they also tend to abide by the open philosophies characterizing software development and academia. Innovations diffuse far faster and further and more disruptively as a result.
The openness imperative saturates research culture. Academia is built around peer review; any paper not subject to critical scrutiny by credible peers doesn’t meet the gold standard. Funders don’t like supporting work that stays locked away. Both institutions and researchers pay careful attention to their publication records and how often their papers are cited. More citations mean more prestige, credibility, and research funding. Junior researchers are especially liable to be judged—and hired—on their publication record, publicly viewable on platforms like Google Scholar. Moreover, these days papers are announced on Twitter and often written with social media influence in mind. They are designed to be eye-catching and attract attention.
Academics fervently argue for open access to their research. In tech, strong norms around sharing and contributing support a flourishing space of open-source software. Some of the world’s biggest companies—Alphabet, Meta, Microsoft—regularly contribute huge amounts of IP for free. In areas like AI and synthetic biology, where the lines between scientific research and technological development are especially blurred, all of this makes the culture default to open.
At DeepMind we learned early that opportunities to publish were a key factor when leading researchers decided where to work. They wanted the openness and peer recognition they’d gotten used to in academia. Soon it became standard in leading AI labs: while not everything would be immediately made public, openness was considered a strategic advantage in attracting the best scientists. Publication records remain an important part of getting hired at leading technology labs, and the competition is intense: a race to be the first to go public.
All in all, to a degree that is perhaps underappreciated, publication and sharing aren’t just about the process of falsification in science. They’re also for prestige, for peers, for the pursuit of a mission, for the sake of a job, for likes. All of it both drives and accelerates the process of technological development.
Huge amounts of AI data and code are public. For example, GitHub has 190 million repositories of code, many of which are public. Academic preprint servers enable researchers to quickly upload work without any review or filtration mechanism. The original such service, arXiv, hosts more than two million papers. Dozens of more specialized preprint services, like bioRxiv in the life sciences, fuel the process. The great stock of the world’s scientific and technical papers is either accessible on the open web or available via easy-to-get institutional log-ins. This slots into a world where cross-border funding and collaboration are the norm; where projects often have hundreds of researchers freely sharing information; where thousands of tutorials and courses on state-of-the-art techniques are readily available online.
All of this takes place in the context of a turbocharged research landscape. Worldwide R&D spending is at well over $700 billion annually, hitting record highs. Amazon’s R&D budget alone is $78 billion, which would rank as the ninth-largest national R&D budget in the world if Amazon were a country. Alphabet, Apple, Huawei, Meta, and Microsoft all spend well in excess of $20 billion a year on R&D. All these companies, those most keenly investing in the coming wave, those with the most lavish budgets, have a track record of openly publishing their research.
The future is remarkably open-source, published on arXiv, documented on GitHub. It’s being built for citations, research kudos, and the promise of tenure. Both the imperative of openness and the sheer mass of easily available research material mean this is an inherently deep-rooted and widely distributed set of incentives and foundations for future research that no one can fully govern.
Predicting anything at the frontier is tricky. If you want to direct the research process, to steer it toward or away from certain outcomes, to contain it ahead of time, you face multiple challenges. Not only is there the question of how to coordinate between competing groups; at the frontier it is also impossible to predict where breakthroughs will come from.
CRISPR gene editing technology, for example, has its roots in work done by the Spanish researcher Francisco Mojica, who wanted to understand how some single-celled organisms thrive in extremely salty water. Mojica stumbled across clustered, repeating sequences of DNA that seemed important, and he came up with the name CRISPR. Later work by two researchers at a Danish yogurt company, aimed at protecting the bacteria vital to the starter cultures in yogurt fermentation, helped show how the core mechanism might function. These unlikely avenues are the foundation of arguably the biggest biotech story of the twenty-first century.
Likewise, fields can stall for decades but then change dramatically in months. Neural networks spent decades in the wilderness, trashed by luminaries like Marvin Minsky. Only a few isolated researchers like Geoffrey Hinton and Yann LeCun kept them going through a period when the word “neural” was so controversial that researchers would deliberately remove it from their papers. It seemed impossible in the 1990s, but neural networks came to dominate AI. And yet it was also LeCun who said AlphaGo was impossible just days before it made its first big breakthrough. That’s no discredit to him; it just shows that no one can ever be sure of anything at the research frontier.
Even in hardware the path toward AI was impossible to predict. GPUs—graphics processing units—are a foundational part of modern AI. But they were first developed to deliver ever more realistic graphics in computer games. In an illustration of the omni-use nature of technology, fast parallel processing for flashy graphics turned out to be perfect for training deep neural networks. It’s ultimately luck that demand for photorealistic gaming meant companies like NVIDIA invested so much in making better hardware, and that this hardware then proved so well suited to machine learning. (NVIDIA wasn’t complaining; its share price rose 1,000 percent in the five years after AlexNet.)
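To make that point concrete, here is a minimal sketch, entirely my own illustration rather than anything from DeepMind or NVIDIA, of why the same chips serve both purposes: rendering graphics and running a neural network layer both reduce, at bottom, to large batched matrix multiplications, exactly the operation GPUs were built to parallelize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Graphics workload: transform 100,000 3-D vertices (in homogeneous
# coordinates) by a single 4x4 projection matrix.
vertices = rng.normal(size=(100_000, 4))
projection = rng.normal(size=(4, 4))
transformed = vertices @ projection               # one big matrix multiply

# Deep learning workload: push a batch of 100,000 activation vectors
# through a dense layer's weight matrix, then apply a ReLU.
activations = rng.normal(size=(100_000, 512))
weights = rng.normal(size=(512, 512))
outputs = np.maximum(activations @ weights, 0.0)  # the same core operation

print(transformed.shape, outputs.shape)           # (100000, 4) (100000, 512)
```

Run on a GPU rather than a CPU, both of those multiplications hit the same massively parallel hardware, which is why gaming chips slotted so neatly into machine learning.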
If you were looking to monitor and direct AI research in the past, you would likely have got it wrong, blocking or boosting work that eventually proved irrelevant, entirely missing the most important breakthroughs quietly brewing on the sidelines. Science and technology research is inherently unpredictable, exceptionally open, and growing fast. Governing or controlling it is therefore immensely difficult.
Today’s world is optimized for curiosity, sharing, and research at a pace never seen before. Modern research works against containment. So too do the necessity and desire to make a profit.
In 1830, the first passenger railway opened between Liverpool and Manchester. Building this marvel of engineering had required an act of Parliament. The route needed bridges, cuttings, elevated sections over boggy ground, and the settling of seemingly endless property disputes: all titanic challenges. The railway’s opening was attended by dignitaries including the prime minister and Liverpool’s MP, William Huskisson. During the celebration the crowd stood on the tracks to welcome the new marvel as it approached. So unfamiliar was this striking machine that people failed to appreciate the speed of the oncoming train, and Huskisson himself was killed under the locomotive’s wheels. To the horrified spectators George Stephenson’s steam-powered Rocket was monstrous, an alien, belching, terrifying blur of modernity and machinery.
Yet it was also a sensation, faster than anything then experienced. Growth was rapid. Two hundred and fifty passengers a day had been forecast; twelve hundred a day were using it after only a month. Hundreds of tons of cotton could be hauled from the Liverpool docks to the Manchester mills with minimum fuss in record time. Five years in, it was delivering a dividend of 10 percent, presaging an 1830s mini-boom in railway construction. The government saw an opportunity for more. In 1844, a young MP called William Gladstone put forward the Railway Regulation Act to supercharge investment. Companies submitted hundreds of applications to build new railways in just a few months in 1845. While the rest of the stock market flatlined, railway companies boomed. Investors piled in. At their peak, railway stocks accounted for more than two-thirds of total stock market value.
Within a year the crash had started. The market eventually bottomed out in 1850, 66 percent lower than its peak. Easy profit, not for the first or last time, had made people greedy and foolish. Thousands lost everything. Nonetheless, a new era had arrived with the boom. With the locomotive, an older and bucolic world was torn to shreds in a blitz of viaducts and tunnels, cuttings and great stations, coal smoke and whistles. From a few scattered lines, the investment craze created the outlines of an integrated national network. It shrank the country. In the 1830s a journey between London and Edinburgh took days in an uncomfortable stagecoach. By the 1850s it took a single train under twelve hours. Connection to the rest of the country meant towns, cities, and regions boomed. Tourism, trade, and family life were transformed. Among many other impacts, it created the need for a standardized national time to make sense of the timetables. And it was all done thanks to a relentless thirst for profit.
The railway boom of the 1840s was “arguably the greatest bubble in history.” But in the annals of technology, it is more norm than exception. There was nothing inevitable about the coming of the railways, but there was something inevitable about the chance to make money. Carlota Perez sees an equivalent “frenzy phase” as being part of every major technology rollout for at least the last two hundred years, from the original telephone cables to contemporary high-bandwidth internet. The boom never lasts, but the raw speculative drive produces lasting change, a new technological substrate.
The truth is that the curiosity of academic researchers or the will of motivated governments is insufficient to propel new breakthroughs into the hands of billions of consumers. Science has to be converted into useful and desirable products for it to truly spread far and wide. Put simply: most technology is made to earn money.
If anything, this is perhaps the most persistent, entrenched, dispersed incentive of all. Profit drives the Chinese entrepreneur to develop moldings for a radically redesigned phone; it pushes the Dutch farmer to find new robotics and greenhouse technologies to grow tomatoes year-round in the cool climate of the North Sea; it leads suave investors on Palo Alto’s Sand Hill Road to invest millions of dollars in untested young entrepreneurs. While the motivations of their individual contributors may vary, Google is building AI, and Amazon is building robots, because as public companies with shareholders to please, they see them as ways to make a profit.
And this, the potential for profit, is built on something even more long-lasting and robust: raw demand. People both want and need the fruits of technology. People need food, or refrigeration, or telecoms to live their lives; they might want AC units, or a new kind of shoe design requiring some intricate new manufacturing technique, or some kind of revolutionary new food-coloring method for cupcakes, or any of the innumerable everyday ends to which technology is put. Either way, technology helps provide, and its creators take their cut. The sheer breadth of human wants and needs, and the countless opportunities to profit from them, are integral to the story of technology and will remain so in the future.
This is no bad thing. Go back just a few hundred years and economic growth was almost nonexistent. Living standards stagnated for centuries at unfathomably worse levels than today. Over the last two hundred years, economic output has risen more than three hundredfold. Per capita GDP has risen at least thirteenfold over the same period, and in the very richest parts of the world it has risen a hundredfold. At the beginning of the nineteenth century, almost everyone lived in extreme poverty. Now, globally, the figure sits at around 9 percent. Exponential improvements in the human condition, once impossible, are routine.
At root, this is a story of systematically applying science and technology in the name of profit. This in turn drove huge leaps in output and living standards. In the nineteenth century, inventions like Cyrus McCormick’s mechanical reaper led to a 500 percent increase in output of wheat per hour. Isaac Singer’s sewing machine meant sewing a shirt went from taking fourteen hours to just one hour. In developed economies, people work far less than they used to for far more reward. In Germany, for example, annual working hours have decreased by nearly 60 percent since 1870.
Technology entered a virtuous circle of creating wealth that could be reinvested in further technological development, all of which drove up living standards. But none of these long-term goals were really the primary objective of any single individual. In chapter 1, I argued that almost everything around you is a product of human intelligence. Here’s a slight correction: much of what we see around us is powered by human intelligence in direct pursuit of monetary gain.
This engine has created a world economy worth $85 trillion—and counting. From the pioneers of the Industrial Revolution to the Silicon Valley entrepreneurs of today, technology has a magnetic incentive in the form of serious financial rewards. The coming wave represents the greatest economic prize in history. It is a consumer cornucopia and potential profit center without parallel. Anyone looking to contain it must explain how a distributed, global, capitalist system of unbridled power can be persuaded to temper its acceleration, let alone leave such a prize on the table.
When a corporation automates insurance claims or adopts a new manufacturing technique, it creates efficiency savings or improves the product, boosting profits and attracting new customers. Once an innovation delivers a competitive advantage like this, everyone must either adopt it, leapfrog it, switch focus, or lose market share and eventually go bust. The attitude around this dynamic in technology businesses in particular is simple and ruthless: build the next generation of technology or be destroyed.
No surprise, then, that corporations play such a large role in the coming wave. Tech is by far the biggest single category in the S&P 500, constituting 26 percent of the index. Between them the major tech groups have cash on hand equivalent to the GDP of an economy like Taiwan’s or Poland’s. Capital expenditure, like R&D spending, is enormous, exceeding that of the oil majors, previously the biggest spenders. Anyone following the industry of late will have witnessed an increasingly intense commercial race around AI, with firms like Google, Microsoft, and OpenAI vying week by week to launch new products.
Hundreds of billions of dollars of venture capital and private equity are deployed into countless start-ups. Investment in AI technologies alone has hit $100 billion a year. These big numbers do actually matter. Huge quantities of capital expenditure, R&D spending, venture capital, and private equity investment, unmatched by any other sector, or any government outside China and the United States, are the raw fuel powering the coming wave. All of this money demands a return, and the technology it creates is the means of getting it.
As with the Industrial Revolution, potential economic rewards are enormous. Estimates are hard to intuit. PwC forecasts AI will add $15.7 trillion to the global economy by 2030. McKinsey forecasts a $4 trillion boost from biotech over the same period. Boosting world robot installations 30 percent above a baseline forecast could unleash a $5 trillion dividend, a sum bigger than Germany’s entire output. Especially when other sources of growth are increasingly scarce, these are strong incentives. With profits this high, interrupting the gold rush is likely to be incredibly challenging.
Are these predictions justified? The numbers are certainly eye-watering, and projecting huge figures onto the near future is easy to do on paper. But over a slightly longer time frame, they are not entirely unreasonable. The total addressable market here eventually extends, like the First or Second Industrial Revolution, to the entire world economy. Someone in the late eighteenth century would have been incredulous at the idea of a hundredfold increase in per capita GDP. It would have seemed ludicrous to even contemplate. Yet it happened. Given all those forecasts and the fundamental areas addressed by the coming wave, even a 10–15 percent boost to the world economy in the next decade might be conservative. Over the longer term it is likely much bigger than that.
Consider that the world economy grew sixfold in the latter half of the twentieth century. Even if growth slowed to just a third of that level over the next fifty years, it would still unlock around $100 trillion of additional GDP.
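As a rough check of that claim, here is one back-of-the-envelope reading, my own rather than the text’s: take the roughly $85 trillion world economy cited above and interpret “a third of that level” as a third of the implied annual growth rate.

$$
6^{1/50} \approx 1.036 \;\Rightarrow\; \text{about } 3.6\% \text{ a year}, \qquad \Bigl(1 + \tfrac{0.036}{3}\Bigr)^{50} \approx 1.8
$$
$$
(1.8 - 1) \times \$85 \text{ trillion} \approx \$70 \text{ trillion of additional annual output}
$$

Reading “a third” instead as a third of the sixfold multiple, roughly a doubling, gives about $85 trillion; either way the order of magnitude of the $100 trillion figure holds up.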
Think about the impact of the new wave of AI systems. Large language models enable you to have a useful conversation with an AI about any topic in fluent, natural language. Within the next couple of years, whatever your job, you will be able to consult an on-demand expert, ask it about your latest ad campaign or product design, quiz it on the specifics of a legal dilemma, isolate the most effective elements of a pitch, solve a thorny logistical question, get a second opinion on a diagnosis, keep probing and testing, getting ever more detailed answers grounded in the very cutting edge of knowledge, delivered with exceptional nuance. All of the world’s knowledge, best practices, precedent, and computational power will be available, tailored to you, to your specific needs and circumstances, instantaneously and effortlessly. It is a leap in cognitive potential at least as great as the introduction of the internet. And that is before you even get into the implications of something like ACI and the Modern Turing Test.
Little is ultimately more valuable than intelligence. Intelligence is the wellspring and the director, architect, and facilitator of the world economy. The more we expand the range and nature of intelligences on offer, the more growth should be possible. With generalist AI, plausible economic scenarios suggest it could lead not just to a boost in growth but to a permanent acceleration in the rate of growth itself. In blunt economic terms, AI could, long term, be the most valuable technology yet, more so when coupled with the potential of synthetic biology, robotics, and the rest.
Those investments aren’t passive; they will play a big part in making that potential real, another self-fulfilling prophecy. Those trillions represent a huge value add and opportunity for society, delivering better living standards for billions as well as immense profits for private interests. Either way, that creates an ingrained incentive to keep finding and rolling out new technologies.
For most of history simply feeding yourself and your family was the dominant challenge of human life. Farming has always been a hard, uncertain business. But especially prior to the improvements of the twentieth century, it was much, much harder. Any variation in weather conditions—too cold, hot, dry, or wet—could be catastrophic. Almost everything was done by hand, maybe with the help of some oxen if you were lucky. At some times of the year there was little to do; at others, there were weeks of unceasing, backbreaking physical labor.
Crops could be ruined by disease or pests, spoil after harvesting, or get stolen by invading armies. Most farmers lived hand to mouth, often working as serfs, giving up much of their scant crop. Even in the most productive parts of the world, yields were low and fragile. Life was tough, lived on the edge of disaster. When Thomas Malthus argued in 1798 that a fast-growing population would quickly exhaust the carrying capacity of agriculture and lead to a collapse, he wasn’t wrong about the logic: with static yields, collapse would follow, and often did.
What he hadn’t accounted for was the scale of human ingenuity. Assuming favorable weather conditions and using the latest techniques, in the thirteenth century each hectare of wheat in England yielded around half a ton. There it remained for centuries. Slowly the arrival of new techniques and technologies changed all that: from crop rotation to selective breeding, mechanized plows, synthetic fertilizer, pesticides, genetic modifications, and now even AI-optimized planting and weeding. Today, yields are about eight tons per hectare. The very same small, innocuous patch of ground, the same geography and soil that was reaped in the thirteenth century, can now deliver sixteen times the crop. Corn yields per hectare in the United States have tripled in the last fifty years. The labor required to produce a kilo of grain has fallen 98 percent since the beginning of the nineteenth century.
In 1945, around 50 percent of the world’s population was seriously undernourished. Today, despite a population well over three times bigger, that’s down to 10 percent. This still represents upwards of 600 million people, an unconscionable number. But at 1945 rates it would be 4 billion, although in truth those people could not have been kept alive. It’s easy to overlook how far we’ve come, and just how remarkable innovation really is. What would the medieval farmer have given for the vast combines, the epic irrigation systems of a modern farmer? To them, a sixteenfold improvement would be nothing less than a miracle. It is.
Feeding the world is still an enormous challenge. But this need has driven technology on and led to an abundance unimaginable in previous times: food sufficient, if not adequately distributed, for the planet’s eight billion and rising human inhabitants.
Technology, as in the case of food supply, is a vital part of addressing the challenges humanity inevitably faces today and will face tomorrow. We pursue new technologies, including those in the coming wave, not just because we want them, but because, at a fundamental level, we need them.
It’s likely that the world is heading for two degrees Celsius of climate warming or more. Every second of every day biospheric boundaries—from freshwater use to biodiversity loss—are breached. Even the most resilient, temperate, and wealthy countries will suffer disastrous heat waves and droughts, storms and water stress in the decades ahead. Crops will fail. Wildfires will rage. Vast quantities of methane will escape the melting permafrost, threatening a feedback loop of extreme heating. Disease will spread far beyond its usual ranges. Climate refugees and conflict will engulf the world as sea levels inexorably rise, threatening major population centers. Marine and land-based ecosystems face collapse.
Despite well-justified talk of a clean energy transition, the distance still to travel is vast. Hydrocarbons’ energy density is incredibly hard to replicate for tasks like powering airplanes or container ships. While clean electricity generation is expanding fast, electricity accounts for only about 25 percent of global energy output. The other 75 percent is much trickier to transition. Since the start of the twenty-first century, global energy use has risen 45 percent, but the share coming from fossil fuels has fallen only from 87 to 84 percent—meaning fossil fuel use is greatly up despite all the moves toward clean electricity.
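Spelling out the arithmetic implied by those two figures, using only the numbers in the paragraph above:

$$
\underbrace{1.45}_{\text{growth in total energy use}} \times \underbrace{\tfrac{0.84}{0.87}}_{\text{change in fossil-fuel share}} \approx 1.40
$$

In absolute terms, then, fossil fuel consumption has risen by roughly 40 percent over the period even as its share of the mix edged down.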
The energy scholar Vaclav Smil calls ammonia, cement, plastics, and steel the four pillars of modern civilization: the material base underwriting modern society, each hugely carbon-intensive to produce, with no obvious successors. Without these materials modern life stops, and without fossil fuels the materials stop. The last thirty years saw 700 billion tons of carbon-spewing concrete sluiced out into our societies. How to replace that? Electric vehicles may not emit carbon when being driven, but they are resource hungry nonetheless: a single EV requires the extraction of around 225 tons of finite raw materials, demand for which is already spiking unsustainably.
Food production, as we have seen, is a major success story of technology. But from tractors in fields, to synthetic fertilizers, to plastic greenhouses, it’s saturated in fossil fuels. Imagine the average tomato soaked in five tablespoons of oil. That’s how much went into growing it. What’s more, to meet global demand, agriculture will need to produce almost 50 percent more food by 2050 just as yields decline in the face of climate change.
If we are to stand any chance of keeping global warming under two degrees Celsius, then the world’s scientists working under the UN’s Intergovernmental Panel on Climate Change have been clear: carbon capture and storage is an essential technology. And yet the technology barely exists, still immature and nowhere near deployment at the necessary scale. To meet this global challenge, we’ll have to reengineer our agricultural, manufacturing, transport, and energy systems from the ground up with new technologies that are carbon neutral or probably even carbon negative. These are not inconsiderable tasks. In practice it means rebuilding the entire infrastructure of modern society while hopefully also offering quality-of-life improvements to billions.
Humanity has no choice but to meet challenges like these, and many others such as how to deliver ever more expensive health care to aging populations beset with intractable chronic conditions. Here, then, is another powerful incentive: a vital part of how we flourish in the face of daunting tasks that seem beyond us. There’s a strong moral case for new technologies beyond profit or advantage.
Technology can and will improve lives and solve problems. Think of a world populated by trees that are longer lived and absorb much greater amounts of CO2. Or phytoplankton that help the oceans become a greater and more sustainable carbon sink. AI has helped design an enzyme that can break down the plastic clogging our oceans. It will also be an important part of how we predict what is coming, from guessing where a wildfire might hit suburbia to tracking deforestation through public data sets. This will be a world of cheap, personalized drugs; fast, accurate diagnoses; and AI-generated replacements for energy-intensive fertilizers.
Sustainable, scalable batteries need radical new technologies. Quantum computers paired with AI, with their ability to model down to the molecular level, could play a critical role in finding substitutes for conventional lithium batteries that are lighter, cheaper, cleaner, easier to produce and recycle, and more plentiful. The same goes for work on photovoltaic materials and on drug discovery, where molecular-level simulation can identify new compounds far more precisely and quickly than the slow experimental techniques of the past. This is hyper-evolution in action, and it promises to save billions in R&D while going far beyond the present research paradigm.
A school of naive techno-solutionism sees technology as the answer to all of the world’s problems. Alone, it’s not. How it is created, used, owned, and managed all make a difference. No one should pretend that technology is a near-magical answer to something as multifaceted and immense as climate change. But the idea that we can meet the century’s defining challenges without new technologies is completely fanciful. It’s also worth remembering that the technologies of the wave will make life easier, healthier, more productive, and more enjoyable for billions. They will save time, cost, hassle, and millions of lives. The significance of this should not be trivialized or forgotten amid the uncertainty.
The coming wave is coming partly because there is no way through without it. Mega-scale, systemic forces like this drive technology forward. But another, more personal force is in my experience ever present and largely underestimated: ego.
Scientists and technologists are all too human. They crave status, success, and a legacy. They want to be the first and best and recognized as such. They’re competitive and clever with a carefully nurtured sense of their place in the world and in history. They love pushing boundaries, sometimes for money but often for glory, sometimes just for its own sake. AI scientists and engineers are among the best-paid people in the world, and yet what really gets them out of bed is the prospect of being first to a breakthrough or seeing their name on a landmark paper. Love them or hate them, technology magnates and entrepreneurs are viewed as unique lodestars of power, wealth, vision, and sheer will. Critics and fawning fans alike see them as expressions of ego, excelling at making things happen.
Engineers often have a particular mindset. The Los Alamos director J. Robert Oppenheimer was a highly principled man. But above all else he was a curiosity-driven problem solver. Consider these words, in their own way as chilling as his famous Bhagavad Gita quotation (on seeing the first nuclear test, he recalled some lines from Hindu scripture: “Now I am become Death, the destroyer of worlds”): “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.” It was an attitude shared by his colleague on the Manhattan Project, the brilliant, polymathic Hungarian American John von Neumann. “What we are creating now,” he said, “is a monster whose influence is going to change history, provided there is any history left, yet it would be impossible not to see it through, not only for military reasons, but it would also be unethical from the point of view of the scientists not to do what they know is feasible, no matter what terrible consequences it may have.”
Spend enough time in technical environments and, despite all the talk about ethics and social responsibility, you will come to recognize the prevalence of this view, even when facing technologies of extreme power. I have seen it many times, and I’d probably be lying if I said I hadn’t succumbed to it myself on occasion.
Making history, doing something that matters, helping others, beating others, impressing a prospective partner, impressing a boss, peers, rivals: it’s all in there, all part of the ever-present drive to take risks, explore the edges, go further into the unknown. Build something new. Change the game. Climb the mountain.
Whether noble and high-minded or bitter and zero-sum, when you work on technology, it’s often this aspect, even more than the needs of states or the imperatives of distant shareholders, animating progress. Find a successful scientist or technologist and somewhere in there you will see someone driven by raw ego, spurred on by emotive impulses that might sound base or even unethical but are nonetheless an under-recognized part of why we get the technologies we do. The Silicon Valley mythos of the heroic start-up founder single-handedly empire building in the face of a hostile and ignorant world is persistent for a reason. It is the self-image technologists too often still aspire to, an archetype to emulate, a fantasy that still drives new technologies.
Nationalism, capitalism, and science—these are, by now, embedded features of the world. Simply removing them from the scene is not possible in any meaningful time frame. Altruism and curiosity, arrogance and competition, the desire to win the race, make your name, save your people, help the world, whatever it may be: these are what propel the wave on, and these cannot be expunged or circumvented.
Moreover, these different incentives and elements of the wave compound. National arms races dovetail with corporate rivalries while labs and researchers spur each other on. A nested series of sub-races, in other words, adds up to a complex, mutually reinforcing dynamic. Technology “emerges” through countless independent contributions all layering on top of one another, a metastasizing, entangled morass of ideas unraveling themselves, driven on by deep-rooted and dispersed incentives.
Without tools to spread information at light speed, people in the past could happily sit with new technologies staring them in the face, sometimes for decades, before they grasped the implications. And even then, it took time, and ultimately imagination, to appreciate the broad ramifications. Today everyone is watching everyone else react in real time.
Everything leaks. Everything is copied, iterated, improved. And because everyone is watching and learning from everyone else, with so many people all scratching around in the same areas, someone is inevitably going to figure out the next big breakthrough. And they will have no hope of containing it, for even if they do, someone else will come behind them and uncover the same insight or find an adjacent way of doing the same thing; they will see the strategic potential or profit or prestige and go after it.
This is why we won’t say no. This is why the coming wave is coming, why containing it is such a challenge. Technology is now an indispensable mega-system infusing every aspect of daily life, society, and the economy. No one can do without it. Entrenched incentives are in place for more of it, radically more. No one is in full control of what it does or where it goes next. This is not some far-out philosophical concept or extreme determinist scenario or wild-eyed California technocentrism. It is a basic description of the world we all inhabit, indeed the world we have inhabited for quite some time.
In this sense it feels like technology is, to use an unforgiving image, one big slime mold slowly rolling toward an inevitable future, with billions of tiny contributions being made by each individual academic or entrepreneur without any coordination or ability to resist. Powerful attractors pull it on. Where blocks appear, gaps open elsewhere, and the whole rolls forward. Slowing these technologies is antithetical to national, corporate, and research interests.
This is the ultimate collective action problem. The idea that CRISPR or AI can be put back in the box is not credible. Until someone can create a plausible path to dismantling these interlocking incentives, the option of not building, saying no, perhaps even just slowing down or taking a different path isn’t there.
Containing technology means short-circuiting all these mutually reinforcing dynamics. It’s hard to envisage how that might be done on any kind of timescale that would affect the coming wave. There is only one entity that could, perhaps, provide the solution, one that anchors our political system and takes final responsibility for the technologies society produces: the nation-state.
But there’s a problem. States are already facing massive strain, and the coming wave looks set to make things much more complicated. The consequences of this collision will shape the rest of the century.