CHAPTER 6

THE WIDER WAVE

Technological waves are bigger than just one or two general-purpose technologies. They are clusters of technologies arriving at around the same time, anchored by one or more general-purpose technologies but extending far beyond them.

General-purpose technologies are accelerants. Invention sparks invention. Waves lay the ground for further scientific and technological experimentation, nudging open the doors of possibility. This in turn yields new tools and techniques, new areas of research—new domains of technology itself. Companies form in and around them, attracting investment, pushing the new technologies out into small and big niches alike, further adapting them for a thousand different purposes. Waves are so huge and historic precisely because of this protean complexity, this tendency to mushroom and spill over.

Technologies don’t develop or operate in air locks, removed from one another, least of all general-purpose technologies. Rather, they develop in rippling amplificatory loops. Where you find a general-purpose technology, you also find other technologies developing in constant dialogue, spurred on by it. Looking at waves, then, it’s clearly not just about a steam engine, or a personal computer, or synthetic biology, as significant as they are; it’s also about the vast nexus of further technologies and applications that come with them. It’s all the products made in steam-driven factories, the people carried on steam-driven trains, the software businesses, and, further down, everything else that relies on computing.

Bio and AI are at the center, but around them lies a penumbra of other transformative technologies. Each has immense significance in its own right, but that is heightened when seen through the lens of the greater wave’s cross-pollinating potential. In twenty years there will be numerous additional technologies, all breaking through at the same time. In this chapter, we examine a few key examples making up this wider wave.

We begin with robotics, or as I like to think of it, AI’s physical manifestation, AI’s body. Its impact is already being felt in some of the most cutting-edge industries on earth. But also the oldest. Come on down to the automated farm.

ROBOTICS COMES OF AGE

 

In 1837, John Deere was a blacksmith working in Grand Detour, Illinois. This was prairie country, with its dense black soil and wide-open spaces. It had potential as some of the world’s best arable land—great for crops but incredibly tough to plow.

Then one day Deere saw a broken steel saw at a mill. Steel being scarce, he took his find home and fashioned the blade into a plow. Strong and smooth, steel was the perfect material for plowing through the dense, sticky soil. Although others had seen steel as an alternative to the coarser iron plows, Deere’s breakthrough was to ramp up mass production. Before long farmers from across the Midwest were flocking to his workshop. His invention opened the prairie to a flood of settlers. The Midwest duly became the breadbasket of the world; John Deere quickly became synonymous with agriculture; and a techno-geographic revolution was instigated.

The John Deere company still makes agricultural technology today. You might be thinking tractors, sprinklers, and combines, and it’s true that John Deere does make all these things. Increasingly, though, the company builds robots. The future of agriculture, as John Deere sees it, involves autonomous tractors and combines that operate independently, following a field’s GPS coordinates and using an array of sensors to make automatic, real-time alterations to harvesting, maximizing yield and minimizing waste. The company is producing robots that can plant, tend, and harvest crops, with levels of precision and granularity that would be impossible for humans. Everything from soil quality to weather conditions is factored into a suite of machines that will soon do large chunks of the job. In an age of food price inflation and a growing population, the value is clear.

Farming robots aren’t just coming. They’re here. From drones watching livestock to precision irrigation rigs to small mobile robots patrolling vast indoor farms, from seeding to harvesting, picking to palletizing, watering tomatoes to tracking and herding cattle, the reality of the food we eat today is that it increasingly comes from a world of robots, driven by AI, currently being rolled out and scaled up.

Most of these robots don’t look like the androids of popular sci-fi. They look like, well, agricultural machines. And many of us don’t spend much time on farms in any case. But just as John Deere’s plow once transformed the business of agriculture, these new robot-centered inventions are transforming how food gets to our tables. It’s not a revolution we are well primed to recognize, but it is one already well underway.


Robots have advanced mainly as one-dimensional tools, machines capable of doing single tasks on a production line with speed and precision, a major productivity boost for manufacturers but far from the 1960s Jetsons-style visions of deferential android helpers.

As with AI, robotics proved much more difficult in practice than early engineers assumed. The real world is a strange, uneven, unexpected, and unstructured environment, exquisitely sensitive to things like pressure: picking up an egg, an apple, a brick, a child, and a bowl of soup all require extraordinary dexterity, sensitivity, strength, and balance. An environment like a kitchen or workshop is messy, filled with dangerous items, oil slicks, and multiple different tools and materials. It’s a robot’s nightmare.

Nonetheless, mostly out of the public eye, robots have quietly been learning about torque, tensile strength, the physics of manipulation, precision, pressure, and adaptation. Just watch them at an automotive manufacturing plant on YouTube: you see a crisp, never-ending ballet of robotic arms and manipulators steadily constructing a car. Amazon’s “first fully autonomous mobile robot,” called Proteus, can buzz around warehouses in great fleets, picking up parcels. Equipped with “advanced safety, perception, and navigation technology,” it can do this comfortably alongside humans. Amazon’s Sparrow is the company’s first robotic system that can “detect, select, and handle individual products in [its] inventory.”

It’s not hard to imagine these robots in warehouses and factories—relatively static environments. But soon they will increasingly be found in restaurants, bars, care homes, and schools. Robots are already performing intricate surgery—in tandem with humans but also autonomously, on pigs (for now). Such uses are just the beginning of a much more widespread robotics rollout.

Today human programmers still often control every detail of a robot’s operation. That makes the cost of integration in a new setting prohibitive. But as we’ve seen in so many other applications of machine learning, what starts with close human supervision ends up with the AI learning to do the task better by itself, eventually generalizing to new settings.

Google’s research division is building robots that could, like the 1950s dream, do household chores and basic jobs, from stacking dishes to tidying chairs in meeting rooms. It has built a fleet of a hundred robots capable of sorting trash and wiping down tables. Reinforcement learning helps each robot’s gripper pick up cups and open doors: just the kinds of actions, effortless to a toddler, that have vexed roboticists for decades. This new breed of robots can work on general activities, responding to natural language voice commands.

Another growing area is swarming: the ability of robots to coordinate, amplifying the capabilities of any individual machine into a kind of hive mind. Examples include the Harvard Wyss Institute’s miniature Kilobots, a swarm of a thousand robots that work collectively and assemble into shapes taken from nature. Swarms like these could be put to work on difficult, distributed tasks: stopping soil erosion and other environmental remediation, agriculture, search and rescue operations, or the entire field of construction and inspection. Imagine a swarm of builder robots throwing up a bridge in minutes or a large building in hours, tending to enormous, highly productive farms 24/7, or cleaning up an oil spill. With honeybee populations under threat, Walmart filed a patent for robot bees that would collaborate and cross-pollinate crops autonomously. All the promise (and peril) of robotics is amplified by this ability to coordinate in groups of unrestricted size, an intricate choreography that will reset the rules of what is possible, where, and in what time frame.

Robots today still often don’t look like the humanoid robots of the popular imagination. Consider the phenomenon of 3-D printing or additive manufacturing, a technique that uses robotic assemblers to layer up construction of anything from minuscule machine parts to apartment blocks. Giant concrete-spraying robots can build dwellings in a matter of days for a fraction of the cost of traditional construction.

Robots can operate with precision in a far greater range of environments for far longer periods than humans. Their vigilance and diligence are boundless. If they’re networked together, the feats they might accomplish quite simply rewrite the rules of what action is possible. I think we’re now getting to the point where AI is pushing robots toward their original promise: machines that can replicate all the physical actions of a human and more. As costs fall (the price of a robot arm declined by 46 percent in five years and is still going down), as they’re eventually equipped with powerful batteries, as they simplify, becoming easy to repair, they will become ubiquitous. And that will mean turning up in unusual, extreme, and sensitive situations. Already the signs of a shift are visible—if you know where to look.


It was the police force’s worst nightmare. A military-trained sniper had got himself into a secure second-floor position at a local community college in Dallas, Texas. Then, overlooking a peaceful protest, he’d begun shooting police officers. After forty-five minutes, two were dead, more injured. Later it would emerge that five officers had been killed, seven wounded, the deadliest incident for American law enforcement since 9/11. The gunman taunted the police, laughing, singing, and firing with chilling accuracy. Two hours of tense negotiations were going nowhere. The police were pinned. It wasn’t clear how many more would die attempting to resolve the situation.

Then the SWAT team came up with a new idea. The police department had a bomb disposal robot, the $150,000 Remotec Andros Mark 5A-1 made by Northrop Grumman. In fifteen minutes they hatched a plan to attach a large blob of C-4 explosive to its arm and send it into the building with the intention of incapacitating the shooter. The police chief, David Brown, quickly signed off on the plan. It went into action, the robot rumbling through the building, where it positioned the explosive in an adjacent room, next to a wall with the shooter on the other side. The explosive detonated, blasting apart the wall and killing the gunman. It was the first time a robot had used targeted lethal force in the United States. In Dallas, it saved the day. A horrific event was brought to a conclusion.

Still, some were disquieted. The concerning potential of lethal police robots hardly needed emphasizing. We’ll return to the implications of all this in part 3. But above all it signified how robots are gradually working their way into society, poised to play a far greater role in daily life than has been the case before. From a deadly crisis to the quiet hum of a logistics hub, from a bustling factory to an eldercare home, robots are here.

AIs are products of bits and code, existing within simulations and servers. Robots are their bridge, their interface with the real world. If AI represents the automation of information, robotics is the automation of the material, the physical instantiations of AI, a step change in what it is possible to do. Mastery of bits comes full circle, directly reconfiguring atoms, rewriting the bounds not just of what can be thought or said or calculated but what can be built in the most tangible physical sense. And yet the remarkable thing about the coming wave is that this kind of blunt atomic manipulation is nothing compared with what’s on the horizon.

QUANTUM SUPREMACY

 

In 2019, Google announced that it had reached “quantum supremacy.” Researchers had built a quantum computer, one using the peculiar properties of the subatomic world. Chilled to a temperature colder than the coldest parts of outer space, Google’s machine used an understanding of quantum mechanics to complete a calculation in seconds that would, it said, have taken a conventional computer ten thousand years. It had just fifty-three “qubits,” or quantum bits, the core units of quantum computing. To store equivalent information on a classical computer, you would need seventy-two billion gigabytes of memory. This was a key moment for quantum computers. From theoretical underpinnings dating to the 1980s, quantum computing has gone from hypothetical to working prototype in four decades.

Quantum computing is still very much a nascent technology, but the implications when it does materialize are huge. Its key attraction is that each additional qubit doubles a machine’s total computing power. Start adding qubits and it gets exponentially more powerful. Indeed, a relatively small number of particles could have more computing power than if the entire universe were converted into a classical computer. It’s the computational equivalent of moving from a flat, black-and-white film into full color and three dimensions, unleashing a world of algorithmic possibility.
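To make that doubling concrete, here is a small illustrative sketch (my own arithmetic, not from the text): a classical machine simulating n qubits must track 2 to the power of n complex amplitudes, so every added qubit doubles the bookkeeping.

```python
# Illustrative sketch: a classical simulation of n qubits must track 2**n
# complex amplitudes, so each extra qubit doubles the state to be stored.

def amplitudes(n_qubits: int) -> int:
    """Number of complex amplitudes in an n-qubit state vector."""
    return 2 ** n_qubits

for n in (1, 2, 10, 53):
    print(f"{n:>2} qubits -> {amplitudes(n):,} amplitudes")
# 53 qubits alone correspond to roughly 9 quadrillion amplitudes
```

Linear growth in qubits, exponential growth in classical resources: that asymmetry is the entire case for building the quantum machine rather than simulating it.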

Quantum computing has far-reaching implications. For instance, the cryptography underlying everything from email security to cryptocurrencies would suddenly be at risk, in an impending event those in the field call “Q-Day.” Cryptography rests on the assumption that an attacker will never have sufficient computing power to try all the different combinations needed to break it and unlock access. With quantum computing that changes. A fast and uncontained rollout of quantum computing could have catastrophic implications for banking or government communications. Banks and governments alike are already spending billions to head off the possibility.

Although much discussion of quantum computing has focused on its perils, the field also promises tremendous benefits, including the ability to explore frontiers in mathematics and particle physics. Researchers at Microsoft and Ford used nascent quantum approaches to model Seattle’s traffic and find better ways of navigating rush hour, routing traffic along optimal paths—a surprisingly tricky mathematical problem. In theory, solving any optimization problem could be greatly sped up—almost anything that involves minimizing costs in complex circumstances, whether that’s efficiently loading a truck or running a national economy.

Arguably, quantum computing’s most significant near-term promise is in modeling chemical reactions and the interaction of molecules in previously impossible detail. This could let us understand the human brain or materials science with extraordinary granularity. Chemistry and biology will become fully legible for the first time. Discovering new pharmaceutical compounds or industrial chemicals and materials, a costly, painstaking process of tricky lab work, could be greatly sped up—even gotten right on the first attempt. New batteries and drugs become more likely, more efficient, more realizable. The molecular becomes “programmable,” as supple and manipulable as code.

Quantum computing is, in other words, yet another foundational technology still in very early development, still further from hitting those critical moments of cost decreases and widespread proliferation, let alone the technical breakthroughs that will make it fully feasible. But as with AI and synthetic biology, albeit at an earlier stage, it appears to be at a point where funding and knowledge are escalating, progress on fundamental challenges is growing, and a range of valuable uses are coming into view. Like AI and biotech, quantum computing helps speed up other elements of the wave. And yet even the mind-bending quantum world is not the limit.

THE NEXT ENERGY TRANSITION

 

Energy rivals intelligence and life in its fundamental importance. Modern civilization relies on vast amounts of it. Indeed, if you wanted to write the crudest possible equation for our world it would be something like this:

(Life + Intelligence) x Energy = Modern Civilization

Increase any or all of those inputs (let alone supercharge their marginal cost toward zero) and you have a step change in the nature of society.

Endless growth in energy consumption was neither possible nor desirable in the era of fossil fuels, and yet while the boom lasted, the development of almost everything we take for granted—from cheap food to effortless transport—rested on it. Now, a huge boost of cheap, clean power has implications for everything from transport to buildings, not to mention the colossal power needed to run the data centers and robotics that will be at the heart of the coming decades. Energy—expensive and dirty as it often is—is at present a limiter on technology’s rate of progress. Not for too much longer.

Renewable energy will become the largest single source of electricity generation by 2027. This shift is occurring at an unprecedented pace, with more renewable capacity set to be added in the next five years than in the previous two decades. Solar power in particular is experiencing rapid growth, with costs falling significantly. In 2000, solar energy cost $4.88 per watt, but by 2019 it had fallen to just 38 cents. Energy isn’t just getting cheaper; it’s more distributed, potentially localizable from specific devices to whole communities.
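Those per-watt figures imply a remarkably steady compound decline. As a quick illustrative check (my calculation from the numbers above, not the author’s):

```python
# Annualized rate of solar cost decline implied by the figures above:
# $4.88 per watt in 2000 falling to $0.38 per watt in 2019.
start_cost, end_cost = 4.88, 0.38    # dollars per watt
years = 2019 - 2000
annual_decline = 1 - (end_cost / start_cost) ** (1 / years)
print(f"{annual_decline:.1%} average annual decline")  # roughly 13% a year
```

A decline of roughly 13 percent a year, sustained for two decades, is the kind of compounding curve that turns a niche technology into the cheapest source of power on earth.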

Behind it all lies the dormant behemoth of clean energy, this time inspired if not directly powered by the sun: nuclear fusion. Fusion power involves the release of energy when isotopes of hydrogen collide and fuse to form helium, a process long considered the holy grail of energy production. Early pioneers in the 1950s predicted that it would take about a decade to develop. Like so many of the technologies described here, that was a significant underestimation.

However, recent breakthroughs have sparked renewed hope. Researchers at the Joint European Torus near Oxford, England, achieved a record power output, double the previous high recorded in 1997. At the National Ignition Facility in Livermore, California, scientists have been working on a method known as inertial confinement, which involves compressing pellets of hydrogen-rich material with lasers and heating them to 100 million degrees to create a fleeting fusion reaction. In 2022 they created a reaction demonstrating net energy gain for the first time, a critical milestone of producing more energy than the lasers put in. With meaningful private capital now flowing into at least thirty fusion start-ups alongside major international collaborations, scientists are talking about “when and not if” fusion arrives. It may still be a decade or more, but a future with this clean and virtually limitless energy source is looking increasingly real.

Fusion and solar offer the promise of immense centralized and decentralized energy grids, with implications we will explore in part 3. This is a time of huge optimism. Add wind, hydrogen, and improved battery technologies, and here is a brewing mix that can sustainably power the many demands of life both today and in the future and underwrite the wave’s full potential.

THE WAVE BEYOND THE WAVE

 

These technologies will dominate the next decades. But what about the second half of the twenty-first century? What comes after the coming wave?

As the elements of AI, advanced biotechnology, quantum computing, and robotics combine in new ways, prepare for breakthroughs like advanced nanotechnology, a concept that takes the ever-growing precision of technology to its logical conclusion. What if rather than being manipulated en masse, atoms could be manipulated individually? It would be the apotheosis of the bits/atoms relationship. The ultimate vision of nanotechnology is one where atoms become controllable building blocks, capable of automatically assembling almost anything.

Practical challenges are immense, but they are the subject of increasing research intensity. A team at the University of Oxford, for example, produced a self-replicating assembler gesturing toward the multifunctional versions imagined by nanotech pioneers: devices capable of endlessly engineering and recombining at the atomic scale.

Nanomachines would work at speeds far beyond anything at our scale, delivering extraordinary outputs: an atomic-scale nanomotor, for example, could rotate forty-eight billion times a minute. Scaled up, it could power a Tesla with material equivalent in volume to about twelve grains of sand. This is a world of gossamer structures made of diamond, space suits that cling to and protect the body in all environments, a world where matter compilers can create anything out of a basic feedstock. A world, in short, where anything can become anything with the right atomic manipulation. The dream of the physical universe rendered a completely malleable platform, the plaything of tiny, dexterous nanobots or effortless replicators, is still the province, like superintelligence, of science fiction. It’s a techno-fantasia, many decades away, but one that will steadily come into focus as the coming wave plays out.


At its core, the coming wave is a story of the proliferation of power. If the last wave reduced the costs of broadcasting information, this one reduces the costs of acting on it, giving rise to technologies that go from sequencing to synthesis, reading to writing, editing to creating, imitating conversations to leading them. In this, it is qualitatively different from every previous wave, despite all the big claims made about the transformative power of the internet. This kind of power is even harder to centralize and oversee; this wave is not just a deepening and acceleration of history’s pattern, then, but also a sharp break from it.

Not everyone agrees these technologies are either as locked on or as consequential as I think they are. Skepticism and pessimism aversion are not unreasonable responses, given there is much uncertainty. Each technology is subject to a vicious hype cycle, each is uncertain in development and reception, each is surrounded by challenges technical, ethical, and social. None is complete. There are certain to be setbacks, and many of the harms—and indeed benefits—are still unclear.

But each is also growing more concrete, developed, and capable by the day. Each is becoming more accessible and more powerful. We are reaching the decisive point of what, in geological or human evolutionary timescales, is a technological explosion unfolding in successive waves, a compounding, accelerating cycle of innovation steadily getting faster and more impactful, breaking first over a period of thousands of years, then hundreds of years, and now single years or even months. See these technologies in the context of press releases and op-eds, at the mayfly pace of social media, and they might look like hype and froth; see the long view, and their true potential becomes clear.

Humanity has of course experienced epic technological change before as part of this process. To understand the unique challenges of the coming wave, however—just why it’s so especially hard to contain, just why its immense promise must be balanced with sober-minded caution—we have to first break down its key features, some of which are without historical precedent, and all of which are being felt already.