For most of history, for most people, personal transportation meant one thing: walking. Or if you were lucky, two: being carried or pulled by horses, oxen, elephants, or other beasts of burden. Just moving between neighboring settlements—forget about continents—was hard and slow.
In the early nineteenth century, the railway revolutionized transport, the biggest innovation in the field for thousands of years, but most journeys could never be taken by rail, and those that could weren't very personalized. Railways did make one thing clear: engines were the future. The steam engines capable of propelling rail carriages required massive external boilers. But if you could whittle them down to a manageable, portable size, you would have a radical new means for individuals to get around.
Innovators tried various approaches. As early as the eighteenth century, a French inventor called Nicolas-Joseph Cugnot built a kind of steam-powered car. It plodded along at a stately two miles an hour and featured a huge, pendulous boiler hanging off the front. In 1863, the Belgian inventor Jean Joseph Étienne Lenoir powered the first vehicle with an internal combustion engine, driving it seven miles out of Paris. But the engine was heavy, the speed limited. Others experimented with electricity and hydrogen. Nothing was catching on, but the dream of self-propelled personal transportation persisted.
Then things started to change, at first slowly. A German engineer called Nicolaus August Otto spent years working on a gas engine, much smaller than a steam engine. By 1876, in a Deutz AG factory in Cologne, Otto produced the first functional internal combustion engine, the “four-stroke” model. It was ready for mass production, but not before Otto fell out with his business partners, Gottlieb Daimler and Wilhelm Maybach. Otto wanted to use his engine in stationary settings like water pumps or factories. His partners had seen another use for the increasingly powerful engines: transport.
Yet it was another German engineer, Carl Benz, who pipped them to the post. Using his version of a four-stroke internal combustion engine, in 1886 he patented the Motorwagen, now seen as the world's first proper car. This strange three-wheel contraption debuted to a skeptical public. It was only when Benz's wife and business partner, Bertha, drove the car from Mannheim to her mother's, sixty-five miles away in Pforzheim, that it started to catch on. She took it supposedly without his knowledge, refueling it along the way with a solvent bought from local pharmacies.
A new age had dawned. But cars, and the internal combustion engines that powered them, remained inordinately expensive, beyond the means of all but the very richest. No network of roads and fueling stations yet existed. By 1893, Benz had sold a measly 69 vehicles; by 1900, just 1,709. Twenty years after Benz’s patent, there were still only 35,000 vehicles on German roads.
The turning point was Henry Ford’s 1908 Model T. His simple but effective vehicle was built using a revolutionary approach: the moving assembly line. An efficient, linear, and repetitive process enabled him to slash the price of personal vehicles, and the buyers followed. Most cars at the time cost around $2,000. Ford priced his at $850.
In the early years Model T sales numbered in the thousands. Ford kept ramping up production and further lowering prices, arguing, “Every time I reduce the charge for our car by one dollar, I get a thousand new buyers.” By the 1920s Ford was selling millions of cars every year. Middle-class Americans could, for the first time, afford motorized transport. Automobiles proliferated with immense speed. In 1915 only 10 percent of Americans had a car; by 1930 this number had reached an astonishing 59 percent.
Today some 2 billion combustion engines are in everything from lawnmowers to container ships. Around 1.4 billion of them are in cars. They have grown steadily more accessible, efficient, powerful, and adaptable. A whole way of life, arguably a whole civilization, developed around them, from sprawling suburbs to industrial farms, drive-thru restaurants to car mod culture. Vast highways were built, sometimes right through cities, severing neighborhoods but connecting far-flung regions. The previously challenging notion of moving from place to place in search of prosperity or fun became a regular feature of human life.
Engines weren’t just powering vehicles; they were driving history. Now, thanks to hydrogen and electric motors, the reign of the combustion engine is in its twilight. But the era of mass mobility it unleashed is not.
All of this would have seemed impossible in the early nineteenth century, when self-propelled transport was still the stuff of dreamers playing with fire, flywheels, and chunks of metal. But from those early tinkerers began a marathon of invention and production that transformed the world. Once there was momentum, the spread of the internal combustion engine became unstoppable. From a few oil-soaked German workshops grew a technology that has affected every human being on earth.
This isn’t, however, just a story of engines and cars. It is the story of technology itself.
Technology has a clear, inevitable trajectory: mass diffusion in great roiling waves. This is true from the earliest flint and bone tools to the latest AI models. As science produces new discoveries, people apply these insights to make cheaper food, better goods, and more efficient transport. Over time demand for the best new products and services grows, driving competition to produce cheaper versions bursting with yet more features. This in turn drives yet more demand for the technologies that create them, and they also become easier and cheaper to use. Costs continue to fall. Capabilities rise. Experiment, repeat, use. Grow, improve, adapt. This is the inescapable evolutionary nature of technology.
These waves of technology and innovation are at the center of this book. More important, they are at the center of human history. Understand these complex, chaotic, and accumulating waves, and the challenge of containment becomes clear. Understand their history and we can start to sketch their future.
So, what is a wave? Put simply, a wave is a set of technologies coming together around the same time, powered by one or several new general-purpose technologies with profound societal implications. By "general-purpose technologies," I mean those that enable seismic advances in what human beings can do. Society unfolds in concert with these leaps. We see it over and over: a new piece of technology, like the internal combustion engine, proliferates and transforms everything around it.
The human story can be told through these waves: our evolution from being vulnerable primates eking out an existence on the savanna to becoming, for better or worse, the planet’s dominant force. Humans are an innately technological species. From the very beginning, we are never separate from the waves of technology we create. We evolve together, in symbiosis.
The earliest stone tools date back three million years, long before the dawn of Homo sapiens, as evidenced by battered hammerstones and rudimentary knives. The simple hand ax forms part of history’s first wave of technology. Animals could be killed more efficiently, carcasses butchered, rivals fought. Eventually, early humans learned to manipulate these tools finely, giving rise to sewing, painting, carving, and cooking.
Another wave was equally pivotal: fire. Wielded by our ancestor Homo erectus, it was a source of light, warmth, and safety from predators. It had a pronounced impact on evolution: cooking food meant faster release of its energy, allowing the human digestive tract to shrink and the brain to enlarge. Our ancestors, whose strong jaws constrained skull growth, spent their time relentlessly chewing and digesting food, much as primates do today. Liberated from this mundane necessity by fire, they could spend more time doing interesting things like hunting energy-rich foods, fashioning tools, or building complex social networks. The campfire became a central hub of human life, helping establish communities and relationships and organize labor. The evolution of Homo sapiens rode these waves. We are not just the creators of our tools. We are, down to the biological, the anatomical level, a product of them.
Stonework and fire were proto-general-purpose technologies, meaning they were pervasive, in turn enabling new inventions, goods, and organizational behaviors. General-purpose technologies ripple out over societies, across geographies, and throughout history. They open the doors of invention wide, enabling scores of downstream tools and processes. They are often built on some kind of general-purpose principle, whether the power of steam to do work or the information theory behind a computer’s binary code.
The irony of general-purpose technologies is that, before long, they become invisible and we take them for granted. Language, agriculture, writing—each was a general-purpose technology at the center of an early wave, and together these three formed the foundation of civilization as we know it. One major study pegged the number of general-purpose technologies that have emerged over the entire span of human history at just twenty-four, naming inventions ranging from farming and the factory system, through materials like iron and bronze, to printing presses, electricity, and of course the internet. There aren't many of them, but they matter; it's why in the popular imagination we still use terms like the Bronze Age and the Age of Sail.
Throughout history, population size and innovation levels are linked. New tools and techniques give rise to larger populations. Bigger and more connected populations are more potent crucibles for tinkering, experimentation, and serendipitous discovery, a more powerful “collective brain” for making new things. Large populations give rise to greater levels of specialization, new classes of people like artisans and scholars whose livelihood isn’t tied to the land. More people whose lives do not revolve around subsistence means more possible inventors, and more possible reasons for having inventions, and those inventions mean more people in turn. From the earliest civilizations, like Uruk in Mesopotamia, the birthplace of cuneiform, the first known writing system, to today’s megalopolises, cities have driven technological development. And more technology meant more—and bigger—cities. At the dawn of the Agricultural Revolution the worldwide human population numbered just 2.4 million. At the start of the Industrial Revolution, it approached 1 billion, a four-hundred-fold increase that was predicated on the waves of the intervening period.
The Agricultural Revolution (9000–7500 BCE), one of history's most significant waves, marked the arrival of two massive general-purpose technologies that gradually replaced the nomadic, hunter-gatherer way of life: the domestication of plants and animals. These developments changed not only how food was found but how it might be stored, how transport would work, and the very scale at which a society could operate. Early crops like wheat, barley, lentils, chickpeas, and peas and animals like pigs, sheep, and goats became subject to human control. Eventually, this coupled with a new revolution in tools—hoes and plows. These simple innovations marked the beginning of modern civilizations.
The more tools you have, the more you can do and the more you can imagine new tools and processes beyond them. As the Harvard anthropologist Joseph Henrich points out, the wheel arrived surprisingly late in human life. But once invented, it became a building block of everything from chariots and wagons to mills, presses, and flywheels. From the written word to sailing vessels, technology increases interconnectedness, helping to boost its own flow and spread. Each wave hence lays the groundwork for successive waves.
Over time, this dynamic accelerated. Beginning around the 1770s in Europe, the first wave of the Industrial Revolution combined steam power, mechanized looms, the factory system, and canals. In the 1840s came the age of railways, telegraphs, and steamships, and a bit later steel and machine tools; together they formed the First Industrial Revolution. Then, just a few decades later, came the Second Industrial Revolution. You’ll be familiar with its greatest hits: the internal combustion engine, chemical engineering, powered flight, and electricity. Flight needed combustion, and mass production of combustion engines demanded steel and machine tools, and so on. Beginning with the Industrial Revolution, immense change became measured in decades rather than centuries or millennia.
This isn’t, however, an orderly process. Technological waves don’t arrive with the neat predictability of the tides. Over the long term, waves erratically intersect and intensify. The ten thousand years up to 1000 BCE saw seven general-purpose technologies emerge. The two hundred years between 1700 and 1900 marked the arrival of six, from steam engines to electricity. And in the last hundred years alone there were seven. Consider that children who grew up traveling by horse and cart and burning wood for heat in the late nineteenth century spent their final days traveling by airplane and living in houses warmed by the splitting of the atom.
Waves—pulsating, emergent, successive, compounding, and cross-pollinating—define an era’s horizon of technological possibility. They are part of us. There is no such thing as a non-technological human being.
This conception of history as a series of waves of innovation is not novel. Sequential and disruptive clusters of technologies recur in discussions of technology. For the futurist Alvin Toffler, the information technology revolution was a “third wave” in human society following the Agricultural and Industrial revolutions. Joseph Schumpeter saw waves as explosions of innovation igniting new businesses in bursts of “creative destruction.” The great philosopher of technology Lewis Mumford believed the “machine age” was actually more like a thousand-year unfolding of three major successive waves. More recently the economist Carlota Perez has talked about “techno-economic paradigms” rapidly shifting amid technological revolutions. Moments of booming disruption and wild speculation regear economies. Suddenly everything relies on railways, cars, or microprocessors. Eventually, the technology matures, becoming embedded and widely available.
Most people in technology are stuck in the minutiae of today and dreaming of tomorrow. It is tempting to think of inventions in discrete and lucky moments. But do so and you’ll miss the stark patterns of history, the sheer, almost innate tendency for technology’s waves to come again and again.
For most of human history, proliferation of new technology was rare. Most humans were born, lived, and died surrounded by the same set of tools and technologies. Zoom out, though, and it becomes clear that proliferation is the default.
General-purpose technologies become waves when they diffuse widely. Without an epic and near-uncontrolled global diffusion, it’s not a wave; it’s a historical curiosity. Once diffusion starts, however, the process echoes throughout history, from agriculture’s spread throughout the Eurasian landmass to the slow scattering of water mills out from the Roman Empire across Europe. Once a technology gets traction, once a wave starts building, the historical pattern we saw with cars is clear.
When Gutenberg invented the printing press around 1440, there was only a single example in Europe: his original in Mainz, Germany. But just fifty years later a thousand presses had spread across the Continent. Books themselves, one of the most influential technologies in history, multiplied with explosive speed. In the Middle Ages manuscript production was on the order of hundreds of thousands per major country per century. One hundred years after Gutenberg, countries like Italy, France, and Germany produced around 40 million books per half century, and the pace of acceleration was still increasing. In the seventeenth century Europe printed 500 million books. As demand soared, costs plummeted. One analysis estimates that the introduction of the printing press in the fifteenth century caused a 340-fold decrease in the price of a book, further driving adoption and yet more demand.
Or take electricity. The first electric power stations debuted in London and New York in 1882, Milan and St. Petersburg in 1883, and Berlin in 1884. Their rollout gathered pace from there. In 1900, 2 percent of fossil fuel production was devoted to producing electricity; by 1950 it was above 10 percent; in 2000 it reached more than 30 percent. In 1900 global electricity generation stood at 8 terawatt-hours; fifty years later it was at 600, powering a transformed economy.
The Nobel Prize–winning economist William Nordhaus calculated that the same amount of labor that once produced fifty-four minutes of quality light in the eighteenth century now produces more than fifty years of light. As a result, the average person in the twenty-first century has access to approximately 438,000 times more “lumen-hours” per year than our eighteenth-century cousins.
Unsurprisingly, consumer technologies exhibit a similar trend. Alexander Graham Bell introduced the telephone in 1876. By 1900, America had 600,000 telephones. Ten years later there were 5.8 million. Today America has many more telephones than people.
Increasing quality joins decreasing prices in this picture. A primitive TV costing $1,000 in 1950 would cost just $8 in 2023, though, of course, TVs today are infinitely better and so cost more. You can find almost identical price (and adoption) curves for cars, or microwaves, or washing machines. Indeed, the twentieth and twenty-first centuries saw remarkably consistent adoption of new consumer electronics. Again and again, the pattern is unmistakable.
Proliferation is catalyzed by two forces: demand and the resulting cost decreases, each of which drives technology to become even better and cheaper. The long and intricate dialogue of science and technology produces a chain of insights, breakthroughs, and tools that build and reinforce over time, productive recombinations that drive the future. As you get more and cheaper technology, it enables new and cheaper technologies downstream. Uber was impossible without the smartphone, which itself was enabled by GPS, which was enabled by satellites, which were enabled by rockets, which were enabled by combustion techniques, which were enabled by language and fire.
Of course, behind technological breakthroughs are people. They labor at improving technology in workshops, labs, and garages, motivated by money, fame, and often knowledge itself. Technologists, innovators, and entrepreneurs get better by doing and, crucially, by copying. From your enemy’s superior plow to the latest cell phones, copying is a critical driver of diffusion. Mimicry spurs competition, and technologies improve further. Economies of scale kick in and reduce costs.
Civilization’s appetite for useful and cheaper technologies is boundless. This will not change.
If you want a hint of what’s coming next, consider the foundation of the last mature wave. From the start, computers were driven by new-frontier mathematics as well as the urgencies of great power conflict.
Like the internal combustion engine, computing began as the stuff of obscure academic papers and laboratory tinkerers. Then came the War. In the 1940s, Bletchley Park, Britain’s top secret World War II code-breaking hub, started to realize a true computer for the first time. Racing to crack Germany’s supposedly unbreakable Enigma machines, an extraordinary team turned theoretical insights into a practical device capable of doing just that.
Others were also on the case. By 1945, an important precursor to computers called the ENIAC, an eight-foot-tall behemoth of eighteen thousand vacuum tubes capable of three hundred operations a second, was developed at the University of Pennsylvania. Bell Labs initiated another significant breakthrough in 1947: the transistor, a semiconductor device that switches electronic signals and so can form the "logic gates" used to perform calculations. This crude first version, comprising a paper clip, a scrap of gold foil, and a crystal of germanium, laid the basis for the digital age.
As with cars, it was by no means obvious to contemporary observers that computing would spread fast. In the late 1940s there were still only a few devices. Early in that decade IBM’s president, Thomas J. Watson, had allegedly (and notoriously) said, “I think there is a world market for about five computers.” Popular Mechanics magazine made a forecast typical of its time in 1949: “Computers in the future may have only 1000 vacuum tubes,” it argued, “and perhaps weigh only 1½ tons.” A decade after Bletchley, there were still only hundreds of computers around the world.
We know what happened next. Computing transformed society faster than anyone predicted and proliferated faster than any invention in human history. At Fairchild Semiconductor in the late 1950s and early 1960s, Robert Noyce invented the integrated circuit, imprinting multiple transistors on silicon wafers to produce what came to be called silicon chips. Shortly after, a researcher called Gordon Moore proposed his eponymous "law": every twenty-four months, the number of transistors on a chip would double. That implied that chips, and by extension the world of digital and computational technology, would be subject to the upward curve of an exponential process.
The results are astounding. Since the early 1970s the number of transistors per chip has increased ten-million-fold. Their power has increased by ten orders of magnitude—a seventeen-billion-fold improvement. Fairchild Semiconductor sold one hundred transistors for $150 each in 1958. Transistors are now produced in the tens of trillions per second, at billionths of a dollar per transistor: the fastest, most extensive proliferation in history.
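The arithmetic behind these figures is simple exponential doubling, and it is easy to sketch. A minimal illustration, assuming a 1971 starting point of roughly 2,300 transistors (the Intel 4004) and a strict 24-month doubling period, neither of which is specified above:

```python
# Rough sketch of Moore's law as a constant-rate doubling process.
# Assumptions (not from the text): a 1971 baseline of ~2,300 transistors
# (the Intel 4004) and a fixed 24-month doubling period.

def transistors(year: int, base_year: int = 1971,
                base_count: int = 2_300, doubling_years: int = 2) -> float:
    """Projected transistors per chip under a constant doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Fifty years of doubling every two years is 2^25:
growth = transistors(2021) / transistors(1971)
print(f"{growth:,.0f}-fold")  # → 33,554,432-fold
```

Fifty years at this pace gives 2^25, about 33 million, the same order of magnitude as the ten-million-fold increase in transistors per chip noted above; the real curve bent as doubling periods stretched, which is why the observed figure is somewhat lower.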
And of course this rise in computational power underpinned a flowering of devices, applications, and users. In the early 1970s there were about half a million computers. Back in 1983, only 562 computers total were connected to the primordial internet. Now the number of computers, smartphones, and connected devices is estimated at 14 billion. It took smartphones a few years to go from niche product to utterly essential item for two-thirds of the planet.
With this wave came email, social media, online videos—each a fundamentally new experience enabled by the transistor and another general-purpose technology, the internet. This is what pure, uncontained technological proliferation looks like. It created a yet more mind-boggling proliferation: data, up twenty times in the decade 2010–2020 alone. Just a few decades ago data storage was the domain of books and dusty archives. Now humans produce hundreds of billions of emails, messages, images, and videos daily and store them in the cloud. Eighteen million gigabytes of data are added to the global sum every single minute of every day.
Billions of hours of raw human life are consumed, shaped, distorted, and enriched by these technologies. They dominate our businesses and our leisure time. They occupy our minds and every crevice of our worlds, from fridges, timers, garage doors, and hearing aids to wind turbines. They form the very architecture of modern life. Our phones are the first thing we see in the morning and the last at night. Every aspect of human life is affected: they help us find love and new friends while turbocharging supply chains. They influence who gets elected and how, where our money is invested, our children’s self-esteem, our music tastes, our fashion, our food, and everything in between.
Someone from the postwar world would be staggered by the scale and reach of what had seemed a niche technology. Computing’s remarkable ability to spread and improve at exponential rates, to enter and envelop almost every aspect of life, has become the dominant fact of contemporary civilization. No previous wave has mushroomed as quickly, but the historical pattern nonetheless repeats. At first it seems impossible and unimaginable. Then it appears inevitable. And each wave grows bigger and stronger still.
It’s easy to get lost in the details, but step back and you can see waves gathering speed, scope, accessibility, and consequence. Once they gather momentum, they rarely stop. Mass diffusion, raw, rampant proliferation—this is technology’s historical default, the closest thing to a natural state. Think of agriculture, bronze work, the printing press, the automobile, the television, the smartphone, and the rest. There are then what appear to be laws of technology, something like an inherent character, emergent properties that stand the test of time.
History tells us that technology diffuses, inevitably, eventually to almost everywhere, from the first campfires to the fires of the Saturn V rocket, from the first scrawled letters to the endless text of the internet. Incentives are overwhelming. Capabilities accumulate; efficiencies increase. Waves get faster and more consequential. Access to technology grows as it gets cheaper. Technology proliferates, and with every successive wave that proliferation accelerates and penetrates deeper, even as the technology gets more powerful.
This is technology’s historical norm. As we gaze toward the future, this is what we can expect.
Or can we?