Conclusion: Looking Forward

Economic calamities such as the 2007–2008 global financial crisis should not obscure the bigger picture: life for most people today is vastly better than life for most people in the past.

A century ago, global average life expectancy at birth was just thirty-five; when I was born in the 1970s, it was sixty; recently it rose above seventy.1 A baby born in the least propitious countries today—such as Burma, Haiti, and the Democratic Republic of the Congo—has a better chance of surviving infancy than any baby born in 1900.2 The proportion of the world’s population living in the most extreme poverty has fallen from about 95 percent two centuries ago to about 60 percent fifty years ago to about 10 percent today.3

Ultimately, the credit for this progress rests with the invention of new ideas like the ones described in this book. And yet few of the stories we’ve had to tell about these inventions have been wholly positive. Some inventions did great harm. Others would have done much more good if we had used them wisely.

It’s reasonable to assume that future inventions will deliver a similar pattern: broadly, they will solve problems and make us richer and healthier, but the gains will be uneven and there will be blunders and missed opportunities.

It’s fun to speculate about what those inventions might be, but history cautions against placing much faith in futurology. Fifty years ago, Herman Kahn and Anthony J. Wiener published The Year 2000: A Framework for Speculation. Their crystal-ball gazing got a lot right about information and communication technology. They predicted color photocopying, multiple uses for lasers, “two-way pocket phones,” and automated real-time banking. That’s impressive. But Kahn and Wiener also predicted undersea colonies, silent helicopter-taxis, and cities lit by artificial moons.4 Nothing looks more dated than yesterday’s technology shows and yesterday’s science fiction.

We can make two predictions, though. First, the more human inventiveness we encourage, the better that’s likely to work out for us. And, second, with any new invention, it makes sense to at least ask ourselves how we might maximize the benefits and mitigate the risks.

What lessons have we learned from the forty-nine inventions so far?

We’re already a long way toward learning one big lesson about encouraging inventiveness: most societies have realized that it isn’t sensible to waste the talents of half their populations. It will not have escaped your notice that most of the inventors we’ve encountered have been male, and no wonder—who knows how many brilliant women, like Clara Immerwahr, were lost to history after having their ambitions casually crushed.

Education matters, too—just ask Leo Baekeland’s mother, or Grace Hopper’s father. Here, again, there are reasons to be optimistic. There’s probably a lot more we could do to improve schooling through technology: indeed, that’s a plausible candidate for future economy-changing inventions. But already any child in an urban slum who has an Internet connection has greater potential access to knowledge than I had in a university library in the 1990s.

Other lessons seem easier to forget, like the value of allowing smart people to indulge their intellectual curiosity without a clear idea of where it might lead. In bygone days, this implied a wealthy man like Leo Baekeland tinkering in his lab; in the more recent past, it’s meant government funding for basic research—producing the kinds of technologies that enabled Steve Jobs and his team to invent the iPhone. Yet basic research is inherently unpredictable. It can take decades before anybody makes money putting into action what’s been learned. That makes it a tough sell to private investors, and an easy target for government cutbacks in times of austerity.5

Sometimes inventions just bubble up without any particular use in mind—the laser is a famous example, and paper was originally used for wrapping, not writing. But many of the inventions we’ve encountered have resulted from efforts to solve a specific problem, from Willis Carrier’s air-conditioning to Frederick McKinley Jones’s refrigerated truck. That suggests that if we want to encourage more good ideas, we can concentrate minds by offering prizes for problem solving. Remember how the Longitude Prize inspired Harrison to create his remarkable clocks?

There’s recently been fresh interest in this idea: for example, the DARPA Grand Challenge, which began in 2004, helped kick-start progress in self-driving cars; on the 300th anniversary of the original Longitude Prize, the UK’s innovation foundation Nesta launched a new Longitude Prize for progress in testing microbial resistance to antibiotics; and perhaps the biggest initiative is the pneumococcal Advance Market Commitment, which has rewarded the development of vaccines with a $1.5 billion pot, supplied by five donor governments and the Gates Foundation.

The promise of profit is a constant motivator, of course. And we’ve seen how intellectual property rights can add credibility to that promise by rewarding the successful inventor with a time-limited monopoly. But we also saw that this is a double-edged sword: there’s an apparently inexorable trend toward making intellectual property rights ever longer and broader, despite a widespread view among economists that they already overreach to the point of strangling innovation.

More broadly, what kinds of laws and regulations encourage innovation is a question with no easy answers. The natural assumption is that bureaucrats should err on the side of getting out of the way of inventors, and we’ve seen this pay dividends. A laissez-faire approach gave us M-Pesa. But it also gave us the slow-motion disaster of leaded gasoline; there are some inventions that governments really should be stepping in to prevent. And the process that produced the technology inside the iPhone was anything but laissez-faire.

Some hotbeds of research and development, such as medicine, have well-established governance structures that are arguably sometimes too cautious. In other areas, from space to cyberspace, regulators are scrambling to catch up. And it’s not only premature or heavy-handed regulation that can undermine the development of an emerging technology—so, paradoxically, can a total lack of regulation. If you’re investing in drones, say, you want reassurance that irresponsible competitors won’t be able to rush their half-ready products to market, creating a spate of accidents and a public backlash that causes governments to ban the technology altogether.

Regulators’ task is complicated because, as we saw with public-key cryptography, most inventions can be used for either good or ill. How to manage the risks of “dual use” technologies could become an increasingly vexed dilemma: only big states can afford nuclear missile programs, but soon almost anyone might be able to afford a home laboratory that could genetically engineer bacteriological weapons—or innovative new medicines.6

Adding to the challenge, the potential of inventions often becomes clear only when they combine with other inventions: think of the elevator, air-conditioning, and reinforced concrete, which together gave us the skyscraper. Now imagine combining a hobbyist’s quadcopter drone, facial recognition and geolocation software, and a 3-D printer with a digital template for a gun: you have, hey presto, a homemade autonomous assassination drone. How are we supposed to anticipate the countless possible ways future inventions will interact? It’s easy to demand that our politicians just get it right—but starry-eyed to expect that they will.

However, perhaps the biggest challenge that future inventions will create for governments is that new ideas tend to create losers as well as winners. Often, we regard that as just tough luck: nobody clamored for compensation for second-tier professional musicians whose work dried up because of the gramophone; nor did the bar code and shipping container lead to subsidies for mom-and-pop shops to keep their prices competitive with Walmart.

But when the losers are a wide enough swath of the population, the impact can be socially and politically tumultuous. The industrial revolution ultimately raised living standards beyond what anyone might have dreamed in the eighteenth century, but it took the military to subdue the Luddites, who correctly perceived that it was disastrous for them. It’s hardly fanciful to see echoes of Ned Ludd in the electoral surprises of 2016, from Brexit to President Trump. The technologies that enabled globalization have helped lift millions out of poverty in countries like China—one of the poorest places on earth fifty years ago, and now a solidly middle-income economy—but left whole communities in postindustrial regions of Western countries struggling to find new sources of stable, well-paid employment.

While populists surf the wave of anger by blaming immigrants and free trade, the bigger long-term pressures on jobs come from technological change. What will President Trump do if—when—self-driving vehicles replace 3.5 million American truck drivers?7 He doesn’t have an answer; few politicians do.

We’ve already discussed one possible approach: a universal basic income, payable to all citizens. That’s the sort of radical thinking we might need if artificial intelligence and robots really do live up to the hype and start outperforming humans at any job you care to name. But, like any new idea, it would cause new problems: notably, who’s in and who’s out? The welfare state and the passport prop each other up—and while universal basic income is an attractive idea in some ways, it looks less utopian when combined with impenetrable border walls.

In any case, my guess is that worries about the robot job-apocalypse are premature. The prospect is very much on our minds now, but a final lesson from our fifty inventions is not to get too dazzled by the hottest new thing. In 2006, MySpace surpassed Google as the most visited website in the United States;8 today, it doesn’t make the top thousand.9 Writing in 1967, Kahn and Wiener made grand claims for the future of the fax machine. They weren’t entirely wrong—but the fax machine is now close to being a museum piece.

Many of the inventions we’ve considered in these pages are neither new nor especially sophisticated, starting with the plow: it’s no longer the technological center of our civilization, but it’s still important and it has changed less than we might think. This old technology still works and it still matters.

This isn’t just a call for us to appreciate the value of old ideas, although it is partly that: an alien engineer visiting from Alpha Centauri might suggest that our enthusiasm for flashy new things would be well matched by equal enthusiasm for fitting more S-bends and pouring more concrete floors.

It’s also a reminder that systems have their own inertia, an idea we encountered with Rudolf Diesel’s engine: once fossil-fueled internal combustion engines reached a critical mass, good luck with popularizing peanut oil or getting investors to fund research into improving the steam engine. Some systems, like the shipping container, work so well it’s hard to see why anyone would want to rethink them. But even systems that most people agree could have been done better, for instance the QWERTY keyboard layout, are remarkably resistant to change.10

Bad decisions cast a long shadow. Yet the benefits of good decisions can last a surprisingly long time. And, for all the unintended consequences and unwelcome side effects of the inventions we’ve considered in these pages, overall they’ve had vastly more good effects than bad.

Sometimes, as our last invention will show, they’ve improved our lives almost beyond our ability to measure.