1   Is the Digital Revolution the Next Big Thing?

When we take the long view of the digital computer, we ask what might make it a turning point in human history. We look beyond the buzz that accompanies the latest iPhone or the release of the Oculus Rift virtual reality headset. We know that these feelings of excitement tend to pass. We understand that our spectacular digital devices will seem to future people as Robert Stephenson’s 1829 “Rocket” train traveling at 40 kilometers per hour seems to passengers on today’s Shinkansens and Trains à Grande Vitesse regularly exceeding 300 kilometers per hour. The adjectives that we excitedly apply to our latest digital gadgets are unlikely to capture aspects of them that change the course of human history. What might make the Digital Revolution similar to other acknowledged turning points in history?

This book presents the introduction and spread of the digital computer and allied technologies as a technological revolution—the Digital Revolution. I take the idea of technological revolutions as drivers of history from the work of V. Gordon Childe, an iconoclastic early twentieth-century Australian archeologist.1 Childe introduced the concept of the Neolithic Revolution to describe the diffuse and stop-start yet inexorable move from foraging to farming. It signaled the end of the Mesolithic Age—or middle stone age—a time when the foraging way of life was a human universal. The Neolithic Revolution’s earliest verified occurrence was in the Fertile Crescent—a region running from the area around the Nile River to modern-day Iran and Turkey—some 12,000 years ago. But the transition from foraging to farming occurred independently in a variety of other locations. The Neolithic Revolution brought ploughs and other technologies that permitted farmers to make deliberate productive use of the fertility inherent in the ground under their feet. Crops were planted. Animals were husbanded. The Neolithic Revolution radically transformed both individual human lives and human societies. Land that once supported only a small number of foragers could support many more farmers. The new sedentary lifestyle prompted radical changes in the way humans organized themselves into societies.

Central to Childe’s discussion of technological revolutions was the idea of a technological package. The Neolithic package contained technologies related in some significant way to agriculture. The title of his 1936 book Man Makes Himself sums up his view of the human significance of the introduction of this technological package. The Neolithic package includes crops, domesticated animals, stone agricultural tools, pottery, and the permanent dwellings required by sedentary peoples. Fast-forwarding several millennia, we find the distinctive industrial package that brought the Industrial Revolution. The steam engine, the factory, and new production methods were elements of this package.

The notion of a technological package suggests a special relationship between constituent technologies, a functional interdependence. The technologies that comprise a package relate to each other in significant ways.2 Improvements in one technology belonging to a technological package suggest or motivate improvement of other components of the package. The cultivation of crops brought a more settled existence. This motivated the development of more substantial and permanent dwellings. A society that plants crops tends to benefit greatly from the invention of pottery containers. The technologies that comprise a technological package hang together as a coherent, mutually supporting, and interdependent whole.

Childe was guilty of oversimplification. As the archeologist Steve Mithen explains, Childe supposed that the Neolithic package was “always acquired as a single, indivisible whole.”3 This seems not to have been the case. People in different places got along for some time with incomplete Neolithic packages. There was no 2001: A Space Odyssey event in which a black monolith descended on a band of Mesolithic foragers and imparted to them the five or so basic technologies Childe presented as comprising the Neolithic package. Nor did any Moses-like Big Man of a forager band ever go up a mountain and return with knowledge of crops, animal husbandry, pottery, and the principles of stone masonry.

Some foragers acquired pottery and showed little disposition to acquire other elements of the Neolithic package. Some societies made significant headway into agriculture without the multiple benefits of pottery.4 But these alternative trajectories of development do not prevent pottery from joining other elements of the Neolithic package as part of a functionally coherent whole. Pottery-making techniques enable the creation of items with great spiritual significance for foragers. But their impact on individuals and societies is so much greater when combined with crops. There is, to use a word whose impact has been blunted by overuse, a synergy between pottery-making and the cultivation of cereals. A pottery container filled with grain is something significantly more than just a piece of pottery. Combining pottery with crops significantly boosts the benefits conferred by each.

The networked computer is the Digital Revolution’s technological package. As we saw with pottery and farming, the technologies that comprise this package are separable. Telephones were networked before there were computers. The invention of the digital computer that inaugurated the Digital Revolution preceded the insight that computers could be networked. Computers deliver great benefits to a society without having to be networked. Some societies today would like to bestow powerful computational abilities on workers in economically or politically significant industries while limiting the social disruption that they view as emerging from the unrestricted networking of those computing devices. The rulers of North Korea want selected workers to be able to calculate the yields of nuclear bombs, and to use their Kwangmyong “walled garden” national intranet, but they don’t want them to go on Facebook. The fact that networking and computers are part of a mutually interdependent package of technologies suggests that only unusual conditions, such as vigilant and repressive police forces, are likely to prevent the one from bringing the other. Computers tend to bring networking with them, and when combined, the two technologies bring greater benefits than the sum of the benefits each produces independently. The synergistic relations between the technologies that comprise a package do not prevent variations in local conditions from causing some elements of the package to be present in locations where others are absent. But this should not prevent us from acknowledging elements of a technological package as mutually interdependent. They tend to go together, and each amplifies the impacts of the others.

Childe’s notion of a technological revolution suggests the idea of a technological age. The Neolithic Revolution inaugurated the Neolithic Age. The Industrial Revolution inaugurated an Industrial Age. There is an important sense in which the package of technologies introduced by a technological revolution sets some of the ground rules for the age it initiates. Advances during a technological age tend not to have the rule-making significance of the introduction of a new technological package.

Now is not the ideal time to make assessments of comparative historical significance. We currently find ourselves at a stage of peak excitement about the life-altering power of the networked computer. When the same people who brought us the iPhone and the Google search engine talk about driverless cars and using machine learning to cure cancer, we seem licensed to treat these possibilities with a credulity that didn’t seem appropriate for talk by 1960s futurists about colonies on the planet Mars. We know that they can do this kind of stuff. The expression “and pigs will fly” no longer seems to describe a complete absurdity so long as we append the word “digitally.” This excitement has a distorting effect on our assessments of the historical significance of the Digital Revolution. We should ask ourselves whether today’s assessments of the earth-shaking impact of the latest digital technologies are akin to a teenager’s pronouncement of the band that currently captures her interest as “the best band ever!”, a label promptly appropriated by next week’s musical obsession. Our current historical location at the epicenter of excitement about everything digital is far from the ideal standpoint for sober comparative judgments about the historical significance of the Digital Revolution. But reasoned assessments of comparative historical significance are possible. We can speculate about the considered judgments of posterity. How will people who view Apple’s product launches as historical marketing curiosities assess the significance of the Digital Revolution?

Will the Digital Revolution Fizzle?

Suppose that we accept Childe’s general framework. Technological revolutions centered on technological packages set the ground rules for societies of the subsequent technological age. The Neolithic and Industrial Revolutions were such events. Does the Digital Revolution belong with them? It’s useful to place current enthusiasm about the Digital Revolution in the context of a recent skeptical assessment of it advanced by Northwestern University economist Robert Gordon. Gordon argues that the Digital Revolution is no turning point in human history, and he seems to have the numbers to back him up.

Gordon argues that the Digital Revolution’s boosters overstate the human and economic significance of the networked digital computer.5 He offers this assessment in the context of an impressive history of the economic consequences of technological innovation. Gordon’s 2016 book, The Rise and Fall of American Growth, makes the case that the century from 1870 to 1970 was unique in human history for its rapid improvement of human standards of living and rates of economic growth. According to Gordon, this growth occurred as the result of a range of unrepeatable technological innovations. These innovations radically changed life at home and at work. Gordon says “The economic revolution of 1870 to 1970 was unique in human history, unrepeatable because so many of its achievements could happen only once.”6

Gordon’s language indicates his modest opinion of the networked computer. In Gordon’s terminology, the Digital Revolution becomes the Third Industrial Revolution—or, more belittlingly, IR #3. In Gordon’s long view, there was the First Industrial Revolution (IR #1), “based on the steam engine and its offshoots particularly the railroad, steamships, and the shift from wood to iron and steel,” and the Second Industrial Revolution (IR #2), which “reflected the effects of inventions of the late nineteenth century—particularly electricity and the internal combustion engine.”7 For Gordon, IR #3 is the third of the industrial revolutions not only in temporal order, but also in overall significance. IR #3 is a disappointment after IR #2’s epic effects on human well-being and economic growth.

Gordon summarizes the human significance of the one-off improvements of IR #2 at home under the heading of networking. By “networking” Gordon means something whose human significance was much more expansive than connecting computers to each other. It describes a radical transformation of home life in the hundred years beginning in 1870. Gordon says “the 1870 house was isolated from the rest of the world, but 1940 houses were ‘networked,’ most having the five connections of electricity, gas, telephone, water, and sewer.”8 There have been many improvements to the manner of networking, but none of these match the significance of the initial connection to electricity, gas, telephone, water, and sewer. The hundred-year period that is Gordon’s principal focus took many people from nothing to something. The years after 1970 have, in general, taken us from something to something a bit better. The human and economic magnitude of the leap from houses lit by oil lamps to houses lit by electricity is greater than the transition from houses lit by electricity to houses with safer and more reliable supplies of electricity. None of the many improvements to the electrical grid seem to match the significance to householders of their homes’ initial connection.

There were unrepeatable innovations outside of the home too. Cars replaced horses, bringing suburbs into existence. Passenger jets promptly and cheaply transported people over distances that could formerly be traversed only with great effort and at great cost. The cars and jets of 2018 are superior to those of 1970, but the human and economic significance of these 48 years of improvement cannot match the significance of the original introduction of the car and the passenger jet. This pattern of technological transformation is evident in the workplace too. There was a general shift from hard outdoor labor to work in air-conditioned offices. The experiences of workers have changed since they relocated to a temperature-controlled indoors. But none of the changes since 1970 match the significance of the initial move inside.

Gordon proposes that the one-off nature of these advances explains a plateauing of progress since 1970. He maintains that digital technologies have failed to correct a deceleration of economic growth noticeable since 1970. He allows that there was a ten-year period of rapid progress in productivity associated with IR #3 from 1994 to 2004 “when the invention of the Internet, web browsing, search engines, and e-commerce produced a pervasive change in every aspect of business practice.”9 But after this brief surge, growth reverted to a more pedestrian pace, far from that characteristic of growth unleashed by IR #2. Says Gordon: “Though there has been continuous innovation since 1970, it has been less broad in its scope than before, focused on entertainment and information and communication technology (ICT).” He continues “Like IR #2, it achieved revolutionary change but in a relatively narrower sphere of human activity.” While IR #2 “covered virtually the entire span of human wants and needs, including food, clothing, housing, transportation, entertainment, communication, information, health, medicine, and working conditions,” the effects of IR #3 are restricted to “only a few of these dimensions, in particular entertainment, communications, and information.”10 In effect, when we compare IR #1 and IR #2 with IR #3, we are comparing railroads and internal combustion engines with posting status updates to your Facebook friends, targeting advertisements for holidays in Fiji to those who Google search “three-star hotel on Denarau Island,” and streaming high-definition episodes of The Walking Dead.

The problem for IR #3 lies not so much in any limitations inherent in digital technologies themselves. Gordon acknowledges that some of these are becoming more powerful at an exponential pace. The deceleration in the economic effects of technological progress seems to say something about us. The technologies that grew out of IR #2 satisfied many longstanding human needs and wants, leaving comparatively little scope for the shiny tech advances of IR #3 to measurably improve the human condition. Someone today can load apps onto their smartphone that warn them when their fridge’s supply of milk is getting low, but this piece of technological wizardry cannot match the significance of the original introduction of the domestic electric refrigerator. When data flows down fiber-optic cables, it brings especially crisp images to televisions. These images are appreciably better than those they replaced. Gordon allows that “People could now watch a station devoted entirely to music videos or, if they were willing to pay extra for premium cable, could see movies on HBO, free from advertising interruption, twenty-four hours a day.”11 Home Box Office did well off its “It’s Not TV, It’s HBO” slogan. Really, however, it is TV, just a bit better. A 2016 episode of HBO’s fantasy epic Game of Thrones, with its CGI dragons, sprawling battle scenes, and occasional nudity, seems to differ greatly from a fully clothed, black-and-white 1955 episode of CBS’s western Gunsmoke, but they satisfy essentially similar viewer needs. There is certainly a big difference between Gunsmoke or Game of Thrones and anything happening in the living rooms of 1894. The years 1870 to 1970 took us from “household drudgery, darkness, isolation, and early death” to a time when human fundamentals differed little from today.12

This is not to say that once we got the refrigerator, the color television, and the passenger jet we pronounced ourselves fully satisfied. But perhaps it is to say that the portion of human needs that is readily accessible to technological progress has largely been sated. We must look beyond the domain of technological innovation to satisfy extant human needs. Gordon identifies inequality as a “headwind” to economic growth.13 Inequality is partly a matter of how access to technologies is shared out rather than of facts about the technologies themselves. Technological advances often lead to more unequal outcomes. A society in which everyone travels by horse-drawn wagon is more equal, in respect of transport, than one in which half of its members get to travel in automobiles.

One of the impressive things about Gordon’s book is the wealth of data on which he founds his claims about the impact of technological change on economic growth. Philosophers are often reduced to speculation to support their suggestions about trends and future directions. Gordon is the beneficiary of a large amount of data about economic growth over the miraculous century 1870 to 1970 and thereafter. Gordon’s measure of the economic impact of technological progress is Total Factor Productivity (TFP). Gordon defines TFP as “output divided by a weighted average of labor and capital input.”14 Holding fixed the contributions of labor and capital permits us to isolate other influences on the growth of an economy’s output. Technological improvements are high on the list of reasons why a later time’s output is higher than an earlier time’s. When Gordon complains about a slackening of growth since 2004, he is doing more than reporting on a general impression that things have gotten better in the years since 2004 but at a considerably slower pace than the century from 1870 to 1970. He has the data. But an overreliance on data can mislead about the future if there is good reason to think that data up to the present day fail to adequately describe some impending change.
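To make Gordon’s measure concrete, here is a minimal illustrative sketch, in Python, of the growth-accounting arithmetic that lies behind TFP. The numbers and the capital share of 0.3 are invented for the example; the point is only that whatever output growth is left over after weighting the growth of labor and capital inputs gets attributed to everything else, with technological improvement high on the list.

```python
def tfp_growth(output_growth, labor_growth, capital_growth, capital_share=0.3):
    """Growth-accounting residual: output growth not explained by growth
    in a weighted average of labor and capital inputs (illustrative only)."""
    weighted_input_growth = (capital_share * capital_growth
                             + (1 - capital_share) * labor_growth)
    return output_growth - weighted_input_growth

# Hypothetical economy: 3% output growth, 1% labor growth, 2% capital growth.
print(tfp_growth(0.03, 0.01, 0.02))  # ~0.017: 1.7 points attributed to 'everything else'
```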

The Magic Combination of Artificial Intelligence and Data

Why might the recent past of digital innovation fail to predict its future effects on economic growth and human well-being? Some of the most eloquent popularizers of the Digital Revolution point to the exponential pace of improvement of some of its technologies.15 We observe exponential improvement in integrated circuits, Internet bandwidth, and the storage of data on magnetic hard disks, to name just a few examples. What separates the past of exponential improvement from its future is that we are about to go through a significant transition. Those who write about exponential technological change are especially impressed by the hockey-stick shape of the graphs that describe it. After a slow and unspectacular beginning, the line of exponential growth goes near vertical. Erik Brynjolfsson and Andrew McAfee call the transition from slow to rapid growth the “inflection point.”16 They join other advocates of exponential improvement, including the New York Times journalist Thomas Friedman, in suggesting that a wide range of digital technologies are entering their inflection points.17 Friedman’s preferred verb for this change is “explode” and his favored adjective “explosive.” Gordon’s data could be misleading if he is sampling from the unimpressive, slow-growth phase of IR #3’s curve. A general loss of business confidence in the years after the bursting of the dot-com bubble in the early 2000s might explain the dip in economic growth reported by Gordon. But if digital technologies are on the point of entering their inflection points, then their impacts on the economy could become increasingly emphatic.
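A little arithmetic, purely illustrative and tied to no particular technology, shows why sampling from the early phase of an exponential curve can be so misleading. A quantity that doubles every period gains almost nothing, in absolute terms, over its first stretch of doublings and then gains enormously over the next.

```python
# Illustrative hockey stick: a quantity that doubles each period.
values = [2 ** t for t in range(21)]   # 21 periods of doubling

early_gain = values[10] - values[0]    # absolute gain over the first ten periods
late_gain = values[20] - values[10]    # absolute gain over the next ten periods

print(early_gain, late_gain)           # 1023 versus 1047552
```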

In what follows, I prefer an explanation that points not to accelerating changes in the power of digital hardware but rather to what we are increasingly doing with this hardware. The difference between the results IR #3 has produced up until now and those it will soon produce lies in the data that digital machines are gathering and processing. The magic ingredient that we are increasingly mixing with this data is artificial intelligence. Digital machines are becoming intelligent. This intelligence is permitting them to do mind work—the work that humans traditionally do with their minds.

Remember that Gordon suggests that a key difference between IR #2 and IR #3 lies in the breadth of human wants and needs that they address. He says that IR #2 “covered virtually the entire span of human wants and needs, including food, clothing, housing, transportation, entertainment, communication, information, health, medicine, and working conditions.” IR #3 has thus far satisfied our needs and wants in “only a few of these dimensions, in particular entertainment, communications, and information.”18 The miraculous combination of data and artificial intelligence suggests that IR #3 will begin to satisfy needs and wants outside of those dimensions.

Gordon’s diagnosis does seem to describe the economic impacts of the networked digital computer up until now. Google’s advertising business accounts for the bulk of the revenue of its parent company, Alphabet.19 The idea that it is possible to make money from advertising is not new. People in the advertising industry have long known that messages generate more sales when they are directed at people more likely to buy. Google’s AdSense and AdWords permit advertisements to be very precisely targeted. They give advertisers unprecedented access to the minds of potential buyers. They seem to be a significant step toward what technology writer Tim Wu calls “advertising’s holy grail”—“pitches so aptly keyed to one’s interest that they would be as welcome as morning sunshine.”20 It’s easy to see why Toyota might be prepared to pay quite a lot of money to display its ads on the web browsers of people who type “best compact car” into Google. It’s also easy to see how people who type those words might have a different experience of Toyota’s advertising than they have of most Internet sales pitches.

Gordon’s pessimism about the Digital Revolution would be justified if targeted advertising, email, online news, and streamed movies were basically it.21 But there is a difference between a description of current uses of data and the uses we can confidently predict. The entrepreneur and data scientist Jeff Hammerbacher offers the following complaint about the dominance of advertising: “The best minds of my generation are thinking about how to make people click ads. That sucks.”22 Hammerbacher’s complaint is best interpreted as impatience about the delay in getting on to explore the digital package’s more ennobling applications. He should draw confidence from the realization that clicking on ads is just the beginning of our exploitation of data.

Yogi Berra warned “It’s tough to make predictions, especially about the future.” The following discussions about transportation and health involve predictions about the future. They present conjectures that could turn out to be false. They are, nevertheless, reasonable extrapolations of current or expected digital technologies. Artificial intelligence is the Digital Revolution’s killer app. There have been some vivid demonstrations of the power of AI. In the 1990s, chess computers went rapidly from being able to beat amateurs while being far inferior to the best human players, to being far superior to them. In 2011, the IBM computer Watson sorted through four terabytes of data, including the entirety of Wikipedia, to beat two human Jeopardy! champions. AlphaGo, an application of Google’s DeepMind AI, was fed data on 30 million Go board positions from 160,000 real-life games.23 Go is an exceedingly demanding game once thought to be an insurmountable challenge for computers. In 2016, AlphaGo won a match against Lee Sedol, one of the world’s top-ranked human players.

Victories in chess, Jeopardy!, and Go suggest a wide range of future achievements for AI. Artificial intelligence conveys the transformative power of data into domains of human experience that benefitted from IR #2, but are thus far relatively untouched by IR #3. Remember Gordon’s list of aspects of human life thus far comparatively undisturbed by the networked computer, including food, clothing, housing, transportation, health, medicine, and working conditions. In the following pages, I explore how the combination of data and AI should be expected to transform transportation, health, and medicine. Hammerbacher’s complaint about any bias toward ad clicking will no longer apply.

How AI Could Transform Transportation

Globally, cars kill about 1.2 million people every year.24 Compare this figure with the 28,328 deaths attributed to terrorism in 2015.25 Like soldiers in their third year of seemingly interminable trench warfare, we have inured ourselves to the mayhem on our roads. We’ve come to accept these deaths as the price for convenient commutes to work and visits to out-of-town family. Driverless cars promise safety improvements unachievable by advances in driver education programs or by grafting additional safety features onto human-driven cars. Somewhat depressingly, human drivers respond to sophisticated driver-assist technologies by permitting their eyes to wander from the road more frequently. This was horridly borne out in a May 2016 fatal crash in which a driver placed his Tesla Model S into Autopilot mode and took the opportunity to watch a Harry Potter movie.26 For those seeking to make a serious dent in that 1.2 million figure, the best option is to eliminate the human altogether. We should convert ourselves from dangerously distractible drivers to full-time passengers whose control over our cars is limited to linking our smartphones to their computers and speaking the names of our destinations.

Today’s experimental driverless cars combine a variety of sensors. They detect nearby objects with LIDAR (Light Detection And Ranging), a system that bounces a laser off nearby objects and analyzes its reflections. A radar system focuses especially on fast-moving nearby objects by bouncing radio waves off them. To these are added sonar systems that bounce sounds off nearby objects and detect their echoes, and optical cameras. The cars make longer-range plans with the help of mapping software such as Google Maps. This provides a driverless car with satellite imagery, street maps, and real-time updates on traffic conditions.
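As a rough illustration of how readings from these sensors might be pooled into a single picture of the car’s surroundings, consider the following sketch. It is not drawn from any actual driverless-car software; the data structure, the function, and the ten-degree sectors are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str          # "lidar", "radar", "sonar", or "camera"
    bearing_deg: float   # direction of the detected object
    range_m: float       # estimated distance to it

def nearest_obstacles(detections, sector_deg=10):
    """For each angular sector around the car, keep the closest detection
    reported by any sensor (a deliberately naive form of sensor fusion)."""
    nearest = {}
    for d in detections:
        sector = int(d.bearing_deg // sector_deg)
        if sector not in nearest or d.range_m < nearest[sector].range_m:
            nearest[sector] = d
    return nearest

readings = [
    Detection("lidar", 12.0, 35.2),
    Detection("radar", 14.5, 34.8),   # the same object seen by a second sensor
    Detection("sonar", 95.0, 3.1),    # something close off to the side
]
print(nearest_obstacles(readings))
```

Real systems weigh sensors against each other probabilistically and track objects over time; the sketch conveys only the idea that many overlapping data streams are reduced to a usable model of nearby hazards.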

These sensors generate a great deal of data, which would be of limited value without the capacity to analyze it. In their book Driverless: Intelligent Cars and the Road Ahead, Hod Lipson and Melba Kurman explain how machine learning allows driverless cars to make significantly better use of data than could a human driver fed edited highlights on a dashboard display. Driverless cars engage in a “deep learning” that is a radical departure from the traditional AI strategy of trying to program in rules covering all the situations that a car might encounter. The cars are more trained than programmed. The training of a driverless car begins with a laborious Darwinian process in which responses that conform to the assessments of human trainers—for example, noticing and avoiding a pedestrian—are preserved, and responses that fail to conform to them are discontinued. Lipson and Kurman describe the decreasing importance of human trainers:

At some point, the deep-learning software will reach the point where it can guide a driverless car on its own, enabling the car to drive alone and collect a steady stream of new training data as it goes. The new data will be applied to train the deep-learning software to reach even higher levels of accuracy in recognizing objects, further improving its performance. As cars’ guiding software becomes even more capable, more driverless cars can be dispatched to the streets, collecting yet more training data.27

The longer a driverless car is on the road, the more data it accumulates and the better it handles the road. “Fleet learning” further expands the gap between human drivers and driverless cars by transmitting the results of a particular car’s training to all the cars in a fleet. It’s as if an irascible parent could just transfer into the mind of their child learner all their many years of driving wisdom without the need for frequent expostulations of “Did you notice that stop sign?!” There are depressing patterns in the fatal distractions and overconfident passing maneuvers of human drivers. The mistakes of driverless cars tend not to be repeated. It is thought that the Tesla autopilot in the crash mentioned above failed to distinguish the whiteness of an 18-wheel truck and trailer crossing the highway from the bright white sky of a spring day. It’s unlikely that any Tesla car will make that specific mistake again. Lipson and Kurman present the benefits of fleet learning in the following way.

As cars pool their driving “experience” in the form of data, each car will benefit from the combined experiences of all other cars. Within a few years, the operating system that guides a driverless car will accumulate a driving experience equivalent to more than a thousand human lifetimes.28
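The logic of fleet learning can be conveyed with a toy sketch. Real fleets retrain neural networks on pooled sensor data; the shared dictionary below is only a stand-in for that shared model, and every name in it is hypothetical.

```python
# Toy version of fleet learning: one car's hard-won lesson becomes
# immediately available to every other car in the fleet.
fleet_experience = {}   # situation -> safe response, shared across the fleet

def log_experience(situation, safe_response):
    """A single car contributes what it has learned to the shared pool."""
    fleet_experience[situation] = safe_response

def respond(situation):
    """Any car in the fleet can draw on the pooled experience."""
    return fleet_experience.get(situation, "slow down and fall back to caution")

log_experience("white truck against bright sky", "brake")  # learned once, by one car
print(respond("white truck against bright sky"))            # now known to the whole fleet
```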

Robert Gordon is skeptical about driverless cars. In his dismissal of projected IR #3 developments, he awards last place to driverless cars on a list that includes medical and pharmaceutical advances, small robots and 3D printing, and Big Data and artificial intelligence. Gordon offers two lines of criticism. First, epoch-making though the driverless car may seem to us, it is a comparatively minor tweaking of the hugely significant IR #2 advance of the car. Gordon says “This category of future progress is demoted to last place because it offers benefits that are minor compared to the invention of the car itself or the improvements in safety that have created a tenfold improvement in fatalities per vehicle mile since 1950.” He is also unimpressed by the potential economic impacts of driverless cars; he says “The additions to consumer surplus of being able to commute without driving are relatively minor. Instead of listening to the current panoply of options, including Bluetooth phone calls, radio news, or Internet-provided music, drivers will be able to look at a computer screen or their smartphones, read a book, or keep up with their e-mail.”29

Gordon is unwarrantedly pessimistic about the economic consequences of the driverless car. Currently, a huge percentage of the space in our cities is given over to cars. A KPMG survey revealed that half of today’s car owners do not expect to own a car once driverless technology is fully realized.30 As human drivers are eliminated, we should see a steep reduction in the number of cars congesting our cities. Many people will liberate themselves from what former generations viewed as the quintessence of liberation. There will be no more frustrated searches for parking. The car that ferried you to your downtown appointment will simply return to its base. Gordon grants that this could have “positive effects on quality of life if not on productivity growth.”31 But the potential economic benefits that accompany these quality-of-life effects could be considerable. Driverless cars could revitalize city centers currently scarred by crisscrossing highways that transport workers between their downtown jobs and their suburban homes in stressful rush-hour traffic. People could frequent businesses that offer a wider range of services than takeout coffee and fast food.32

The second theme of Gordon’s skepticism about driverless technology concerns whether it can be realized. Gordon says “The enthusiasm of techno-optimists for driverless cars leaves numerous issues unanswered.”33 Current driverless prototypes encounter situations that flummox them. They aren’t particularly good at deciding when it’s safe to pass on a two-lane road. There are glitches in the software that controls current voice-activated control systems. And so on.

David Autor fits the pattern of economists underwhelmed by digital innovation. He says of machine learning’s apparent capacity to find useful patterns in large amounts of data:

My general observation is that the tools are inconsistent: uncannily accurate at times; typically only so-so; and occasionally unfathomable. IBM’s Watson computer famously triumphed in the trivia game of Jeopardy against champion human opponents. Yet Watson also produced a spectacularly incorrect answer during its winning match. Under the category of US Cities, the question was, “Its largest airport was named for a World War II hero; its second largest, for a World War II battle.” Watson’s proposed answer was Toronto, a city in Canada.34

Autor’s crack about Watson is somewhat mean-spirited. Even the most commanding tennis performance by Serena Williams includes some bad shots. We should consider Watson’s geographical confusion in a context that includes the human quiz contestant who, when asked “How many kings of England have been called Henry?” ventured “Well, I know Henry VIII. So, um, three?”35

Autor acknowledges the debates about the prospects for machine learning. He says “Some researchers expect that as computing power rises and training databases grow, the brute force machine learning approach will approach or exceed human capabilities. Others suspect that machine learning will only ever ‘get it right’ on average, while missing many of the most important and informative exceptions.”36 But Autor’s summation of machine learning favors the pessimists. He notes that many objects are defined by their purposes (a chair is something designed for humans to sit on) and that these purposes present “fundamental problems” for machine learners even if they have large stores of data from which to learn. Current machine learners struggle to distinguish chairs from objects with similar dimensions that no human would try to sit on. Autor presents his estimation of the magnitude of the challenge confronting machine learning by citing Carl Sagan’s observation: “If you wish to make an apple pie from scratch, you must first invent the universe.”37 Difficult!

It’s clear that today’s most powerful machine minds can do stupid things and that machine learning faces difficult challenges. The question is how these challenges feature in the long view of machine learning. Gordon and Autor approach this question in the wrong way. The psychologist Gerd Gigerenzer has written about the ways in which forecasts may fail because they are too informed. You can go wrong if you include facts relevant to past performance but of minimal relevance to the future. The resulting explanations tend to connect broad phenomena too closely to the specifics of past situations. Gigerenzer says that “in an uncertain world a complex strategy can fail exactly because it explains too much in hindsight. Only part of the information is valuable for the future, and the art of intuition is to focus on that part and ignore the rest.”38 We are better able to predict the future of driverless technology if we stand back from the detail of the very challenging problems currently faced by the engineers at Google and Tesla and instead focus on broad patterns in the evolution of driverless cars.

Focusing too much on the detail of currently unsolved challenges in machine learning is a bit like looking at the limitations of the Airco DH.4 biplane, a World War I bomber repurposed as a passenger plane after the war, noting some of the difficult challenges that confronted the aeronautical engineers of that time in improving reliability, safety, and carrying capacity, and concluding that the long-term prospects for commercial aviation were bleak.

No one should be surprised that there are unanswered questions about future technological developments. The driverless cars that exist in 2018 are prototypes for the cars that will go into production at some point in the future. There are currently unsolved technological problems that we can never be certain we will solve. It could be that there are as yet undiscovered physical laws that block solutions. We should acknowledge that a rational expectation of solutions is no logical guarantee. But we can nevertheless recognize these problems as belonging to a broader category of problems, many of which we have solved in the past. We can be confident that some unsolved problems will be solved so long as we keep trying to solve them. If we telescope down to the problems that cause today’s Google engineers to scratch their heads and seek to understand these problems in the way that they do, then we are likely to view them as very difficult indeed. Neither I, a philosopher, nor the economists Gordon and Autor have much to offer here. But the perspective that it is appropriate for us to take should make us confident of solutions. There is a big difference between challenged Google engineers and clueless ones. If you ask a Google engineer to design a time machine capable of preventing the assassination of John F. Kennedy, then you are likely to elicit a blank stare. You will get a different response from the engineers of Waymo, the company spun off from Google in 2016 to focus on driverless technology, to a request that they make progress on currently unsolved problems in the design of driverless cars. Waymo’s engineers are likely to be advancing and testing many conjectures about how to solve the problems separating today’s prototypes from tomorrow’s production models. There’s an important sense in which, to solve the problem, the engineers of driverless cars have only to keep on doing the kinds of things that they are currently doing. Nonspecialists are better judges of the future if they take a couple of steps back from the details of difficult problems and instead focus on trends. We should have the same kind of confidence that we have in the plumber who turns up and successfully unblocks a drain that would remain resistant to our efforts.

How AI Could Transform Health

Now consider the items “health” and “medicine” on Gordon’s list of the facets of human existence transformed by IR #2 but thus far comparatively untouched by IR #3. The personal genomics and biotechnology company 23andMe applies DNA sequencing technologies to the spittle of its customers. Those customers learn things about their propensities for certain diseases and facts about their ancestry. Much of the value of 23andMe lies in its accumulation of the genetic information of, as of April 2017, two million customers.39 The accumulation of this information in digital form permits powerful analytical tools to be applied to it. 23andMe passes on to its customers discoveries about DNA made elsewhere. But it is increasingly able to use its database to make its own discoveries. Among the discoveries listed on its website as 23andMe Research Discoveries are hitherto unknown genetic contributors to hypothyroidism and Parkinson’s disease.40 As the number of customers and the power of its analytical tools increase, the site promises an understanding of disease unachievable by other means. This new knowledge could be an indispensable guide to the application of other digital technologies that permit the modification of human genetic material. We significantly misjudge 23andMe if we think that its database of disease susceptibilities is merely a tool for more effective pharmaceutical advertising. 23andMe is about more than targeting asthma medication at people revealed to be genetically susceptible to the disease.

To see the potential for machine learning to improve human health, consider the speculations of the machine learning expert Pedro Domingos on how it might tackle the especially challenging problem of cancer. Domingos presents machine learning’s goal as the search for an ultimate master algorithm, where a master algorithm is “a general-purpose learner that you can in principle use to discover knowledge from data in any domain.”41 Machine learning seeks to build on strategies that humans use to learn. But machine learners are not content to merely replicate the performances of human learners. Domingos and other specialists in machine learning seek machines capable of discovering patterns invisible to unenhanced human intellects. Domingos allows that we do not yet possess the ultimate master algorithm. But he asserts that we are making progress toward it.

Cancer is an exceptionally challenging disease. The history of human engagement with it is a story of disappointed ambitions. Reflection on this depressing past and awareness of the disease’s complexity has prompted a sense of resignation among those who are best informed about it. In his award-winning “biography” of cancer, The Emperor of All Maladies, Siddhartha Mukherjee draws the following conclusion from our history of crushed hopes about final victory over the disease: “Cancer is a flaw in our growth, but this flaw is deeply entrenched in ourselves. We can rid ourselves of cancer, then, only as much as we can rid ourselves of the processes in our physiology that depend on growth—aging, regeneration, healing, reproduction.”42

Mukherjee’s resignation is premature. Perhaps our history of disappointment says more about the limitations of human intellects and imaginations than about the objective incurability of cancer. It took human intuition a long time to hypothesize and demonstrate a link between smoking and lung cancer. Cancer may be a problem we won’t solve by ourselves, but one that we can solve with some digital help. Domingos calls his machine learning solution to cancer “CanceRx.”43 CanceRx applies the techniques of machine learning in search of patterns in the vast quantities of data about cancer that we are beginning to accumulate. Human oncologists are compelled to approach this data by simplifying it. This simplification tends to limit them to only the most obvious patterns. It’s difficult for a human researcher to go far beyond the now-obvious observations that people who smoke are more likely to get lung cancer and that people who accumulate too much exposure to the sun are more likely to get skin cancer. These are big statistical effects. The full story of cancer requires an understanding of a profusion of influences whose effects are smaller than those of smoking or too much sun.

CanceRx “combines knowledge of molecular biology with vast amounts of data from DNA sequencers, microarrays, and many other sources.”44 We might add to this information about the lifestyles (diets, vocations, levels and manner of physical activity) of people who get cancer, don’t get cancer, or respond differently to treatments for cancer. CanceRx searches for patterns in the vastness of this data beyond the reach of human intellect and imagination. With each additional data point, CanceRx gains more power over cancer. Domingos says: “The model is continually evolving, incorporating the results of new experiments, data sources, and patient histories. Ultimately, it will know every pathway, regulatory mechanism, and chemical reaction in every type of human cell—the sum total of human molecular biology.”45 CanceRx is best viewed as a technological thought experiment. Domingos isn’t close to building it. He offers CanceRx as a conjecture about where progress in machine learning could take us.
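The kind of pattern CanceRx would hunt for can be gestured at with a small, entirely synthetic example. No real patient data is involved; the point is only that a learner can recover a risk signal spread thinly across hundreds of variables, none of which would stand out to unaided inspection.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_features = 5000, 200

# Synthetic "patients": each of 200 variables nudges disease risk only slightly.
X = rng.normal(size=(n_patients, n_features))
weak_effects = rng.normal(scale=0.1, size=n_features)
risk = 1 / (1 + np.exp(-(X @ weak_effects)))
y = rng.random(n_patients) < risk

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))   # well above the 0.5 accuracy of random guessing
```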

We shouldn’t expect to ever arrive at the dreamed-of “Cure for Cancer”—a miraculous pill that, once swallowed, cures any cancer from which you might currently be suffering and offers a guarantee of no future cancer. We can be confident that there will be no such pills, just as reflective contemporaries of the Spanish exploration of Central and South America in the 1500s might have expressed skepticism about the conquistadors finding a fountain of youth. Mukherjee’s point about cancer as a flaw in our growth, deeply entrenched in our basic biological natures, explains why. Any being that grows from a fertilized egg is subject to lethal glitches in the process that turns one cell into trillions. The evil genius of cancer lies in its mutability. Cancers constantly evolve responses to therapeutic countermeasures. If a drug targets a particular mechanism that a cancer uses to grow, then the cancer can find alternative ways to achieve this end. CanceRx opposes the evil genius of cancer with the superintelligence of the master algorithm. It extracts patterns from the totality of data about human experience of cancer. Suppose your cancer mutates in a way that CanceRx has not encountered before. CanceRx responds by applying its impressive learning abilities to come up with a response. Domingos says “Because every cancer is different, it takes machine learning to find the common patterns. And because a single tissue can yield billions of data points, it takes machine learning to figure out what to do for each new patient.”

CanceRx lies, at the time of writing, in the realm of technological conjecture. As with other enticing visions of the future, it comes with no guarantee. Domingos goes into marketing mode when he describes the universal learning algorithm as “one of the greatest scientific achievements of all time.” He goes on to say “In fact, the Master Algorithm is the last thing we’ll ever have to invent because, once we let it loose, it will go on to invent everything else that can be invented. All we need to do is provide it with enough of the right kind of data, and it will discover the corresponding knowledge.” It is important to understand that the master algorithm is not an all-or-nothing proposition. We can expect benefits that are very great even if not quite as stupendous as those described by Domingos. Perhaps CanceRx won’t treat all cancer. But the partial achievement of this goal would still be justly celebrated.

There are indicators of a desire and capacity to apply machine learning to human disease. In September 2016, Mark Zuckerberg and his wife Priscilla Chan announced an investment of US $3 billion over ten years, putting money where Domingos’s mouth is.46 Zuckerberg and Chan seek new technologies that will tackle not just cancer, but all diseases. In their announcement of the investment, Zuckerberg asked “Can we cure, prevent or manage all disease by the end of this century?” The venture would create a Biohub that would bring together scientists and engineers to develop new tools to prevent, treat, or cure diseases. A second focus would be on developing the new transformative medical technologies made possible by the Digital Revolution. Zuckerberg and Chan propose that advances in artificial intelligence could lead to the deployment of new brain imaging technologies against neurological diseases, the application of machine learning to the genetics of cancer, and the development of chips that quickly identify disease.

If AI could successfully treat and prevent cancer, then that would be enormously exciting. But its civilizational impact comes from its applicability to a wide range of areas of human endeavor. Thomas Newcomen originally conceived of the steam engine as a technology for pumping water out of mines. But the steam engine proved to be a protean technology that resisted relegation to the task of removing water from flooded mine shafts. In time, steam engines were installed in factories, trains, and ocean-going vessels. AI is similarly protean. It isn’t just a technology for driving cars or treating cancer. It should demonstrate its value anywhere there are patterns in nature that are useful for humans to know about but seem beyond the power of human intellects and imaginations.

Concluding Comments

This chapter takes the long view of the Digital Revolution. I propose that the Digital Revolution belongs with other acknowledged turning points in human history—the Neolithic and Industrial Revolutions. I counter the skepticism of the economist Robert Gordon, who uses the unlovely acronym IR #3 to indicate a low estimate of the human consequences of the networked computer. I propose that we can see in advances in artificial intelligence good grounds to reject his pessimism. The next chapter focuses on the transforming novelty of the Digital Revolution—artificial intelligence. I describe an important ambiguity in our understanding of what it means for a machine to be intelligent.

Notes