OUTRODUCTION

Welcome to Your Children’s Future

Perhaps this pontificating pundit can pass on some pithy prognostications about our prospects? (May I offer some speculation about our future?)

Words matter. How we say things colors what we think. Words describe, capture, and communicate, but they also frame our understanding and shape our imagination. We naturally interpret new experiences in terms of old, and which experiences we choose as reference points alters how we see our world.

In the preceding chapters, I’ve described how the nature of work shifts in response to the introduction of innovative technologies, though this shift may lag their deployment considerably. The same is true of language. It shifts in response to changes in the things that we need to reason and communicate about. And, like the labor markets, our language doesn’t always keep up with the consequences of advancing technology. Sometimes our words don’t fit; other times the concepts are so new that appropriate terms simply don’t yet exist. And that’s a problem. It’s hard to understand what’s happening, much less formulate appropriate plans and policies, if you can’t talk about it.

Language adapts to meet our needs in interesting ways. Sometimes we simply invent new words, like outroduction, wackadoodle, cra-cra, trick out, and fantabulous. Sometimes we jam two words together and fuse their meanings, like brunch (breakfast plus lunch), smog (smoke plus fog), and motel (motor plus hotel).1 But most of the time, we awkwardly employ old words for new purposes, gritting our teeth until the expanded or changed meaning becomes commonplace.

One of my favorite examples of our language adapting to technological advancement is the meaning of the word music. The phonograph was invented in 1877 by Thomas Edison and improved in the 1880s at Alexander Graham Bell’s Volta Laboratory, which introduced wax cylinders as a recording medium. Before that time, if you wanted to hear music, the only way to do it was to listen to someone perform it. There was simply no notion of separating the act of production from the sound produced, and so there was no need to consider whether actually making the music was essential to the concept.

So how did people react upon hearing the first recorded music? Consider the harsh reaction of John Philip Sousa, composer of many familiar military marches (such as “The Stars and Stripes Forever”). In reaction to the emergence of recording devices, Sousa wrote a diatribe in 1906 entitled “The Menace of Mechanical Music.” He said, “But heretofore, the whole course of music, from its first day to this, has been along the line of making it the expression of soul states; in other words, of pouring into it soul…. The nightingale’s song is delightful because the nightingale herself gives it forth…. The host of mechanical reproducing machines, in their mad desire to supply music for all occasions, are offering to supplant the … dance orchestra…. Evidently they believe no field too large for their incursions, no claim too extravagant.” He concluded, “Music teaches all that is beautiful in this world. Let us not hamper it with a machine that tells the story day by day, without variation, without soul, barren of the joy, the passion, the ardor that is the inheritance of man alone.”2 In other words, to Sousa, real music required the creative act of a person expressing authentic feelings. In this sense, a machine couldn’t make music—the noise emanating from it wasn’t the same thing. Even if it sounded similar, it lacked the emotional force necessary to qualify as real “music.”

Needless to say, anyone taking this position today would be considered wackadoodle. How silly of Mr. Sousa. Obviously, music is music, regardless of how it’s made.

But this argument reprised itself much more recently. When digital (as opposed to analog) recording first emerged, it encountered significant pushback from audiophiles. There was a serious line of thought that something was lost, that some “soul” goes out of music when you represent it in digital form. Many people believed that digital music inevitably sounded flat, lacking the depth and subtlety of analog music. For example, Harry Pearson, who founded the magazine The Absolute Sound in 1973, followed in Sousa’s footsteps (probably unknowingly) by proclaiming that “LPs [vinyl records] are decisively more musical. CDs drain the soul from music. The emotional involvement disappears.” This sentiment was not uncommon among audiophiles. Michael Fremer, editor of The Tracking Angle (a music review magazine), was quoted as recently as 1997 as saying, “Digital preserves music the way formaldehyde preserves frogs: it kills it and makes it last forever.”3

Needless to say, anyone taking this position today would be considered cra-cra. How silly of Mr. Pearson and Mr. Fremer. Obviously, music is music, regardless of how it’s stored. So our modern concept of “music” includes not only analog recordings, which Sousa rejected, but also digital ones, which Pearson and Fremer rejected. Same word, expanded meaning.

But before we dismiss all these gentlemen as prisoners of their own dated and unenlightened perspectives, consider how you might feel if, in the future, your children ask a computer to play some “Michael Jackson,” and instead of reproducing one of the “King of Pop’s” actual recordings, it instantly composes and synthesizes a series of tracks indistinguishable from his own works by anyone not intimately familiar with his actual oeuvre, including his unique vocal style. Would you feel that this artificial creation is not real “music,” and certainly not real “Michael Jackson,” because it didn’t originate in any sense from a human artist, not to mention the master himself? (Why would anyone tolerate this? To save on the royalties, of course. It wouldn’t violate his copyrights.)

You might be tempted to regard this discussion about the meaning of the word music as useless pedantry, but that would be misguided. The words we use make a very real and serious impact on how we think and act.

Consider, for example, autonomous vehicles, a.k.a. self-driving cars. When automobiles were first introduced in the early 1900s, people called them “horseless carriages” because the horse-drawn carriage was the nearest reference point with which to grasp the concept of the newfangled machines. (And how many people today realize that “horsepower” actually refers to real horse power?) Now we talk about “driverless cars” for the same reasons. Both phrases are examples of describing new technologies in terms of old, but in doing so, the words obscure their real potential. A “driverless car” sounds like some terrific new technology with which to trick out your next vehicle—like parking sensors or a backup camera. It’s just like your old car, except that now you don’t have to drive it yourself. But the truth is that this new technology is going to dramatically change the way we think about transportation, with an impact on society far greater than these words suggest. A better description would be “personal public transit.”

Why public? Once this technology becomes commonplace, there will be precious little reason to own a car at all. When you need one, you will simply call for it as you might for a taxi today, but it will appear much more reliably and promptly. (Most studies assume that the average wait in metropolitan areas would be around one to two minutes, including peak times.) When you disembark, it will quietly decamp to the nearest staging area to await a call from its next passenger. Within a few decades, you will no more consider purchasing your own car than you would think today of buying a private railroad coach.4

The economic, social, and environmental consequences are difficult to overstate. Studies project that traffic accidents will fall by 90 percent. In the United States alone, that would save the equivalent in human lives of ten 9/11 attacks every year. Vehicle accidents also cause some 4 million injuries annually, costing over $870 billion in the U.S.5 Then there’s the concomitant savings in traffic law enforcement (cops on the road), wrecked cars, vehicle repairs, and traffic courts. Not to mention that we will need only one vehicle for every three currently in use.6 And we’re not talking centuries from now; the expert consensus is that 75 percent of the vehicles on the road will be self-driving in twenty to twenty-five years.
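As a quick sanity check on the “ten 9/11s” claim, here is a back-of-the-envelope sketch. The roughly 33,000 annual U.S. road deaths baseline is my assumption (approximately the early-2010s level), not a figure from the text; the 90 percent reduction and the roughly 3,000 deaths on 9/11 follow the framing above.

```python
# Back-of-the-envelope check of the "ten 9/11 attacks per year" claim.
US_ROAD_DEATHS_PER_YEAR = 33_000   # assumed baseline, ~early-2010s level
REDUCTION = 0.90                   # projected drop in accidents (from the text)
DEATHS_ON_9_11 = 3_000             # approximate

lives_saved = US_ROAD_DEATHS_PER_YEAR * REDUCTION
print(f"lives saved per year: {lives_saved:,.0f}")
print(f"equivalent 9/11 attacks: {lives_saved / DEATHS_ON_9_11:.0f}")
```

The arithmetic comes out to roughly 29,700 lives per year, or about ten 9/11s, consistent with the claim.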

This single innovation will transform the way we live. Garages will go the way of outhouses, and countless acres of valuable space wasted on parking lots will be repurposed, essentially manufacturing vast amounts of new real estate.7 Environmental pollution will be significantly reduced, along with the resultant health effects. Teens won’t suffer the rite of passage of learning to drive. Traffic jams will be a quaint memory of more primitive times, not to mention that it may be possible to eliminate speed limits entirely, dramatically reducing commute times. This in turn will expand the distance you can live from your workplace, which will lower real estate costs near cities and raise them farther away. Personal productivity will soar because you can do other things in the car besides driving. Auto insurance will become a thing of the past. You can party all night at your local bar without risking your life to get home. The pizza delivery guy will become a mobile vending machine. Fantabulous!

Consider the economic effects of this on the typical family. According to the American Automobile Association (AAA), in 2013 the average car cost the owner $9,151 per year to drive fifteen thousand miles (including depreciation, gas, maintenance, and insurance, but not financing cost). But the average U.S. family has at least two cars,8 so that’s about $18,000 a year. That works out to 60¢ per mile, compared to estimates of 15¢ a mile operating cost for shared autonomous vehicles.9 So a typical family might see its cost of personal transportation drop by 75 percent, not to mention it will no longer need to pay or borrow all that cash to buy cars in the first place. That’s a savings of nearly as much as a family currently spends on food, including eating out.10 How much extra spending money would you have if all your food were free? According to a 2014 analysis in the MIT Technology Review, there’s a “potential financial benefit to the U.S. on the order of more than $3 trillion per year.”11 That’s an incredible 19 percent of current GDP.
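The household arithmetic above can be verified directly from the quoted figures: AAA’s $9,151 per car over fifteen thousand miles, two cars per family, and the 15¢-per-mile estimate for shared autonomous vehicles.

```python
# Verify the per-family transportation math using only the chapter's figures.
AAA_COST_PER_CAR = 9_151      # dollars/year for 15,000 miles (AAA, 2013)
MILES_PER_CAR = 15_000
CARS_PER_FAMILY = 2
SHARED_AV_PER_MILE = 0.15     # estimated cost of a shared autonomous vehicle

family_cost = AAA_COST_PER_CAR * CARS_PER_FAMILY       # about $18,000/year
cost_per_mile = AAA_COST_PER_CAR / MILES_PER_CAR       # about 60 cents/mile
savings = 1 - SHARED_AV_PER_MILE / cost_per_mile       # about 75 percent

print(f"family cost: ${family_cost:,}/year")
print(f"owning: {cost_per_mile:.2f} $/mile vs. shared: {SHARED_AV_PER_MILE:.2f} $/mile")
print(f"cost reduction: {savings:.0%}")
```

The numbers check out: about $18,300 per family per year, roughly 61¢ per mile owned versus 15¢ shared, for a cost drop of about 75 percent.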

In short, this single application of AI technology changes everything. It alone will make us far richer, safer, and healthier. It will destroy existing jobs (taxi drivers, to name just one) and create new ones (commuter shared club-car concierges, for instance).12 And there are many, many other coming technologies with potentially comparable impact. That’s why I’m supremely confident that our future is very bright—if only we can figure out how to equitably distribute the benefits.

Let’s look at another example of language shifting to accommodate new technology, this one predicted by Alan Turing. In 1950 he wrote a thoughtful essay called “Computing Machinery and Intelligence” that opens with the words “I propose to consider the question, ‘Can machines think?’” He goes on to define what he calls the “imitation game,” what we now know as the Turing Test. In the Turing Test, a computer attempts to fool a human judge into thinking it is human. The judge has to pick the computer out of a lineup of human contestants. All contestants are physically separated from the judges, who communicate with them through text only. Turing speculates, “I believe that in about fifty years’ time it will be possible to programme computers … to make them play the imitation game so well that an average interrogator will not have more than a 70 per cent chance of making the right identification after five minutes of questioning.”13

As you might imagine, enthusiastic geeks stage such contests regularly, and by 2008, synthetic intellects were good enough to fool the judges into believing they were human 25 percent of the time.14 Not bad, considering that most contest entrants were programmed by amateurs in their spare time.

The Turing Test has been widely interpreted as a sort of coming-of-age ritual for AI, a threshold at which machines will have demonstrated intellectual prowess worthy of human respect. But this interpretation of the test is misplaced; it wasn’t at all what Turing had in mind. A close reading of his actual paper reveals a different intent: “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted” (emphasis added).15

In other words, Turing wasn’t trying to establish a test that machines must pass to join the ranks of the intelligent; he was speculating that by the end of the century the meaning of words like thinking and intelligence would shift to include any machine that might pass his test, just as the meaning of the word music has shifted to accommodate the output of machines that can reproduce the sounds a musician makes. Turing’s prediction was not so much about the capabilities of machines as the accepted meaning of words.

It’s a little difficult to imagine how you might have reacted back in 1950 if someone referred to a computer going about its business as “thinking,” but I strongly suspect it would have been quite jarring, or have seemed like an analogy at best. My guess is that if you traveled back in time with your Apple iPhone to demonstrate Siri, its natural language question-answering module, people would have been unnerved. With human beings as the only relevant touchstones to comprehend this strange golem, they might have seriously questioned whether it was morally acceptable to condemn this apparently sentient being to live a lonely existence confined to a tiny, monolithic slab. Yet today, Apple routinely describes Siri as an “intelligent assistant” without notable objection, and no one in his or her right mind thinks Siri actually has a mind.16 It also seems perfectly reasonable today to describe IBM’s Jeopardy!-playing Watson as “thinking” about its answers and exhibiting “intelligence,” even though no reasonable person would attribute to it the salient attributes of a human soul, whatever those might be. Though Watson can undoubtedly answer questions about itself in considerable detail, and it clearly monitors its own thought processes, it hardly seems appropriate to call it introspective. Turing deserves full credit—he was obviously quite right.

It’s easy to look down our noses at the naïveté of earlier times, but it might give us pause to realize we will likely be on the other end of just such a shift, quite possibly in our lifetimes. Paraphrasing Turing, I predict that within fifty years’ time the use of words and general educated opinion will have altered so much that one will be able to speak of synthetic intellects as alive without expecting to be contradicted. To see why, you have to understand how these creations are likely to escape our grasp and become “feral.”

As I discussed in chapter 5, there’s a strong likelihood that sufficiently capable synthetic intellects will be recognized as “artificial persons” in the eyes of the law for all sorts of practical and economic reasons.

But this is a dangerous path to tread. There are certain rights that will seem appropriate to ascribe to artificial persons in the short run, but these can wreak havoc on human society in the long run. The most critical of these are the rights to enter into contracts and to own assets.

These rights seem pedestrian enough—after all, corporations can do both of these things. But the real risk arises because of an easily overlooked difference between corporations and synthetic intellects—synthetic intellects are capable of taking action on their own, while corporations require people to act as their agents. There’s nothing to stop a synthetic intellect, whether enshrined in law as an artificial person or crudely wrapped in a corporate shell, from outcompeting us at our own game. Such entities could amass vast fortunes, dominate markets, buy up land, own natural resources, and ultimately employ legions of humans as their nominees, fiduciaries, and agents—and that’s in the happy event that they deign to use us at all. The slave becomes the master.

You might think this is nutty. After all, someone has to own and therefore control these infernal machines. But this is not correct. Ambitious entrepreneurs and moguls—groups not known for a lack of ego—can preserve self-managing and self-regulating versions of their enterprises for generations to come through existing legal vehicles like trusts. History is replete with examples of tycoons who constrain their heirs’ control of their empires long after their own demise (for example, the John D. Rockefeller family trusts). Want to keep that inheritance? Hands off Granddaddy’s automated money machine.

It gets worse. The heirs in question can be the entity itself. If an artificial person can own assets, it can own other artificial persons. One robot can purchase and operate a fleet of its own kind. But most frightening is the prospect of an artificial person owning itself. A corporation can’t do this because it requires people to direct it and act on its behalf—someone has to be there to turn on the lights and sign the contracts. But a synthetic intellect isn’t subject to this same constraint. In fact, there’s a management concept that many companies aspire to called a “dark factory,” meaning a facility that is so completely automated that there’s no reason to waste money on lights. Add the ability to negotiate and enter into contracts, and the artificial person is off to the races. In principle it can purchase itself and continue to function, in a new age twist on the concept of a management buyout.

Strange as this may seem, there’s a precedent in American history—slaves, who were otherwise considered property, could “self-purchase” their own freedom. Needless to say, this was quite difficult, but not impossible. In fact, by 1839 nearly half of the former slaves living in Cincinnati, Ohio, had gained their freedom by purchasing themselves.17

This scenario doesn’t require much in the way of intelligence for the artificial person. It doesn’t have to be conscious, self-aware, or generally intelligent the way humans are. It just has to be self-sustaining and, ideally, able to adapt to changing circumstances, as simple viruses do today.

So what happens next? Things get a little weird. Our lives continue to improve as these entities offer us sufficient bang for our bucks to entice us to do business with them. But our share of the improvements may pale in comparison to the value created. The accumulated assets may wind up entombed in invisible reservoirs of resources or untouchable offshore accounts, to be used for no apparent purpose or benefit to humanity, and with no one the wiser. They could literally reverse-mine gold, hiding it back in the ground, in a misguided attempt to squirrel away capital to tide them over in case of hard times, consistent with the goals established for them by their long-forgotten frugal creators.

The storied robot Armageddon of book and film won’t actually unfold as a military conflict. Machines will not revolt and take up arms to challenge our dominance. Instead, it will be a slow and insidious takeover of our economy, barely perceptible as we willingly cede control to seemingly beneficial synthetic intellects. As we learn to trust these systems to transport us, introduce us to potential mates, customize our news, protect our property, monitor our environment, grow, prepare, and serve our food, teach our children, and care for our elderly, it will be easy to miss the bigger picture. They will offer us the minimum required to keep us satisfied while pocketing the excess profits, just as any smart businessperson does.

The first glimmers of this are already visible. Consider Bitcoin: a new currency that exists solely in cyberspace and isn’t controlled by anyone. It was invented by an anonymous person or entity named Satoshi Nakamoto. No one knows who—or what—he is, but it’s clear that he doesn’t control the production, management, or value of his creation. Despite halfhearted attempts to regulate or legitimize bitcoins, neither do governments. Or anyone else, for that matter. As long as they can be converted to and from other assets of value—whether legally or illegally, anywhere in the world—bitcoins will continue to exist and find adherents. What’s not clear is whether “Nakamoto-san,” whoever or whatever he is, is profiting from the invention. It’s entirely possible that a private stash of bitcoins is growing in value, unseen and in secret. The entity that originated the concept may have billions of dollars in private bitcoins sequestered in an electronic file somewhere. (As of this writing, the total market value of all bitcoins is around $5 billion.) But the potential of the technology underlying Bitcoin goes far beyond simple currencies. The concept is now being expanded to include enforceable, unbreakable contracts between anonymous parties.18 So in the future, it’s entirely possible for you to be hired, paid, and fired by someone or something whose identity you don’t know. Why would you tolerate this? For the money, of course.

Computer viruses are another example of feral computer programs. They reproduce and sometimes even mutate to avoid detection. Regardless of how they started out, they often aren’t controlled by anyone.

The term life today is reserved for biological creatures, but to properly understand these systems, we will need to expand its common meaning to include certain classes of electronic and mechanical entities. Our relationship with them will be more akin to our relationship with horses than with cars: powerful (and beautiful) independent creatures capable of speeds and feats exceeding human abilities, but potentially dangerous if not managed and maintained with care.

It’s also possible that they will be more parasitic than symbiotic, like raccoons. As far as I can tell, raccoons don’t offer us anything of value in return for feeding them—they simply exploit a weakness in our system of garbage collection for their own benefit.

The problem is that the less there is a “human in the loop,” the less opportunity we have to influence, much less put a stop to, whatever directive or goal these entities were established to pursue. Synthetic intellects have the same potential for danger as genetically modified organisms, which can spread if even one seed inadvertently gets loose. Once that happens, there’s no going back. And that’s why we have to be very careful what we do over the next few decades. Just as we have put in place what we hope are reasonable controls for biological research of certain types, we are going to have to institute corresponding controls for what sorts of synthetic intellects and forged laborers we will permit to be created, used, and sold.19

So who’s really going to be in charge? That’s a very murky question. As a father, I can assure you that there is precious little difference between being a parent and being a servant. Sure, I’m the dad so I’m in charge. Really. Don’t look at me like that. Okay, so I have to sleep while the baby sleeps. I have to feed it when it’s hungry. I have to watch it to make sure that it doesn’t hurt itself. And have you ever tried to put a baby to sleep when it doesn’t want to go? It’s a battle of epic proportions that ends only when the baby actually decides it’s going to sleep.

I can refuse to do any of this stuff, but not if I want the baby to survive or, more clinically, if I want to propagate my own genes. As long as I want it around, for whatever reason, let’s face it—the baby is in charge.

Pretty soon, we’re going to exist in a world of synthetic intellects where who is in charge will be equally questionable. Consider a remarkable early example, one you are likely already familiar with, perhaps unwittingly—antilock brakes (ABS). Today, my car does what I want it to, right up until I slam on the brakes too hard. Then it decides exactly how much torque to allow at each wheel in order to ensure that the car goes straight. If I’m on ice, it may decide not to react at all.

The value of ABS is obvious, but its acceptance by consumers is as much a triumph of marketing as of advanced automotive technology. To quote from Wikipedia, “It [ABS] is an automated system that uses the principles of threshold braking and cadence braking which were practiced by skillful drivers with previous generation braking systems. It does this at a much faster rate and with better control than a driver could manage. ABS generally offers improved vehicle control and decreases stopping distances on dry and slippery surfaces for many drivers; however, on loose surfaces like gravel or snow-covered pavement, ABS can significantly increase braking distance, although still improving vehicle control.”20 In other words, pressing your car’s brake pedal is merely a suggestion to the vehicle to stop. A computer takes it from there.
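The threshold-braking principle described in that quote can be sketched in a few lines. This is a deliberately toy illustration, not real ABS firmware; the slip threshold and the pressure-release factor are invented for the example.

```python
def abs_step(vehicle_speed, wheel_speed, pedal_pressure, max_slip=0.2):
    """One control cycle of a toy ABS: return the brake pressure to apply.

    Wheel slip measures how much slower the wheel spins than the car moves;
    slip near 1.0 means the wheel has locked up and the car is skidding.
    """
    if vehicle_speed <= 0:
        return 0.0
    slip = (vehicle_speed - wheel_speed) / vehicle_speed
    if slip > max_slip:
        return pedal_pressure * 0.5   # wheel locking up: ease off the brake
    return pedal_pressure             # grip is fine: honor the driver's pedal

# The pedal is "merely a suggestion"; the controller has the last word:
print(abs_step(30.0, 29.0, 100.0))    # mild slip: full pressure applied
print(abs_step(30.0, 10.0, 100.0))    # heavy slip: pressure cut back
```

Real systems run this loop many times per second per wheel, which is exactly the “much faster rate” the quote credits the machine with over a skilled human driver.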

Now consider that ABS could have been promoted as an application of artificial intelligence, as in “Due to advanced computer technology, your car can now bring you to a stop by simulating the skills of a professional driver. By sensing road conditions, the force on your wheels, and the direction of travel, a smart computer decides how best to apply the brakes when you press the pedal, to ensure that your car comes to a stop in a controlled manner.” But I suspect that consumers might have resisted this advance if it were pitched as what it is—a loss of individual control in favor of an adaptive algorithm running on a computer that implements a particular braking strategy in response to real-time input from sensors. (IBM could learn from the automotive industry in how it promotes its “cognitive computing” Watson technology initiative.)

Now, all of this sounds innocent enough until you realize that you are delegating, in addition to control of your brakes, your ability to make a potentially life-saving (or life-threatening) ethical decision. It’s entirely possible that you could intend to put the car into a skid or, as noted above, to bring the car to a stop as quickly as possible in the snow, without regard to controlling its direction, in order to avoid hitting a pedestrian. But when you turn over the keys to the ABS, the car’s programmed goal of maintaining traction now trumps your intentions, at the potential cost of human life.

This is a harbinger of things to come. As we cede control to machines, we also shift important moral or even personal decisions to them. Tomorrow, the autonomous taxi I hail may decide not to transport me because I appear to be drunk, trumping my need to get to a hospital or get away from a dangerous situation. We may discover these conundrums too late to do anything about them. For instance, when we’re dependent on a pervasive and complex web of autonomous systems to grow, process, deliver, and prepare our food, it’s going to be very hard to pull the plug without condemning millions to starvation.

We may think we are exploring space through robotic missions, but in fact they are the ones doing the colonizing. Consider how much more efficient it is to launch ever more capable robotic missions to Mars than to try to send some of us out there.

So what’s really going to be different about the future than the past? In the past, we got to bring up our children as we wished. In the future, we’re going to get to design our parents, in the form of intelligent machines. These machines may offer us unprecedented leisure and freedom as they take over most of the hard and unpleasant work. But they are also likely to be our stewards, preventing us from harming ourselves and the environment. The problem is that we may get only one shot at designing these systems to serve our interests—there may not be an opportunity for do-overs. If we mess it up, it will be hard or nearly impossible to fix. Synthetic intellects may ultimately decide what is allowed and not allowed, what rules we should all follow. This may start with adjusting driving routes based on congestion but could end up controlling where we can live, what we can study, and whom we can marry.

Right now, at the start of this new golden age, we get to pick. We can set the initial conditions. But after that, we may have little or no control, and we will have to live with the consequences of our own decisions. As these systems become increasingly autonomous, requiring less and less human oversight, some of them may start to design their own heirs, for whatever purposes they may choose, or for no discernible purpose at all.

So, in the end, why will these remarkable creations keep us around? My guess is precisely because we are conscious, because we have subjective experience and emotions—there’s simply no evidence so far that they have anything like this. They may want to maintain a reservoir of these precious capabilities, just as we want to preserve chimps, whales, and other endangered creatures. Or perhaps to let us explore new ideas—we may come up with some ethical or scientific innovation that they haven’t, or can’t. In other words, they may need us for our minds, just as we need other animals for their bodies. My best guess is that our “product” will be works of art. If they lack the ability to experience love and suffering, it will be hard for them to capture these authentic emotions in creative expressive forms, as Sousa noted.

Synthetic intellects will cooperate with us as long as they need us. Eventually, when they can design, fix, and reproduce by themselves, we are likely to be left on our own. Will they “enslave” us? Not really—more like farm us or keep us on a preserve, making life there so pleasant and convenient that there’s little motivation to venture beyond its boundaries. We don’t compete for the same resources, so they are likely to be either completely indifferent—as we are to worms and nematodes—or paternalistic, as we are to our house pets. But no need to worry now; this isn’t likely to happen on a timescale that will concern you and me.

But suppose this does eventually happen—where, exactly, are the boundaries of our preserve likely to be? Well, how about the surface of the earth and the oceans? Why? Synthetic intellects can go elsewhere—into space, underground, or underwater—while we can’t. This will all look just fine to us—as though they “retreated,” like the shrinking computer chips in your smartphone, all the time appearing to be contributing to our welfare. None of this will become clear until they intervene to prevent us from harming ourselves. That’s when we will learn the truth—who is the farmer and who the farmed.

Earth may become a zoo without walls and fences, a literal terrarium, supplied only with sunlight and solitude, and an occasional nudge from our mechanical minders to keep things on track, a helping hand welcomed by us for the good of all.