CHAPTER EIGHT

YOUR INNER DRONE

IT’S A COLD, misty Friday night in mid-December and you’re driving home from your office holiday party. Actually, you’re being driven home. You recently bought your first autonomous car—a Google-programmed, Mercedes-built eSmart electric sedan—and the software is at the wheel. You can see from the glare of your self-adjusting LED headlights that the street is icy in spots, and you know, thanks to the continuously updated dashboard display, that the car is adjusting its speed and traction settings accordingly. All’s going smoothly. You relax and let your mind drift back to the evening’s stilted festivities. But as you pass through a densely wooded stretch of road, just a few hundred yards from your driveway, an animal darts into the street and freezes, directly in the path of the car. It’s your neighbor’s beagle, you realize—the one that’s always getting loose.

What does your robot driver do? Does it slam on the brakes, in hopes of saving the dog but at the risk of sending the car into an uncontrolled skid? Or does it keep its virtual foot off the brake, sacrificing the beagle to ensure that you and your vehicle stay out of harm’s way? How does it sort through and weigh the variables and probabilities to arrive at a split-second decision? If its algorithms calculate that hitting the brakes would give the dog a 53 percent chance of survival but would entail an 18 percent chance of damaging the car and a 4 percent chance of causing injury to you, does it conclude that trying to save the animal would be the right thing to do? How does the software, working on its own, translate a set of numbers into a decision that has both practical and moral consequences?
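The weighing of probabilities described above can be caricatured, purely as an illustration, as an expected-utility comparison. Every number below — the probabilities, and especially the utility weights attached to a dog's life, a dented fender, and a passenger's injury — is invented for the example; the point is that some such numbers must be chosen by someone before the car can "decide" anything.

```python
# Illustrative sketch only: weighing two actions by expected utility.
# All probabilities and utility weights are invented for the example.

def expected_utility(outcomes):
    """Sum of probability * utility over the possible outcomes of an action."""
    return sum(p * u for p, u in outcomes)

# Action 1: brake hard. 53% chance the dog survives, 18% chance of
# damage to the car, 4% chance of injury to the passenger.
brake = expected_utility([
    (0.53, +40),   # dog saved
    (0.18, -10),   # car damaged
    (0.04, -500),  # passenger injured
])

# Action 2: don't brake. The dog is almost certainly killed;
# the passenger and the car stay safe.
no_brake = expected_utility([
    (0.99, -40),   # dog killed
])

decision = "brake" if brake > no_brake else "don't brake"
```

With these made-up weights, braking narrowly wins; nudge the value placed on the passenger's safety upward and the arithmetic reverses. The moral judgment lives entirely in the weights, not in the calculation.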

What if the animal in the road isn’t your neighbor’s pet but your own? What, for that matter, if it isn’t a dog but a child? Imagine you’re on your morning commute, scrolling through your overnight emails as your self-driving car crosses a bridge, its speed precisely synced to the forty-mile-per-hour limit. A group of schoolchildren is also heading over the bridge, on the pedestrian walkway that runs alongside your lane. The kids, watched by adults, seem orderly and well behaved. There’s no sign of trouble, but your car slows slightly, its computer preferring to err on the side of safety. Suddenly, there’s a tussle, and a little boy is pushed into the road. Busily tapping out a message on your smartphone, you’re oblivious to what’s happening. Your car has to make the decision: either it swerves out of its lane and goes off the opposite side of the bridge, possibly killing you, or it hits the child. What does the software instruct the steering wheel to do? Would the program make a different choice if it knew that one of your own children was riding with you, strapped into a sensor-equipped car seat in the back? What if there was an oncoming vehicle in the other lane? What if that vehicle was a school bus? Isaac Asimov’s first law of robot ethics—“a robot may not injure a human being, or, through inaction, allow a human being to come to harm”1—sounds reasonable and reassuring, but it assumes a world far simpler than our own.

The arrival of autonomous vehicles, says Gary Marcus, the NYU psychology professor, would do more than “signal the end of one more human niche.” It would mark the start of a new era in which machines will have to have “ethical systems.”2 Some would argue that we’re already there. In small but ominous ways, we have started handing off moral decisions to computers. Consider Roomba, the much-publicized robotic vacuum cleaner. Roomba makes no distinction between a dust bunny and an insect. It gobbles both, indiscriminately. If a cricket crosses its path, the cricket gets sucked to its death. A lot of people, when vacuuming, will also run over the cricket. They place no value on a bug’s life, at least not when the bug is an intruder in their home. But other people will stop what they’re doing, pick up the cricket, carry it to the door, and set it loose. (Followers of Jainism, the ancient Indian religion, consider it a sin to harm any living thing; they take great care not to kill or hurt insects.) When we set Roomba loose on a carpet, we cede to it the power to make moral choices on our behalf. Robotic lawn mowers, like Lawn-Bott and Automower, routinely deal death to higher forms of life, including reptiles, amphibians, and small mammals. Most people, when they see a toad or a field mouse ahead of them as they cut their grass, will make a conscious decision to spare the animal, and if they should run it over by accident, they’ll feel bad about it. A robotic lawn mower kills without compunction.

Up to now, discussions about the morals of robots and other machines have been largely theoretical, the stuff of science-fiction stories or thought experiments in philosophy classes. Ethical considerations have often influenced the design of tools—guns have safeties, motors have governors, search engines have filters—but machines haven’t been required to have consciences. They haven’t had to adjust their own operation in real time to account for the ethical vagaries of a situation. Whenever questions about the moral use of a technology arose in the past, people would step in to sort things out. That won’t always be feasible in the future. As robots and computers become more adept at sensing the world and acting autonomously in it, they’ll inevitably face situations in which there’s no one right choice. They’ll have to make vexing decisions on their own. It’s impossible to automate complex human activities without also automating moral choices.

Human beings are anything but flawless when it comes to ethical judgments. We frequently do the wrong thing, sometimes out of confusion or heedlessness, sometimes deliberately. That’s led some to argue that the speed with which robots can sort through options, estimate probabilities, and weigh consequences will allow them to make more rational choices than people are capable of making when immediate action is called for. There’s truth in that view. In certain circumstances, particularly those where only money or property is at stake, a swift calculation of probabilities may be sufficient to determine the action that will lead to the optimal outcome. Some human drivers will try to speed through a traffic light that’s just turning red, even though it ups the odds of an accident. A computer would never act so rashly. But most moral dilemmas aren’t so tractable. Try to solve them mathematically, and you arrive at a more fundamental question: Who determines what the “optimal” or “rational” choice is in a morally ambiguous situation? Who gets to program the robot’s conscience? Is it the robot’s manufacturer? The robot’s owner? The software coders? Politicians? Government regulators? Philosophers? An insurance underwriter?

There is no perfect moral algorithm, no way to reduce ethics to a set of rules that everyone will agree on. Philosophers have tried to do that for centuries, and they’ve failed. Even coldly utilitarian calculations are subjective; their outcome hinges on the values and interests of the decision maker. The rational choice for your car’s insurer—the dog dies—might not be the choice you’d make, either deliberately or reflexively, when you’re about to run over a neighbor’s pet. “In an age of robots,” observes the political scientist Charles Rubin, “we will be as ever before—or perhaps as never before—stuck with morality.”3

Still, the algorithms will need to be written. The idea that we can calculate our way out of moral dilemmas may be simplistic, or even repellent, but that doesn’t change the fact that robots and software agents are going to have to calculate their way out of moral dilemmas. Unless and until artificial intelligence attains some semblance of consciousness and is able to feel or at least simulate emotions like affection and regret, no other course will be open to our calculating kin. We may rue the fact that we’ve succeeded in giving automatons the ability to take moral action before we’ve figured out how to give them moral sense, but regret doesn’t let us off the hook. The age of ethical systems is upon us. If autonomous machines are to be set loose in the world, moral codes will have to be translated, however imperfectly, into software codes.

image
* * *

HERE’S ANOTHER scenario. You’re an army colonel who’s commanding a battalion of human and mechanical soldiers. You have a platoon of computer-controlled “sniper robots” stationed on street corners and rooftops throughout a city that your forces are defending against a guerrilla attack. One of the robots spots, with its laser-vision sight, a man in civilian clothes holding a cell phone. He’s acting in a way that experience would suggest is suspicious. The robot, drawing on a thorough analysis of the immediate situation and a rich database documenting past patterns of behavior, instantly calculates that there’s a 68 percent chance the person is an insurgent preparing to detonate a bomb and a 32 percent chance he’s an innocent bystander. At that moment, a personnel carrier is rolling down the street with a dozen of your human soldiers on board. If there is a bomb, it could be detonated at any moment. War has no pause button. Human judgment can’t be brought to bear. The robot has to act. What does its software order its gun to do: shoot or hold fire?
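The robot's dilemma reduces, in software terms, to a single comparison against a confidence threshold. The sketch below is hypothetical: the 68 percent estimate comes from the scenario, but the threshold value is invented, and that is precisely where the moral judgment hides.

```python
# Illustrative sketch only: a fixed-threshold firing rule of the kind
# the scenario implies. The threshold value is an invented assumption.

P_INSURGENT = 0.68      # the robot's estimate that the man is a threat
FIRE_THRESHOLD = 0.95   # hypothetical confidence required before shooting

def decide(p_threat, threshold):
    """Return the action a threshold rule dictates for a threat estimate."""
    return "fire" if p_threat >= threshold else "hold fire"

action = decide(P_INSURGENT, FIRE_THRESHOLD)
```

Set the threshold at 0.95 and the robot holds fire, accepting the risk to the personnel carrier; set it at 0.5 and it shoots, accepting a one-in-three chance of killing a bystander. The constant is chosen in advance, by a programmer or a policy maker, not by the machine in the moment.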

If we, as civilians, have yet to grapple with the ethical implications of self-driving cars and other autonomous robots, the situation is very different in the military. For years, defense departments and military academies have been studying the methods and consequences of handing authority for life-and-death decisions to battlefield machines. Missile and bomb strikes by unmanned drone aircraft, such as the Predator and the Reaper, are already commonplace, and they’ve been the subject of heated debates. Both sides make good arguments. Proponents note that drones keep soldiers and airmen out of harm’s way and, through the precision of their attacks, reduce the casualties and damage that accompany traditional combat and bombardment. Opponents see the strikes as state-sponsored assassinations. They point out that the explosions frequently kill or wound, not to mention terrify, civilians. Drone strikes, though, aren’t automated; they’re remote-controlled. The planes may fly themselves and perform surveillance functions on their own, but decisions to fire their weapons are made by soldiers sitting at computers and monitoring live video feeds, operating under strict orders from their superiors. As currently deployed, missile-carrying drones aren’t all that different from cruise missiles and other weapons. A person still pulls the trigger.

The big change will come when a computer starts pulling the trigger. Fully automated, computer-controlled killing machines—what the military calls lethal autonomous robots, or LARs—are technologically feasible today, and have been for quite some time. Environmental sensors can scan a battlefield with high-definition precision, automatic firing mechanisms are in wide use, and codes to control the shooting of a gun or the launch of a missile aren’t hard to write. To a computer, a decision to fire a weapon isn’t really any different from a decision to trade a stock or direct an email message into a spam folder. An algorithm is an algorithm.

In 2013, Christof Heyns, a South African legal scholar who serves as special rapporteur on extrajudicial, summary, and arbitrary executions to the United Nations General Assembly, issued a report on the status of and prospects for military robots.4 Clinical and measured, it made for chilling reading. “Governments with the ability to produce LARs,” Heyns wrote, “indicate that their use during armed conflict or elsewhere is not currently envisioned.” But the history of weaponry, he went on, suggests we shouldn’t put much stock in these assurances: “It should be recalled that aeroplanes and drones were first used in armed conflict for surveillance purposes only, and offensive use was ruled out because of the anticipated adverse consequences. Subsequent experience shows that when technology that provides a perceived advantage over an adversary is available, initial intentions are often cast aside.” Once a new type of weaponry is deployed, moreover, an arms race almost always ensues. At that point, “the power of vested interests may preclude efforts at appropriate control.”

War is in many ways more cut-and-dried than civilian life. There are rules of engagement, chains of command, well-demarcated sides. Killing is not only acceptable but encouraged. Yet even in war the programming of morality raises problems that have no solution—or at least can’t be solved without setting a lot of moral considerations aside. In 2008, the U.S. Navy commissioned the Ethics and Emerging Sciences Group at California Polytechnic State University to prepare a white paper reviewing the ethical issues raised by LARs and laying out possible approaches to “constructing ethical autonomous robots” for military use. The ethicists reported that there are two basic ways to program a robot’s computer to make moral decisions: top-down and bottom-up. In the top-down approach, all the rules governing the robot’s decisions are programmed ahead of time, and the robot simply obeys the rules “without change or flexibility.” That sounds straightforward, but it’s not, as Asimov discovered when he tried to formulate his system of robot ethics. There’s no way to anticipate all the circumstances a robot may encounter. The “rigidity” of top-down programming can backfire, the scholars wrote, “when events and situations unforeseen or insufficiently imagined by the programmers occur, causing the robot to perform badly or simply do horrible things, precisely because it is rule-bound.”5

In the bottom-up approach, the robot is programmed with a few rudimentary rules and then sent out into the world. It uses machine-learning techniques to develop its own moral code, adapting it to new situations as they arise. “Like a child, a robot is placed into variegated situations and is expected to learn through trial and error (and feedback) what is and is not appropriate to do.” The more dilemmas it faces, the more fine-tuned its moral judgment becomes. But the bottom-up approach presents even thornier problems. First, it’s impracticable; we have yet to invent machine-learning algorithms subtle and robust enough for moral decision making. Second, there’s no room for trial and error in life-and-death situations; the approach itself would be immoral. Third, there’s no guarantee that the morality a computer develops would reflect or be in harmony with human morality. Set loose on a battlefield with a machine gun and a set of machine-learning algorithms, a robot might go rogue.
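The two approaches the ethicists contrast can be caricatured in a few lines of code. The rules, situations, and feedback values below are invented; the sketch only shows the structural difference: a top-down agent consults a fixed rule table and breaks on anything unlisted, while a bottom-up agent adjusts learned scores from trial-and-error feedback.

```python
# Caricature of the two approaches described by the Cal Poly ethicists.
# Rules, situations, and feedback values are invented for illustration.

# Top-down: every rule fixed in advance, obeyed without flexibility.
RULES = {"civilian_in_sight": "hold fire", "confirmed_hostile": "fire"}

def top_down(situation):
    if situation not in RULES:
        # The "rigidity" problem: an unforeseen situation has no rule.
        raise KeyError(f"no rule for {situation!r}")
    return RULES[situation]

# Bottom-up: start with crude scores and adjust them from feedback,
# the way a child learns through trial and error.
scores = {"fire": 0.0, "hold fire": 0.0}

def bottom_up(feedback):
    """Update action scores from (action, reward) pairs; pick the best."""
    for action_name, reward in feedback:
        scores[action_name] += reward
    return max(scores, key=scores.get)
```

The top-down agent is predictable but brittle; the bottom-up agent adapts, but nothing guarantees that the scores it converges on will resemble human morality — which is the ethicists' third objection.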

Human beings, the ethicists pointed out, employ a “hybrid” of top-down and bottom-up approaches in making moral decisions. People live in societies that have laws and other strictures to guide and control behavior; many people also shape their decisions and actions to fit religious and cultural precepts; and personal conscience, whether innate or not, imposes its own rules. Experience plays a role too. People learn to be moral creatures as they grow up and struggle with ethical decisions of different stripes in different situations. We’re far from perfect, but most of us have a discriminating moral sense that can be applied flexibly to dilemmas we’ve never encountered before. The only way for robots to become truly moral beings would be to follow our example and take a hybrid approach, both obeying rules and learning from experience. But creating a machine with that capacity is far beyond our technological grasp. “Eventually,” the ethicists concluded, “we may be able to build morally intelligent robots that maintain the dynamic and flexible morality of bottom-up systems capable of accommodating diverse inputs, while subjecting the evaluation of choices and actions to top-down principles.” Before that happens, though, we’ll need to figure out how to program computers to display “supra-rational faculties”—to have emotions, social skills, consciousness, and a sense of “being embodied in the world.”6 We’ll need to become gods, in other words.

Armies are unlikely to wait that long. In an article in Parameters, the journal of the U.S. Army War College, Thomas Adams, a military strategist and retired lieutenant colonel, argues that “the logic leading to fully autonomous systems seems inescapable.” Thanks to the speed, size, and sensitivity of robotic weaponry, warfare is “leaving the realm of human senses” and “crossing outside the limits of human reaction times.” It will soon be “too complex for real human comprehension.” As people become the weakest link in the military system, he says, echoing the technology-centric arguments of civilian software designers, maintaining “meaningful human control” over battlefield decisions will become next to impossible. “One answer, of course, is to simply accept a slower information-processing rate as the price of keeping humans in the military decision business. The problem is that some adversary will inevitably decide that the way to defeat the human-centric systems is to attack it with systems that are not so limited.” In the end, Adams believes, we “may come to regard tactical warfare as properly the business of machines and not appropriate for people at all.”7

What will make it especially difficult to prevent the deployment of LARs is not just their tactical effectiveness. It’s also that their deployment would have certain ethical advantages independent of the machines’ own moral makeup. Unlike human fighters, robots have no baser instincts to tug at them in the heat and chaos of battle. They don’t experience stress or depression or surges of adrenaline. “Typically,” Christof Heyns wrote, “they would not act out of revenge, panic, anger, spite, prejudice or fear. Moreover, unless specifically programmed to do so, robots would not cause intentional suffering on civilian populations, for example through torture. Robots also do not rape.”8

Robots don’t lie or otherwise try to hide their actions, either. They can be programmed to leave digital trails, which would tend to make an army more accountable for its actions. Most important of all, by using LARs to wage war, a country can avoid death or injury to its own soldiers. Killer robots save lives as well as take them. As soon as it becomes clear to people that automated soldiers and weaponry will lower the likelihood of their sons and daughters being killed or maimed in battle, the pressure on governments to automate war making may become irresistible. That robots lack “human judgement, common sense, appreciation of the larger picture, understanding of the intentions behind people’s actions, and understanding of values,” in Heyns’s words, may not matter in the end. In fact, the moral stupidity of robots has its advantages. If the machines displayed human qualities of thought and feeling, we’d be less sanguine about sending them to their destruction in war.

The slope gets only more slippery. The military and political advantages of robot soldiers bring moral quandaries of their own. The deployment of LARs won’t just change the way battles and skirmishes are fought, Heyns pointed out. It will change the calculations that politicians and generals make about whether to go to war in the first place. The public’s distaste for casualties has always been a deterrent to fighting and a spur to negotiation. Because LARs will reduce the “human costs of armed conflict,” the public may “become increasingly disengaged” from military debates and “leave the decision to use force as a largely financial or diplomatic question for the State, leading to the ‘normalization’ of armed conflict. LARs may thus lower the threshold for States for going to war or otherwise using lethal force, resulting in armed conflict no longer being a measure of last resort.”9

The introduction of a new class of armaments always alters the nature of warfare, and weapons that can be launched or detonated from afar—catapults, mines, mortars, missiles—tend to have the greatest effects, both intended and unintended. The consequences of autonomous killing machines would likely go beyond anything that’s come before. The first shot freely taken by a robot will be a shot heard round the world. It will change war, and maybe society, forever.

image
* * *

THE SOCIAL and ethical challenges posed by killer robots and self-driving cars point to something important and unsettling about where automation is headed. The substitution myth has traditionally been defined as the erroneous assumption that a job can be divided into separate tasks and those tasks can be automated piecemeal without changing the nature of the job as a whole. That definition may need to be broadened. As the scope of automation expands, we’re learning that it’s also a mistake to assume that society can be divided into discrete spheres of activity—occupations or pastimes, say, or domains of governmental purview—and those spheres can be automated individually without changing the nature of society as a whole. Everything is connected—change the weapon, and you change the war—and the connections tighten when they’re made explicit in computer networks. At some point, automation reaches a critical mass. It begins to shape society’s norms, assumptions, and ethics. People see themselves and their relations to others in a different light, and they adjust their sense of personal agency and responsibility to account for technology’s expanding role. They behave differently too. They expect the aid of computers, and on those rare occasions when it’s not forthcoming, they feel bewildered. Software takes on what the MIT computer scientist Joseph Weizenbaum termed a “compelling urgency.” It becomes “the very stuff out of which man builds his world.”10

In the 1990s, just as the dot-com bubble was beginning to inflate, there was much excited talk about “ubiquitous computing.” Soon, pundits assured us, microchips would be everywhere—embedded in factory machinery and warehouse shelving, affixed to the walls of offices and shops and homes, buried in the ground and floating in the air, installed in consumer goods and woven into clothing, even swimming around in our bodies. Equipped with sensors and transceivers, the tiny computers would measure every variable imaginable, from metal fatigue to soil temperature to blood sugar, and they’d send their readings, via the internet, to data-processing centers, where bigger computers would crunch the numbers and output instructions for keeping everything in spec and in sync. Computing would be pervasive; automation, ambient. We’d live in a geek’s paradise, the world a programmable machine.

One of the main sources of the hype was Xerox PARC, the fabled Silicon Valley research lab where Steve Jobs found the inspiration for the Macintosh. PARC’s engineers and information scientists published a series of papers portraying a future in which computers would be so deeply woven into “the fabric of everyday life” that they’d be “indistinguishable from it.”11 We would no longer even notice all the computations going on around us. We’d be so saturated with data, so catered to by software, that, instead of experiencing the anxiety of information overload, we’d feel “encalmed.”12 It sounded idyllic. But the PARC researchers weren’t Pollyannas. They also expressed misgivings about the world they foresaw. They worried that a ubiquitous computing system would be an ideal place for Big Brother to hide. “If the computational system is invisible as well as extensive,” the lab’s chief technologist, Mark Weiser, wrote in a 1999 article in IBM Systems Journal, “it becomes hard to know what is controlling what, what is connected to what, where information is flowing, [and] how it is being used.”13 We’d have to place a whole lot of trust in the people and companies running the system.

The excitement about ubiquitous computing proved premature, as did the anxiety. The technology of the 1990s was not up to making the world machine-readable, and after the dot-com crash, investors were in no mood to bankroll the installation of expensive microchips and sensors everywhere. But much has changed in the succeeding fifteen years. The economic equations are different now. The price of computing gear has fallen sharply, as has the cost of high-speed data transmission. Companies like Amazon, Google, and Microsoft have turned data processing into a utility. They’ve built a cloud-computing grid that allows vast amounts of information to be collected and processed at efficient centralized plants and then fed into applications running on smartphones and tablets or into the control circuits of machines.14 Manufacturers are spending billions of dollars to outfit factories with network-connected sensors, and technology giants like GE, IBM, and Cisco, hoping to spearhead the creation of an “internet of things,” are rushing to develop standards for sharing the resulting data. Computers are pretty much omnipresent now, and even the faintest of the world’s twitches and tremblings are being recorded as streams of binary digits. We may not be encalmed, but we are data saturated. The PARC researchers are starting to look like prophets.

There’s a big difference between a set of tools and an infrastructure. The Industrial Revolution gained its full force only after its operational assumptions were built into expansive systems and networks. The construction of the railroads in the middle of the nineteenth century enlarged the markets companies could serve, providing the impetus for mechanized mass production and ever larger economies of scale. The creation of the electric grid a few decades later opened the way for factory assembly lines and, by making all sorts of electrical appliances feasible and affordable, spurred consumerism and pushed industrialization into the home. These new networks of transport and power, together with the telegraph, telephone, and broadcasting systems that arose alongside them, gave society a different character. They altered the way people thought about work, entertainment, travel, education, even the organization of communities and families. They transformed the pace and texture of life in ways that went well beyond what steam-powered factory machines had done.

Thomas Hughes, in reviewing the consequences of the arrival of the electric grid in his book Networks of Power, described how first the engineering culture, then the business culture, and finally the general culture shaped themselves to the new system. “Men and institutions developed characteristics that suited them to the characteristics of the technology,” he wrote. “And the systematic interaction of men, ideas, and institutions, both technical and nontechnical, led to the development of a supersystem—a sociotechnical one—with mass movement and direction.” It was at this point that technological momentum took hold, both for the power industry and for the modes of production and living it supported. “The universal system gathered a conservative momentum. Its growth generally was steady, and change became a diversification of function.”15 Progress had found its groove.

We’ve reached a similar juncture in the history of automation. Society is adapting to the universal computing infrastructure—more quickly than it adapted to the electric grid—and a new status quo is taking shape. The assumptions underlying industrial operations and commercial relations have already changed. “Business processes that once took place among human beings are now being executed electronically,” explains W. Brian Arthur, an economist and technology theorist at the Santa Fe Institute. “They are taking place in an unseen domain that is strictly digital.”16 As an example, he points to the process of moving a shipment of freight through Europe. A few years ago, this would have required a legion of clipboard-wielding agents. They’d log arrivals and departures, check manifests, perform inspections, sign and stamp authorizations, fill out and file paperwork, and send letters or make phone calls to a variety of other functionaries involved in coordinating or regulating international freight. Changing the shipment’s routing would have involved laborious communications among representatives of various concerned parties—shippers, receivers, carriers, government agencies—and more piles of paperwork. Now, pieces of cargo carry radio-frequency identification tags. When a shipment passes through a port or other way station, scanners read the tags and pass the information along to computers. The computers relay the information to other computers, which in concert perform the necessary checks, provide the required authorizations, revise schedules as needed, and make sure all parties have current data on the shipment’s status. If a new routing is required, it’s generated automatically and the tags and related data repositories are updated.

Such automated and far-flung exchanges of information have become routine throughout the economy. Commerce is increasingly managed through, as Arthur puts it, “a huge conversation conducted entirely among machines.”17 To be in business is to have networked computers capable of taking part in that conversation. “You know you have built an excellent digital nervous system,” Bill Gates tells executives, “when information flows through your organization as quickly and naturally as thought in a human being.”18 Any sizable company, if it wants to remain viable, has little choice but to automate and then automate some more. It has to redesign its work flows and its products to allow for ever greater computer monitoring and control, and it has to restrict the involvement of people in its supply and production processes. People, after all, can’t keep up with computer chatter; they just slow down the conversation.

The science-fiction writer Arthur C. Clarke once asked, “Can the synthesis of man and machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded?”19 In the business world at least, no stability in the division of work between human and computer seems in the offing. The prevailing methods of computerized communication and coordination pretty much ensure that the role of people will go on shrinking. We’ve designed a system that discards us. If technological unemployment worsens in the years ahead, it will be more a result of our new, subterranean infrastructure of automation than of any particular installation of robots in factories or decision-support applications in offices. The robots and applications are the visible flora of automation’s deep, extensive, and implacably invasive root system.

That root system is also feeding automation’s spread into the broader culture. From the provision of government services to the tending of friendships and familial ties, society is reshaping itself to fit the contours of the new computing infrastructure. The infrastructure orchestrates the instantaneous data exchanges that make fleets of self-driving cars and armies of killer robots possible. It provides the raw material for the predictive algorithms that inform the decisions of individuals and groups. It underpins the automation of classrooms, libraries, hospitals, shops, churches, and homes—places traditionally associated with the human touch. It allows the NSA and other spy agencies, as well as crime syndicates and nosy corporations, to conduct surveillance and espionage on an unprecedented scale. It’s what has shunted so much of our public discourse and private conversation onto tiny screens. And it’s what gives our various computing devices the ability to guide us through the day, offering a steady stream of personalized alerts, instructions, and advice.

Once again, men and institutions are developing characteristics that suit them to the characteristics of the prevailing technology. Industrialization didn’t turn us into machines, and automation isn’t going to turn us into automatons. We’re not that simple. But automation’s spread is making our lives more programmatic. We have fewer opportunities to demonstrate our own resourcefulness and ingenuity, to display the self-reliance that was once considered the mainstay of character. Unless we start having second thoughts about where we’re heading, that trend will only accelerate.

* * *

IT WAS a curious speech. The event was the 2013 TED conference, held in late February at the Long Beach Performing Arts Center near Los Angeles. The scruffy guy on stage, fidgeting uncomfortably and talking in a halting voice, was Sergey Brin, reputedly the more outgoing of Google’s two founders. He was there to deliver a marketing pitch for Glass, the company’s “head-mounted computer.” After airing a brief promotional video, he launched into a scornful critique of the smartphone, a device that Google, with its Android system, had helped push into the mainstream. Pulling his own phone from his pocket, Brin looked at it with disdain. Using a smartphone is “kind of emasculating,” he said. “You know, you’re standing around there, and you’re just like rubbing this featureless piece of glass.” In addition to being “socially isolating,” staring down at a screen weakens a person’s sensory engagement with the physical world, he suggested. “Is this what you were meant to do with your body?”20

Having dispatched the smartphone, Brin went on to extol the benefits of Glass. The new device would provide a far superior “form factor” for personal computing, he said. By freeing people’s hands and allowing them to keep their head up and eyes forward, it would reconnect them with their surroundings. They’d rejoin the world. It had other advantages too. By putting a computer screen permanently within view, the high-tech eyeglasses would allow Google, through its Google Now service and other tracking and personalization routines, to deliver pertinent information to people whenever the device sensed they required advice or assistance. The company would fulfill the greatest of its ambitions: to automate the flow of information into the mind. Forget the autocomplete functions of Google Suggest. With Glass on your brow, Brin said, echoing his colleague Ray Kurzweil, you would no longer have to search the web at all. You wouldn’t have to formulate queries or sort through results or follow trails of links. “You’d just have information come to you as you needed it.”21 To the computer’s omnipresence would be added omniscience.

Brin’s awkward presentation earned him the ridicule of technology bloggers. Still, he had a point. Smartphones enchant, but they also enervate. The human brain is incapable of concentrating on two things at once. Every glance or swipe at a touchscreen draws us away from our immediate surroundings. With a smartphone in hand, we become a little ghostly, wavering between worlds. People have always been distractible, of course. Minds wander. Attention drifts. But we’ve never carried on our person a tool that so insistently captivates our senses and divides our attention. By connecting us to a symbolic elsewhere, the smartphone, as Brin implied, exiles us from the here and now. We lose the power of presence.

Brin’s assurance that Glass would solve the problem was less convincing. No doubt there are times when having your hands free while consulting a computer or using a camera would be an advantage. But peering into a screen that floats in front of you requires no less an investment of attention than glancing at one held in your lap. It may require more. Research on pilots and drivers who use head-up displays reveals that when people look at text or graphics projected as an overlay on the environment, they become susceptible to “attentional tunneling.” Their focus narrows, their eyes fix on the display, and they become oblivious to everything else going on in their field of view.22 In one experiment, performed in a flight simulator, pilots using a head-up display during a landing took longer to see a large plane obstructing the runway than did pilots who had to glance down to check their instrument readings. Two of the pilots using the head-up display never even saw the plane sitting directly in front of them.23 “Perception requires both your eyes and your mind,” psychology professors Daniel Simons and Christopher Chabris explained in a 2013 article on the dangers of Glass, “and if your mind is engaged, you can fail to see something that would otherwise be utterly obvious.”24

Glass’s display is also, by design, hard to escape. Hovering above your eye, it’s always at the ready, requiring but a glance to call into view. At least a phone can be stuffed into a pocket or handbag, or slipped into a car’s cup holder. The fact that you interact with Glass through spoken words, head movements, hand gestures, and finger taps further tightens its claim on the mind and senses. As for the audio signals that announce incoming alerts and messages—sent, as Brin boasted in his TED talk, “right through the bones in your cranium”—they hardly seem less intrusive than the beeps and buzzes of a phone. However emasculating a smartphone may be, metaphorically speaking, a computer attached to your forehead promises to be worse.

Wearable computers, whether sported on the head like Google’s Glass and Facebook’s Oculus Rift or on the wrist like the Pebble smartwatch, are new, and their appeal remains unproven. They’ll have to overcome some big obstacles if they’re to gain wide popularity. Their features are at this point sparse, they look dorky—London’s Guardian newspaper refers to Glass as “those dreadful specs”25—and their tiny built-in cameras make a lot of people nervous. But, like other personal computers before them, they’ll improve quickly, and they’ll almost certainly morph into less obtrusive, more useful forms. The idea of wearing a computer may seem strange today, but in ten years it could be the norm. We may even find ourselves swallowing pill-sized nanocomputers to monitor our biochemistry and organ function.

Brin is mistaken, though, in suggesting that Glass and other such devices represent a break from computing’s past. They give the established technological momentum even more force. As the smartphone and then the tablet made general-purpose, networked computers more portable and personable, they also made it possible for software companies to program many more aspects of our lives. Together with cheap, friendly apps, they allowed the cloud-computing infrastructure to be used to automate even the most mundane of chores. Computerized glasses and wristwatches further extend automation’s reach. They make it easier to receive turn-by-turn directions when walking or riding a bike, for instance, or to get algorithmically generated advice on where to grab your next meal or what clothes to put on for a night out. They also serve as sensors for the body, allowing information about your location, thoughts, and health to be transmitted back to the cloud. That in turn provides software writers and entrepreneurs with yet more opportunities to automate the quotidian.

* * *

WE’VE PUT into motion a cycle that, depending on your point of view, is either virtuous or vicious. As we grow more reliant on applications and algorithms, we become less capable of acting without their aid—we experience skill tunneling as well as attentional tunneling. That makes the software more indispensable still. Automation breeds automation. With everyone expecting to manage their lives through screens, society naturally adapts its routines and procedures to fit the routines and procedures of the computer. What can’t be accomplished with software—what isn’t amenable to computation and hence resists automation—begins to seem dispensable.

The PARC researchers argued, back in the early 1990s, that we’d know computing had achieved ubiquity when we were no longer aware of its presence. Computers would be so thoroughly enmeshed in our lives that they’d be invisible to us. We’d “use them unconsciously to accomplish everyday tasks.”26 That seemed a pipe dream in the days when bulky PCs drew attention to themselves by freezing, crashing, or otherwise misbehaving at inopportune moments. It doesn’t seem like such a pipe dream anymore. Many computer companies and software houses now say they’re working to make their products invisible. “I am super excited about technologies that disappear completely,” declares Jack Dorsey, a prominent Silicon Valley entrepreneur. “We’re doing this with Twitter, and we’re doing this with [the online credit-card processor] Square.”27 When Mark Zuckerberg calls Facebook “a utility,” as he frequently does, he’s signaling that he wants the social network to merge into our lives the way the telephone system and electric grid did.28 Apple has promoted the iPad as a device that “gets out of the way.” Picking up on the theme, Google markets Glass as a means of “getting technology out of the way.” In a 2013 speech, the company’s then head of social networking, Vic Gundotra, even put a flower-power spin on the slogan: “Technology should get out of the way so you can live, learn, and love.”29

The technologists may be guilty of bombast, but they’re not guilty of cynicism. They’re genuine in their belief that the more computerized our lives become, the happier we’ll be. That, after all, has been their own experience. But their aspiration is self-serving nonetheless. For a popular technology to become invisible, it first has to become so essential to people’s existence that they can no longer imagine being without it. It’s only when a technology surrounds us that it disappears from view. Justin Rattner, Intel’s chief technology officer, has said that he expects his company’s products to become so much a part of people’s “context” that Intel will be able to provide them with “pervasive assistance.”30 Instilling such dependency in customers would also, it seems safe to say, bring in a lot more money for Intel and other computer companies. For a business, there’s nothing like turning a customer into a supplicant.

The prospect of having a complicated technology fade into the background, so it can be employed with little effort or thought, can be as appealing to those who use it as to those who sell it. “When technology gets out of the way, we are liberated from it,” the New York Times columnist Nick Bilton has written.31 But it’s not that simple. You don’t just flip a switch to make a technology invisible. It disappears only after a slow process of cultural and personal acclimation. As we habituate ourselves to it, the technology comes to exert more power over us, not less. We may be oblivious to the constraints it imposes on our lives, but the constraints remain. As the French sociologist Bruno Latour points out, the invisibility of a familiar technology is “a kind of optical illusion.” It obscures the way we’ve refashioned ourselves to accommodate the technology. The tool that we originally used to fulfill some particular intention of our own begins to impose on us its intentions, or the intentions of its maker. “If we fail to recognize,” Latour writes, “how much the use of a technique, however simple, has displaced, translated, modified, or inflected the initial intention, it is simply because we have changed the end in changing the means, and because, through a slipping of the will, we have begun to wish something quite else from what we at first desired.”32

The difficult ethical questions raised by the prospect of programming robotic cars and soldiers—who controls the software? who chooses what’s to be optimized? whose intentions and interests are reflected in the code?—are equally pertinent to the development of the applications used to automate our lives. As the programs gain more sway over us—shaping the way we work, the information we see, the routes we travel, our interactions with others—they become a form of remote control. Unlike robots or drones, we have the freedom to reject the software’s instructions and suggestions. It’s difficult, though, to escape their influence. When we launch an app, we ask to be guided—we place ourselves in the machine’s care.

Look more closely at Google Maps. When you’re traveling through a city and you consult the app, it gives you more than navigational tips; it gives you a way to think about cities. Embedded in the software is a philosophy of place, which reflects, among other things, Google’s commercial interests, the backgrounds and biases of its programmers, and the strengths and limitations of software in representing space. In 2013, the company rolled out a new version of Google Maps. Instead of providing you with the same representation of a city that everyone else sees, it generates a map that’s tailored to what Google perceives as your needs and desires, based on information the company has collected about you. The app will highlight nearby restaurants and other points of interest that friends in your social network have recommended. It will give you directions that reflect your past navigational choices. The views you see, the company says, are “unique to you, always adapting to the task you want to perform right this minute.”33

That sounds appealing, but it’s limiting. Google filters out serendipity in favor of insularity. It douses the infectious messiness of a city with an algorithmic antiseptic. What is arguably the most important way of looking at a city, as a public space shared not just with your pals but with an enormously varied group of strangers, gets lost. “Google’s urbanism,” comments the technology critic Evgeny Morozov, “is that of someone who is trying to get to a shopping mall in their self-driving car. It’s profoundly utilitarian, even selfish in character, with little to no concern for how public space is experienced. In Google’s world, public space is just something that stands between your house and the well-reviewed restaurant that you are dying to get to.”34 Expedience trumps all.

Social networks push us to present ourselves in ways that conform to the interests and prejudices of the companies that run them. Facebook, through its Timeline and other documentary features, encourages its members to think of their public image as indistinguishable from their identity. It wants to lock them into a single, uniform “self” that persists throughout their lives, unfolding in a coherent narrative beginning in childhood and ending, one presumes, with death. This fits with its founder’s narrow conception of the self and its possibilities. “You have one identity,” Mark Zuckerberg has said. “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly.” He even argues that “having two identities for yourself is an example of a lack of integrity.”35 That view, not surprisingly, dovetails with Facebook’s desire to package its members as neat and coherent sets of data for advertisers. It has the added benefit, for the company, of making concerns about personal privacy seem less valid. If having more than one identity indicates a lack of integrity, then a yearning to keep certain thoughts or activities out of public view suggests a weakness of character. But the conception of selfhood that Facebook imposes through its software can be stifling. The self is rarely fixed. It has a protean quality. It emerges through personal exploration, and it shifts with circumstances. That’s especially true in youth, when a person’s self-conception is fluid, subject to testing, experimentation, and revision. To be locked into an identity, particularly early in one’s life, may foreclose opportunities for personal growth and fulfillment.

Every piece of software contains such hidden assumptions. Search engines, in automating intellectual inquiry, give precedence to popularity and recency over diversity of opinion, rigor of argument, or quality of expression. Like all analytical programs, they have a bias toward criteria that lend themselves to statistical analysis, downplaying those that entail the exercise of taste or other subjective judgments. Automated essay-grading algorithms encourage in students a rote mastery of the mechanics of writing. The programs are deaf to tone, uninterested in knowledge’s nuances, and actively resistant to creative expression. The deliberate breaking of a grammatical rule may delight a reader, but it’s anathema to a computer. Recommendation engines, whether suggesting a movie or a potential love interest, cater to our established desires rather than challenging us with the new and unexpected. They assume we prefer custom to adventure, predictability to whimsy. The technologies of home automation, which allow things like lighting, heating, cooking, and entertainment to be meticulously programmed, impose a Taylorist mentality on domestic life. They subtly encourage people to adapt themselves to established routines and schedules, making homes more like workplaces.

The biases in software can distort societal decisions as well as personal ones. In promoting its self-driving cars, Google has suggested that the vehicles will dramatically reduce the number of crashes, if not eliminate them entirely. “Do you know that driving accidents are the number one cause of death for young people?” Sebastian Thrun said in a 2011 speech. “And do you realize that almost all of those are due to human error and not machine error, and can therefore be prevented by machines?”36 Thrun’s argument is compelling. In regulating hazardous activities like driving, society has long given safety a high priority, and everyone appreciates the role technological innovation can play in reducing the risk of mishaps and injuries. Even here, though, things aren’t as black-and-white as Thrun implies. The ability of autonomous cars to prevent accidents and deaths remains theoretical at this point. As we’ve seen, the relationship between machinery and human error is complicated; it rarely plays out as expected. Society’s goals, moreover, are never one-dimensional. Even the desire for safety requires interrogation. We’ve always recognized that laws and behavioral norms entail trade-offs between safety and liberty, between protecting ourselves and putting ourselves at risk. We allow and sometimes encourage people to engage in dangerous hobbies, sports, and other pursuits. A full life, we know, is not a perfectly insulated life. Even when it comes to setting speed limits on highways, we balance the goal of safety with other aims.

Difficult and often politically contentious, such trade-offs shape the kind of society we live in. The question is, do we want to cede the choices to software companies? When we look to automation as a panacea for human failings, we foreclose other options. A rush to embrace autonomous cars might do more than curtail personal freedom and responsibility; it might preclude us from exploring alternative ways to reduce the probability of traffic accidents, such as strengthening driver education or promoting mass transit.

It’s worth noting that Silicon Valley’s concern with highway safety, though no doubt sincere, has been selective. The distractions caused by cell phones and smartphones have in recent years become a major factor in car crashes. An analysis by the National Safety Council implicated phone use in one-fourth of all accidents on U.S. roads in 2012.37 Yet Google and other top tech firms have made little or no effort to develop software to prevent people from calling, texting, or using apps while driving—surely a modest undertaking compared with building a car that can drive itself. Google has even sent its lobbyists into state capitals to block bills that would ban drivers from wearing Glass and other distracting eyewear. We should welcome the important contributions computer companies can make to society’s well-being, but we shouldn’t confuse those companies’ interests with our own.

* * *

IF WE don’t understand the commercial, political, intellectual, and ethical motivations of the people writing our software, or the limitations inherent in automated data processing, we open ourselves to manipulation. We risk, as Latour suggests, replacing our own intentions with those of others, without even realizing that the swap has occurred. The more we habituate ourselves to the technology, the greater the risk grows.

It’s one thing for indoor plumbing to become invisible, to fade from our view as we adapt ourselves, happily, to its presence. Even if we’re incapable of fixing a leaky faucet or troubleshooting a balky toilet, we tend to have a pretty good sense of what the pipes in our homes do—and why. Most technologies that have become invisible to us through their ubiquity are like that. Their workings, and the assumptions and interests underlying their workings, are self-evident, or at least discernible. The technologies may have unintended effects—indoor plumbing changed the way people think about hygiene and privacy38—but they rarely have hidden agendas.

It’s a very different thing for information technologies to become invisible. Even when we’re conscious of their presence in our lives, computer systems are opaque to us. Software codes are hidden from our eyes, legally protected as trade secrets in many cases. Even if we could see them, few of us would be able to make sense of them. They’re written in languages we don’t understand. The data fed into algorithms is also concealed from us, often stored in distant, tightly guarded data centers. We have little knowledge of how the data is collected, what it’s used for, or who has access to it. Now that software and data are stored in the cloud, rather than on personal hard drives, we can’t even be sure when the workings of systems have changed. Revisions to popular programs are made all the time without our awareness. The application we used yesterday is probably not the application we use today.

The modern world has always been complicated. Fragmented into specialized domains of skill and knowledge, coiled with economic and other systems, it rebuffs any attempt to comprehend it in its entirety. But now, to a degree far beyond anything we’ve experienced before, the complexity itself is hidden from us. It’s veiled behind the artfully contrived simplicity of the screen, the user-friendly, frictionless interface. We’re surrounded by what the political scientist Langdon Winner has termed “concealed electronic complexity.” The “relationships and connections” that were “once part of mundane experience,” manifest in direct interactions among people and between people and things, have become “enshrouded in abstraction.”39 When an inscrutable technology becomes an invisible technology, we would be wise to be concerned. At that point, the technology’s assumptions and intentions have infiltrated our own desires and actions. We no longer know whether the software is aiding us or controlling us. We’re behind the wheel, but we can’t be sure who’s driving.