8 This Car Won’t Drive Itself

The unreasonably effective data-driven approach works well enough for electronic search, for simple translation, and for simple navigation. Given enough training data, algorithms indeed will do a good job at a variety of mundane tasks, and human ingenuity usually fills in the blanks. With search, most of us by now have learned how to use increasingly complex or specific search terms (or at least synonyms) to find the specific web pages we’re looking for when using a search box. Machine translation between languages is better than ever. It’s still not as good as human translation, but human brains are magnificent at figuring out the meaning of garbled sentences. A stilted, awkward translation of a web page is usually all the casual web surfer needs. GPS systems that provide directions from point A to point B are terribly handy. They don’t always give the best directions to the airport, if you ask any professional taxi or ride-share driver, but they will get you there and they will mostly show the traffic on the route for sufficiently busy areas.

However, the unreasonably effective data-driven approach has enough problems that I’m skeptical about using AI to fully replace humans for actual, life-threatening situations, like driving. The best case for considering how artificial intelligence works both really well and not at all is the case of the self-driving car.

The first time I rode in a self-driving car, in 2007, I thought I was going to die. Or vomit. Or both. So, when I heard in 2016 that self-driving cars were coming to market, that Tesla had created software called Autopilot and that Uber was testing self-driving cars in Pittsburgh, I wondered: What had changed? Did the reckless engineers I met in 2007 actually manage to embed an ethical decision-making entity inside a two-ton killing machine?

It turned out that perhaps not as much had changed as I might have thought. The story of the race to build a self-driving car is a story about the fundamental limits of computing. Looking at what worked—and what didn’t—during the first decade of autonomous vehicles is a cautionary tale about how technochauvinism can lead to magical thinking about technology and can create a public health hazard.

My first ride happened on an autonomous vehicle test track: the weekend-empty parking lot of the Boeing factory in South Philadelphia. The Ben Franklin Racing Team, a group of engineering students at the University of Pennsylvania, was building an autonomous vehicle for a competition. I was writing a story about them for the University of Pennsylvania alumni magazine. I met the members of the Ben Franklin Racing Team on campus at dawn on a Sunday morning, and I followed them down the highway for self-driving-car racing practice.

The team had to practice at times when there was little traffic and few people around. Their car, a tricked-out Toyota Prius, wasn’t exactly street legal. There are rules about what has to be in a car: a steering wheel, for example. They were OK to practice in the parking lot, or on Penn property, but the drive down I-95 from a West Philadelphia garage to the practice space in South Philly was a risk. They were less likely to get pulled over early on a Sunday morning because fewer police cars were patrolling the highway. The university lawyers were working on the state level to change legislation so that the car could legally drive itself. Until then, the racing team practiced at odd hours and hoped for the best.

I pulled up behind the Prius, which they had christened Little Ben, in the parking lot. The car was packed with engineers: mechanical-engineering student Tully Foote at the wheel, with electrical- and systems-engineering PhD candidate Paul Vernaza in the backseat next to Alex Stewart, a doctoral candidate in electrical engineering. Heteen Choxi, a Lockheed Martin employee and recent Drexel computer science grad, rode shotgun, wearing a bright yellow and black team jacket. As the car rolled to a gentle stop, Foote got out and popped the hatchback to reveal a mess of wires snaking over the back seat and onto the roof. The car looked like something out of a postapocalyptic movie, with sensors and miscellaneous parts bolted to the roof. The students had ripped a hole in the plastic console covering the dashboard. A tangle of wires spilled out, connected to a large, serious-looking laptop. Half of the trunk’s floor was covered by Plexiglas, and more wires and boxes were visible in the wheel well. Foote pulled up a command prompt on an LCD screen installed in the trunk, and soon a satellite image of the parking lot popped into view. The three passengers remained belted in the car, each hunched over a laptop. Driving practice began.

In the competition, the 2007 Grand Challenge, Little Ben would need to drive itself through an empty “city” made out of a decommissioned military base. No remote controls, no preprogrammed paths through the city: just eighty-nine autonomous vehicles trying to drive down streets, around corners, through intersections, and around each other. The sponsor, the Defense Advanced Research Projects Agency (DARPA), promised a $2 million prize to the fastest finisher, plus $1 million and $500,000 prizes to the runners-up.

Robot-car technology was already assisting everyday drivers in 2007. By then, Lexus had released a car that could parallel-park itself under specific conditions. “Today, all of the high-end cars have features like adaptive cruise control or parking assistance. It’s getting more and more automated,” explained Dan Lee, associate professor of engineering and the team’s adviser. “Now, to do it fully, the car has to have a complete awareness of the surrounding world. These are the hard problems of robotics: computer vision, having computers ‘hear’ sounds, having computers understand what’s happening in the world around them. This is a good environment to test these things.”

For Little Ben to “see” an obstacle and drive around it, the automated driving and GPS navigation had to work properly, and the laser sensors on the roof rack had to observe the object. Then, Little Ben had to identify the object as an obstacle and develop a path around it. One of the goals for that day’s practice was to work on the subroutines that would eventually allow Little Ben to steer clear of other cars.

“The system is complicated enough that there are a lot of unforeseen consequences,” said Foote. “If one thing runs slow, something else crashes. In software development in general, the standard is that you spend three-fourths of your time debugging. In a project like this, it’s more like nine-tenths of the time debugging.”

The 2007 challenge was more complicated than its predecessor. In the 2005 challenge, the task was to create a robot that could navigate 175 miles through the desert without human intervention in less than ten hours. On October 9, 2005, the Stanford Racing Team and their car Stanley won this competition (and the $2 million prize) for sending Stanley across 132 miles of the Mojave Desert. Stanley averaged 19 mph on the course, and finished in six hours, fifty-four minutes. In a desert, “it didn’t really matter whether an obstacle was a rock or a bush because either way you’d drive around it,” said Sebastian Thrun, then a Stanford associate professor of computer science and electrical engineering.1 In the urban challenge, however, cars had to negotiate right-of-way and obey conventional rules of the road. “The challenge is to move from just sensing the environment to understanding the environment,” said Thrun. Stanford’s new Challenge car, Junior, a 2006 Volkswagen Passat, was considered a major rival to Little Ben. So was Boss, a 2007 Chevy Tahoe being developed by the Carnegie Mellon University (CMU) team. Carnegie Mellon fielded two cars in 2005, Sandstorm and H1ghlander, which placed second and third, respectively. The robotics rivalry between CMU and Stanford was akin to the basketball rivalry between University of North Carolina and Duke. Stanford poached Thrun, formerly CMU’s star robotics professor, in 2003.

Back in the Boeing lot, senior electrical-engineering major Alex Kushleyev pulled up in his own brand-new car, a Nissan Altima. He had gone out to buy a remote control of the type used for toy cars: this was the emergency stop button. Every robot seems to include a large, cartoonish, red button. Two additional buttons were duct-taped to the rear side panels of the car and connected to the server rack of Mac Minis that made up the car’s electronic “brain.” The team had spent about $100,000 on the project to that point, through Penn’s General Robotics, Automation, Sensing and Perception (GRASP) Laboratory. Lockheed Martin Advanced Technology Laboratories in Cherry Hill, New Jersey, and Maryland-based Thales Communications also sponsored the team.

“The Prius gives us more maneuverability, and since it is a hybrid car, it has a big on-board battery. We run a lot of computers, a lot of sensors, a lot of motors in addition to the car, so we need that extra power,” said Lee. An electric motor controlled Little Ben’s gas, brakes, and steering; all functions, from turn signals to wipers, could be controlled by buttons on a panel mounted above the gearshift, not unlike the customization used by disabled drivers who use their hands instead of their legs to drive. The car could be driven in the ordinary manner, or it could be driven using the hand controls. When the autopilot was engaged, they claimed there would be no need for a driver at all.

I watched the car travel short spurts through the parking lot. A safety driver sat in the passenger’s seat with one hand on the emergency stop button. It was unsettling, but thrilling, to see the car driving ahead, with its steering wheel moving in front of an empty driver’s seat.

As the battery made a low hum, Kushleyev took the wheel and drove across the parking lot at 15 mph. The day’s goal was to practice maneuvering Little Ben around parking lot obstacles. In the competition, Little Ben would have to navigate intersections and curbs and make decisions about how to react to stop signs, other cars, and stray dogs at a maximum speed of 30 mph.

Finally, it was my turn to take the wheel. I sat in the driver’s seat. It felt curiously empty. Kushleyev turned on the automated-driving mechanism, and the car advanced a few feet—then lurched wildly to the left, then to the right, and jumped off its trajectory. “Gain control!” Stewart shouted from the backseat. The car headed toward a streetlight. As we neared the cement wall at the base of the light, the car accelerated. We were on a collision course. I jammed my foot down on what should have been the brake, only to find that it had been modified in a way I didn’t understand. “Shouldn’t this thing slow down?” I called out in a panic. I closed my eyes, certain that I was about to crash, and prepared to scream.

I heard murmurs and furious typing from the backseat. Kushleyev hit override, and the brakes. The car jerked to a stop, mere inches from the cement wall. My stomach felt like it had been left four feet behind.

I turned around to glare at the guy with the laptop. “There must be a bug in the program,” he said with a shrug. “It happens.”

“Only a GPS reading away from death,” Stewart announced cheerfully. The engineers debated the swerving problem: the car was making a big wiggle where it was supposed to make a small, smooth turn. The laser sensors were scanning the area ahead of the car, but the software wasn’t registering the light pole as an obstacle. This seemed to be affecting the steering, causing the car to jerk instead of turning smoothly.

Foote and Stewart conferred. They were robot-car veterans, having worked on two Grand Challenge robot cars as undergrads at Caltech. Their last autonomous vehicle was Alice, a Ford E350 van developed for the 2005 Grand Challenge. In the desert race, Alice drove herself about seven miles before heading into—and over—a barrier separating the media tent from the race course. Judges disabled Alice before she made headlines.

Little Ben’s steering wheel moved on its own a few times; Stewart and Vernaza were controlling it from the backseat. Code problem solved, Kushleyev drove the car across the parking lot again and engaged the autopilot. The steering jerked, and the car headed toward an enormous snowplow parked at the edge of the lot as a grating sound screamed out of the engine.

“Bugger,” said Stewart.

“Maybe it’s Sheep?” said Vernaza, naming one of the programs controlling the car.

“This is high up on my list of things I don’t want to fix today,” said Stewart.

What I thought (but didn’t write) at the time was that the experience did not inspire confidence in the technology. Riding in their car felt dangerous, like being in a car driven by a drunk toddler. If these were the folks making self-driving-car technology, their recklessness with my life did not augur well for the future. I couldn’t see trusting my own child to a machine built by these kids. I didn’t like the idea of this car being on the road; it seemed like a public menace. I wrote the story and assumed that the tech would fizzle out or be absorbed into another project, fading into tech obscurity like RealPlayer video or Macromedia Director or Jaz drives. After I filed my story, I forgot about the Penn robot car.

Meanwhile, Little Ben still had a race to win. On the morning of the DARPA Grand Challenge, November 3, 2007, the vehicles lined up at the starting gate. Their goal was to traverse the streets of George Air Force Base, a decommissioned military base in Victorville, California. There were roads and signs and escort vehicles. It was a motley crew of jerry-rigged vehicles at the starting line. The task was to navigate sixty miles through the base, obeying street signs and avoiding other cars.

Pole position had been determined by qualifying runs the week before. Carnegie Mellon’s car, Boss, was the top seed, meaning it could go into the course first, followed at intervals by the other robot cars and by some chase cars driven by humans. At the starting line, the Boss team was ready to go—but Boss wasn’t. Its GPS wasn’t working. A flurry of activity ensued. Other cars entered the course as Carnegie Mellon team members swarmed the car. Eventually, the source of the problem was identified: radio-frequency interference from the jumbotron television monitor located next to the race start chute. The jumbotron was jamming the GPS signals. Someone turned off the television.

Boss hit the streets tenth, twenty minutes behind the Stanford car. It was not a high-speed endeavor: Boss averaged about 14 mph over the fifty-five-mile course. “Everything that I saw Boss do looked great,” said Chris Urmson, the team’s director of technology. “It was smooth. It was fast. It interacted with other traffic well. It did what it was supposed to do.”

Boss came in first. The Stanford team came in second, with a time about twenty minutes behind Boss. Little Ben finished the race, but not in the money. Teams from Cornell and MIT finished too, but not within the six-hour time limit of the race. It was clear that Pittsburgh and Palo Alto were the dominant powers in robot-car technology.

The difference between the Penn team’s approach and the Stanford/CMU approach was significant. Little Ben’s approach was knowledge-based. The team was trying to construct a machine that would decide what to do on the road based on a knowledge base and a set of programmed “experiences.” This knowledge-based approach was one of the two major strains of artificial intelligence thinking. The Ben Franklin Racing Team was going for the general AI solution. It didn’t work well enough.

Little Ben was trying to “see” obstacles the way a human might. The lidar, a laser radar mounted on the roof, would identify objects. Then, the software “brain” would identify the object based on criteria like shape, color, and size. It would go through a decision tree to decide what to do: if it is a living thing like a person or a dog, slow down; if it is a living thing like a bird, it will probably move out of the way, so no need to slow down. This required Little Ben to have a massive amount of information about objects in the real world. For example, consider a traffic cone. When upright, a traffic cone is notable for its triangular shape with a square base. Traffic cones are usually between twelve inches and 3.5 feet tall. We can write a rule that goes something like this:

identify object:
    IF object.color = orange AND object.shape = triangular_with_square_base
    THEN object = traffic_cone;
    IF object.identifier = traffic_cone
    THEN initiate_avoid_sequence

What if the traffic cone’s knocked over? I live in Manhattan; I see traffic cones knocked over all the time. I’ve seen a street blocked off by traffic cones, and I’ve seen people get out of their cars and move the traffic cones aside so they can drive down the street anyway. I’ve seen traffic cones mashed flat in the middle of the street. So, the rule about traffic cones has to be modified. Let’s try something else:

identify object:
    IF object.color = orange AND object.shape is like triangular_with_square_base.rotated_in_3D
    THEN object = traffic_cone;
    IF object.identifier = traffic_cone
    THEN initiate_avoid_sequence

Here, we run into a difference between human thought and computation. A human brain can rotate an object in space. When I say “traffic cone,” you can picture the cone in your head. If I say, “imagine the cone is knocked over on the ground,” you can probably imagine this too and can mentally rotate the object. Engineers are particularly good at imagining spatial manipulations in their heads. One popular math aptitude test for children involves showing them a 3-D shape on a 2-D plane, then presenting other pictures and asking them to choose which one represents the object rotated.

The computer has no imagination, however. To have a rotated image of the object, it needs a 3-D rendering of the object—a vector map, at the very least. The programmer needs to program in the 3-D image. A computer also isn’t good at guessing, the way a brain is. The object on the ground is either something in its list of known objects, or it isn’t.
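
To make the difference concrete, here is a minimal Python sketch of what “rotating the object” costs a program. The cone coordinates and the choice of axis are invented for illustration; they are not taken from any real perception system.

    # A program "imagines" a knocked-over cone only by explicitly
    # transforming a 3-D model of it: here, a few sample points on an
    # upright cone are rotated ninety degrees about the x-axis.
    import math

    def rotate_about_x(point, degrees):
        x, y, z = point
        a = math.radians(degrees)
        return (x,
                y * math.cos(a) - z * math.sin(a),
                y * math.sin(a) + z * math.cos(a))

    upright_cone = [(0.2, 0.0, 0.0), (-0.2, 0.0, 0.0), (0.0, 0.0, 0.9)]  # base, base, tip
    knocked_over = [rotate_about_x(p, 90) for p in upright_cone]
    print(knocked_over)  # the tip now lies along the ground instead of pointing at the sky

Without an explicit model like upright_cone to transform, the program has nothing to rotate; a human imagines the cone toppling over for free.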

Little Ben did two things when I rode in it: it drove in a circle, and it failed to avoid an obstacle. After I got over being terrified, I thought through the reasons that Little Ben didn’t avoid the obstacle. The obstacle was a pillar. Little Ben needed a rule like “if obstacle.exists_in_path and obstacle.type=stationary, obstacle.avoid.” However, this rule doesn’t work because not all stationary objects remain stationary. A person might appear stationary for a moment, then the person might move. Therefore, the rule might be “if obstacle.exists_in_path and obstacle.type = stationary, AND obstacle.is_not_person, avoid.” That doesn’t work either: now we need to define the difference between a person and a column, so we’re back to an object-classification problem. If the column can be recognized as a column, we could write a rule for columns and a rule for people. However, we don’t know it’s a column unless there’s vision or, at the very least, object recognition—which is why I almost died in a car that almost ran into a giant cement pillar.
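
Here is a minimal sketch, in Python, of where that chain of rules leads. The attribute names (in_path, stationary, is_person) are hypothetical stand-ins, not Little Ben’s actual code, and the hard part, filling in those attributes correctly, is exactly the object-classification problem described above.

    # A hypothetical hand-written rule cascade for a knowledge-based car.
    # Every exception demands another rule, and every rule presupposes
    # that the classification it depends on has already been solved.
    def decide(obstacle):
        if not obstacle["in_path"]:
            return "proceed"
        if not obstacle["stationary"]:
            return "slow_down"       # moving object: predict its path, then decide
        if obstacle["is_person"]:
            return "slow_down"       # a "stationary" person may start moving
        return "avoid"               # column, cone, wall, parked car, ...

    # A cement pillar, correctly classified, gets avoided...
    print(decide({"in_path": True, "stationary": True, "is_person": False}))
    # ...but only if the software recognized it as an obstacle in the first place.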

The core problem is sentience. Because there was no way to program theory of mind, the car would never be able to respond to obstacles the way that a human might. A computer only “knows” what it’s been told. Without sentience, the cognitive capacity to reason about the future, it can’t make the split-second decisions necessary to identify a streetlight as an obstacle and take appropriate evasive measures.

This problem of sentience was the central challenge of AI from its inception. It was the problem that Minsky eventually declared to be one of the hardest ever attempted. This is perhaps why, over at Stanford and Carnegie Mellon, they didn’t even take it on. Their cars took a radically different approach to solving the problem of getting a vehicle through an obstacle course. Their narrow AI approach was purely mathematical, and relied on the unreasonable effectiveness of data. It worked better than anyone expected. I like to think of it as the Karel the Robot plan.

In 1981, Stanford professor Richard Pattis introduced an educational programming language called Karel the Robot.2 Karel was named after Karel Čapek, the writer who invented the word “robot.” Pattis’s Karel the robot was not a real robot; he was an arrow drawn on a grid inside a square on a piece of paper. Students were supposed to pretend that the arrow was a robot in order to learn basic programming concepts. The box had one or more exits. Karel could move in the grid like a pawn in a chess game. Helping Karel escape from the box was the task. This introductory programming exercise—which you worked through with a pencil and paper—was the first computational assignment in programming classes at MIT, Harvard, Stanford, and all the other tech powerhouses for years. The professor gave us the box with Karel inside it. There were various obstacles. Our job as students was to write commands to get Karel from his location to the exit, avoiding the obstacles. It was mildly fun, or at least more fun than calculus, which was the other class I was taking when I was introduced to Karel in my freshman year. Here’s an example of a Karel exercise (figure 8.1). The instructions for this puzzle read: “Every morning Karel is awakened in bed when the newspaper—represented by a beeper—is thrown on the front porch of the house. Program Karel to retrieve the paper and bring it back to bed. The newspaper is always thrown to the same spot, and Karel’s world, including his bed, is as pictured.”2 Karel is represented by the arrow, and he is assumed to be in an imaginary bed in his initial position. To get to the beeper, he needs to turn ninety degrees north, travel north two streets, travel west two avenues, and so forth, until he reaches the beeper’s address on the grid.

Figure 8.1 A typical Karel the Robot problem.
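
For readers who never met Karel, here is a toy version in Python: an arrow on a grid that can only move forward and turn left. The route follows the newspaper exercise described above, though the starting position and grid coordinates are made-up stand-ins for the world in figure 8.1.

    # A toy Karel: a position and a heading on a grid, plus two commands.
    class Karel:
        HEADINGS = [(0, 1), (-1, 0), (0, -1), (1, 0)]  # north, west, south, east
        def __init__(self):
            self.x, self.y, self.facing = 0, 0, 3      # start in "bed," facing east
        def move(self):
            dx, dy = self.HEADINGS[self.facing]
            self.x, self.y = self.x + dx, self.y + dy
        def turn_left(self):
            self.facing = (self.facing + 1) % 4

    karel = Karel()
    karel.turn_left()            # turn ninety degrees to face north
    karel.move(); karel.move()   # travel north two streets
    karel.turn_left()            # face west
    karel.move(); karel.move()   # travel west two avenues
    print(karel.x, karel.y)      # prints -2 2: the newspaper's corner in this toy grid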

The key to solving Karel problems was knowing the obstacles in advance and routing Karel around them. The human programmer can see the grid, which is a map of Karel’s entire world. Karel also has the grid stored in his internal memory; he is “aware” of the grid. The CMU team took a Karel approach when they built their car. They used the laser radar, cameras, and sensors on the car to build a 3-D map of the space. The map wasn’t populated with “objects” that the car “recognized”; instead, it was populated with navigable and non-navigable areas that were identified using machine learning. Objects like other cars were rendered as 3-D blobs. The blobs were Karel-type obstacles.

This was brilliant because it cut down dramatically on the number of variables that Boss or Junior had to solve for. Little Ben had to identify all of the variables in sight—the road, the birds, the pedestrians, the buildings, the traffic cones—and then run a prediction for each variable’s likely future location. It had to run a complex equation for each hypothetical. Boss and Junior didn’t have to do this. Boss and Junior had been preloaded with a 3-D map of the landscape and the route they needed to navigate. Machine learning was used to identify in advance which parts of the 3-D map were navigable. The Junior/Boss approach was a narrow AI solution that relied on better mapping technology.

The car drove and created its own map of the environment. It made a grid, like Karel had a grid. Then, the car only had to consider the aberrations. If the traffic cone wasn’t on the original map, it had to be factored in. If it was on the original grid, it was a stationary object and had been precalculated so that the processor didn’t have to perform image recognition on the fly.
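
In sketch form, the drive-time check becomes almost trivial. The grid below and the sensed “blob” are invented, and real systems use probabilistic occupancy maps rather than plain true/false cells, but the shape of the shortcut is the same.

    # A toy occupancy grid: True means the precomputed map marked the cell
    # navigable, False means it did not. At drive time the car only checks
    # sensed blobs against the grid -- no on-the-fly object recognition.
    precomputed_map = {
        (0, 0): True, (1, 0): True, (2, 0): True,    # a clear lane
        (0, 1): False, (1, 1): True, (2, 1): False,  # parked cars, precalculated as blocked
    }

    def drivable(cell, sensed_blobs):
        if cell in sensed_blobs:   # an aberration: something new since the map was made
            return False
        return precomputed_map.get(cell, False)

    sensed_blobs = {(2, 0)}        # say, a traffic cone that wasn't on the original map
    print(drivable((1, 0), sensed_blobs))   # True: clear on the map and in the sensors
    print(drivable((2, 0), sensed_blobs))   # False: the new cone blocks the cell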

The CMU team had an edge over the other competitors. They had been working on computer-controlled vehicles for years already. ALVINN, a self-driving van, launched at CMU in 1989.3 There was a stroke of enormous good fortune during the development period. Google founder Larry Page happened to become very interested in digital mapping. He attached a bunch of cameras to the outside of a panel van and drove around Mountain View, California, filming the landscape and turning the images into maps. Google then turned the van project into its massive Google Street View mapping program. Page’s vision fit nicely with tech developed by the previously mentioned CMU professor Sebastian Thrun, who was active with the DARPA Challenge team. Thrun and his students developed a program that knit street photos together into maps. Thrun moved from CMU to Stanford. Google bought his tech and folded it into Google Street View.

Something important happened in hardware at this point too. Video and 3-D take up huge amounts of memory space. Moore’s law says that the number of transistors on an integrated circuit doubles roughly every two years, and this increase in capacity means that computer memory has been getting cheaper and cheaper. Suddenly, around 2005, storage was cheap and abundant enough that for the first time it was feasible to make a 3-D map of the entire city of Mountain View and store it in a car’s onboard memory. Cheap storage capacity was a game-changer.

Thrun and the other successful self-driving car engineers discovered that replicating the process of human perception and decision making is both devilishly complicated and impossible with current technology. They decided to ignore it. Usually, people invoke the Wright brothers at this point when talking about this kind of innovation. Before the Wright brothers, people thought that a flying machine had to mimic the action of a bird. The Wright brothers realized that they could make a flying machine without flapping—that gliding with wings was good enough.

The self-driving car programmers realized they could make a vehicle without sentience—that moving around in a grid is good enough. Their final design basically is a highly complicated remote-controlled car. It doesn’t need to have awareness or to know rules for driving. What it uses instead are statistical estimates and the unreasonable effectiveness of data. It’s an incredibly sophisticated cheat that’s very cool and is effective in many situations, but a cheat nonetheless. It reminds me of using cheats to beat a video game. Instead of making a car that could move through the world like a person, these engineers turned the real world into a video game and navigated the car through it.

The statistical approach turns everything into numbers and estimates probabilities. Items in the real world are translated not into items, but into geometric shapes that move in certain directions on a grid at a calculated rate. The computer estimates the probability that a moving object will continue on its trajectory and predicts when the object will intersect with the vehicle. The car slows down or stops if the trajectories will intersect. It’s an elegant solution. It gets approximately the correct result, but for the wrong reason.
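
Here is a minimal sketch of that estimate, assuming constant velocities and invented numbers. A real planner works with probability distributions over many possible trajectories, but the core arithmetic is this simple.

    # Will a moving blob's trajectory intersect ours? Positions in meters,
    # velocities in meters per second, both measured on the same grid.
    def seconds_until_too_close(car_pos, car_vel, blob_pos, blob_vel, threshold_m=2.0):
        rx, ry = blob_pos[0] - car_pos[0], blob_pos[1] - car_pos[1]  # relative position
        vx, vy = blob_vel[0] - car_vel[0], blob_vel[1] - car_vel[1]  # relative velocity
        for t in range(1, 6):                    # look a few seconds ahead
            dx, dy = rx + vx * t, ry + vy * t
            if (dx * dx + dy * dy) ** 0.5 < threshold_m:
                return t                         # predicted near-intersection: slow down or stop
        return None                              # trajectories stay apart: keep driving

    # A blob 20 meters ahead, drifting into our lane while we close at 5 m/s:
    print(seconds_until_too_close((0, 0), (0, 5), (1, 20), (-0.3, 1)))   # prints 5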

This is a sharp contrast to how brains operate. From an Atlantic article in 2017: “Our brains today take in more than 11 million pieces of information at any given moment; because we can process only about 40 of those consciously, our nonconscious mind takes over, using biases and stereotypes and patterns to filter out the noise.”4

How you feel about the car’s autonomy depends on what you want to believe about AI. Lots of people, like Minsky and others, want to believe that computers can think. “We’ve had this A.I. fantasy for almost 60 years now,” said Dennis Mortensen, x.ai’s founder and CEO, to Slate in April 2016. “At every turn we thought the only outcome would be some human-level entity where we could converse with it like you and I are [conversing] right now. That’s going to continue to be a fantasy. I can’t see it in my lifetime or even my kids’ lifetime.”5

Mortensen said that what is possible is “extremely specialized, verticalized A.I.s that understand perhaps only one job, but do that job very well.” This is great—however, driving is not one job. Driving is many jobs simultaneously. The machine-learning approach is great for routine tasks inside a fixed universe of symbols. It’s not great for operating a two-ton killing machine on streets that are teeming with gloriously unpredictable masses of people.

Since the 2007 Grand Challenge, DARPA has moved on from autonomous vehicles. Their current funding priorities don’t include self-driving cars. “Life is by definition unpredictable. It is impossible for programmers to anticipate every problematic or surprising situation that might arise, which means existing ML systems remain susceptible to failures as they encounter the irregularities and unpredictability of real-world circumstances,” said DARPA’s Hava Siegelmann, program manager for the Lifelong Learning Machines Program, in 2017. “Today, if you want to extend an ML system’s ability to perform in a new kind of situation, you have to take the system out of service and retrain it with additional data sets relevant to that new situation. This approach is just not scalable.”6

However, the dream is alive in the commercial sphere. Today, decisions about autonomous vehicle rules are being left to the states. Nevada, California, and Pennsylvania are leading the pack, but at least nine other states have contemplated legislation that would permit some level of autonomous driving.

The fact that the decision is being left to the states is a huge problem. Having fifty different standards is practically impossible to program against. A programmer prefers to write once and run anywhere. If there are fifty states, plus Washington, DC, and the US territories, all of which have different traffic laws and standards for autonomous vehicles, programmers will have to rewrite traffic rules and operational rules for each state. We’ll very quickly end up in the same confused, scattered situation that we face with missing textbooks in schools. States’ rights are an important component of American democracy, but they are a monster to program against. Programmers don’t even like to type; it’s hard to imagine them being so detail-oriented that they voluntarily comply with fifty-plus different state traffic schemas and then manage to communicate these different operating procedures to each customer who buys an autonomous car.

The communication problem surfaces again when we talk about self-driving cars. The National Highway Traffic Safety Administration (NHTSA), the government agency in charge of motor vehicle and highway safety, had to come up with a complex scale to describe autonomous driving so we could talk about it. For a long time, programmers and executives used the term self-driving car without defining specifically what they meant. Again—normal for language, problematic for policy. In an effort to wrangle the Wild West of autonomous vehicles, the NHTSA published a set of categories for autonomous vehicles. The September 2016 Federal Automated Vehicles Policy reads as follows:

There are multiple definitions for various levels of automation and for some time there has been need for standardization to aid clarity and consistency. Therefore, this Policy adopts the SAE International (SAE) definitions for levels of automation. The SAE definitions divide vehicles into levels based on “who does what, when.”

Generally:

At SAE Level 0, the human driver does everything;
At SAE Level 1, an automated system on the vehicle can sometimes assist the human driver conduct some parts of the driving task;
At SAE Level 2, an automated system on the vehicle can actually conduct some parts of the driving task, while the human continues to monitor the driving environment and performs the rest of the driving task;
At SAE Level 3, an automated system can both actually conduct some parts of the driving task and monitor the driving environment in some instances, but the human driver must be ready to take back control when the automated system requests;
At SAE Level 4, an automated system can conduct the driving task and monitor the driving environment, and the human need not take back control, but the automated system can operate only in certain environments and under certain conditions; and
At SAE Level 5, the automated system can perform all driving tasks, under all conditions that a human driver could perform them.

These standards changed at least once, and possibly twice, while I was writing this book—again, reminiscent of the changing school standards. At levels 3 and 4, the vehicle needs to sense its surroundings with complex, expensive sensors. The sensors used are primarily lidar, GPS, IMU, and cameras. The sensor input needs to be turned into binary information that is processed by the computer hardware inside the car. The hardware in this process is the same hardware that formed a “layer” of the turkey sandwich in chapter 2, and the same hardware that the Penn engineers wired into Little Ben’s trunk. Each level requires increasing amounts of computing power to make driving decisions based on the input from the sensors. Nobody has managed yet to create hardware and software powerful enough to be safe for ordinary driving in all locations and weather conditions. “Currently, no vehicle commercially available today exceeds Level 2 autonomy,” wrote Junko Yoshida in an October 2017 article about state-of-the-art computer chips for driving.8 Level 5 doesn’t exist for ordinary driving conditions and probably never will.

The great part of self-driving-car development is that driver-assistance technology has flourished. At levels 0–2, there have been a number of helpful innovations. People really like the idea of a car that can parallel-park itself—and as a small, finite exercise in geometry, it’s a terrific use of technology.

Most of the autonomous vehicle research and some training data are available online on arXiv and in scholarly repositories.9 On GitHub, there is training data available, and there is code that people are using for the Udacity open-source car competition (Thrun’s latest venture). I looked at the Udacity image dataset. It had less information than I imagined it would. One major drawback to the data is that there’s no weirdness built in, and the algorithms can’t predict what isn’t built in. As in the Titanic data, there’s no way to account for strategies like jumping off the sinking ship after all the lifeboats have departed.

In real life, weird stuff happens all the time. Former Waymo leader Chris Urmson, a Carnegie Mellon grad and Grand Challenge winner, laid out some of the strangest observations in a popular YouTube video. Waymo’s test versions of automated cars have been driving around Mountain View for years, collecting data. Urmson laughed as he showed a bunch of kids playing Frogger across the highway or a woman in an electric wheelchair chasing a duck in circles around the middle of the road. These are not common occurrences, but they do happen. People have intelligence; they can accommodate weirdness. Computers aren’t intelligent; they can’t.

We can all think of strange things that we’ve observed while in the car. My own weirdest experience was with an animal. I was driving down winding mountain roads in Vermont with my friend Sarah, on our way to see a waterfall. We turned a blind corner, and there was a huge moose in the middle of the road. I skidded to a stop, my heart racing. I wondered how a self-driving car would handle this kind of situation. I went onto YouTube and watched some of the most popular fan videos of people playing with driver-assistance features. The videos I found were all made by men showing off their cool cars. They were uniformly positive. “It lulls you into a sense of security,” a Wired writer said in a YouTube video about driving on a mostly empty highway in Nevada. He boasted about how little he had to do while using the Tesla Autopilot feature. Even though the directions clearly say to keep both hands on the wheel, he frequently bragged that he could take his hands off the wheel or just use one hand instead of two. He demonstrated some Easter eggs, jokes the programmers hid inside the code. He clicked six times on the steering wheel, and the display changed to show the rainbow road from Mario Kart. He showed a second Easter egg: the driver’s display dinged for “more cowbell,” a reference to a Saturday Night Live skit.

I watched some promotional videos for Waymo. In one, the narrator claimed Waymo’s technology could “see” 360 degrees around the car, plus two football fields ahead. The shape of the car is optimized to allow field of view for the sensors. One major design feature, which isn’t yet perfected, is that the computer must withstand vibrations and heat fluctuations. “We’ve been bolting things onto existing cars for a long time, and started to realize that’s very limiting in what we can do when you’re dealing with the constraint of an existing vehicle,” said Jaime Waydo, a Waymo systems engineer, in a 2014 video. “When it comes to the physical operation of the vehicle, the sensors and the software are really doing all the work. There’s no need for things like a steering wheel and a brake pedal, so all we really had to think about was a button to signal that we’re ready to go. There’s a lot of thought that goes into creating a prototype vehicle. We’re learning a lot about safety.”

Let’s talk about safety. The major argument in favor of self-driving cars is that they will make the roads “safer.” John Krafcik, the CEO of Waymo, has this on his LinkedIn page: “Globally, 1.2 million die each year in road accidents. 95% of the time, it’s human error. Right now, there are about 1 billion cars on the planet. 95% of the time, they sit idly, wasting capital and consuming valuable space in our cities. We need to do better ... Self-driving cars could save thousands of lives, give people greater mobility, and free us from things we find frustrating about driving today.”

Krafcik seems to be blaming drivers. Pesky humans, making human errors. This is technochauvinism. Of course humans are responsible for driving errors. Humans are the only ones driving cars! (Although I did once see what looked like a dog in a Yankees cap driving a miniature Mercedes down a sidewalk on Broadway in Lower Manhattan. I did a double take. The dog’s owner was following behind with a remote control. This led to a delightful afternoon of exploring the subgenre of people who post online videos of animals in remote-controlled vehicles.)

We’ve had cars for a very long time, and we know that humans are going to make mistakes driving cars. They are human. Humans make mistakes. We know this. Even the humans who make software make mistakes. Nobody is a perfect driver. Even the people who write the software for autonomous vehicles are not perfect drivers. When you consider that humans drive trillions of miles every year, and avoid accidents for most of that time, it’s quite impressive.

The same figure for human error comes up over and over. People dying is sad; I don’t mean to minimize death. However, when you see a single statistic like this repeated, it raises suspicions. It usually means that it comes from a single source, meaning that it comes from a special interest group trying to influence public opinion. The figure that Krafcik quoted, 95 percent human error, also appears in a February 2015 report prepared by Santokh Singh, a senior mathematical statistician at Bowhead Systems Management, Inc., who was working under contract with the Mathematical Analysis Division of the National Center for Statistics and Analysis, an office of the NHTSA.10 The report looks at a weighted sample of 5,470 crashes and assigns a cause to each one. The cause can be the driver, the car, or the environment (meaning the road or the weather).

Bowhead Systems Management is a subsidiary of Ukpeaġvik Iñupiat Corporation, a government contracting firm that manages the US Navy’s unmanned autonomous system (UAS) operations in Maryland and Nevada. In other words: Bowhead, a company that makes unmanned autonomous systems for military use, has created the official government statistic that justifies building unmanned autonomous systems (cars) for civilian use.

The National Center for Health Statistics reports the number of deaths from motor vehicle traffic was 35,398 in 2014, the most recent year available. This is a rate of eleven deaths per one hundred thousand people. Overall, the age-adjusted death rate, which accounts for the aging of the population, was 724.6 deaths per 100,000 people.

Lots of people die in vehicle accidents; it’s a major public health issue. In statistical lingo, dying from an injury is called injury mortality. Unintentional motor vehicle traffic–related injuries were the leading cause of injury mortality from 2002 to 2010, followed by unintentional poisoning. The US Department of Transportation’s NHTSA reported a 7.7 percent increase in motor vehicle traffic deaths in 2015. An estimated 35,200 people died from vehicle accidents in 2015, up from the 32,675 reported fatalities in 2014.

We could speculate on the causes. Texting and distracted driving are certainly contributing to the uptick in deaths. One straightforward solution would be to invest more in public transportation. In the Bay Area of California, public transportation is woefully underfunded. The last time I tried to take a subway at rush hour in San Francisco, I had to wait for three trains to pass before I could squeeze into a jam-packed car. On the roads, the situation is even worse. I’m not surprised that Bay Area programmers want to make self-driving cars so that they can do something other than sit in traffic. Based on my limited observations, commuting in the Bay Area means spending an awful lot of time sitting in traffic. However, public transportation funding is a complex issue that requires massive, collaborative effort over a period of years. It involves government bureaucracy. This is exactly the kind of project that tech people don’t want to attack because it takes a really long time and it’s complicated and there aren’t easy fixes.

Meanwhile, the self-driving car remains a fantasy. In 2011, Sebastian Thrun launched Google X, the company’s “moonshot” division. In 2012, he founded Udacity, which also failed. “I’d aspired to give people a profound education—to teach them something substantial. But the data was at odds with this idea,” Thrun told Fast Company. “We have a lousy product.”11

Thrun has been honest about things he’s tried that don’t work—but it seems nobody’s listening. Why not? The simplest explanation may be greed. Tech investor Roger McNamee told the New Yorker: “Some of us actually, as naïve as it sounds, came here to make the world a better place. And we did not succeed. We made some things better, we made some things worse, and in the meantime the libertarians took over, and they do not give a damn about right or wrong. They are here to make money.”12

Finally, in 2017, curious to see how reality measured up to what I was reading, I tried to make an appointment to ride in a self-driving car. I tried Uber first; Pittsburgh isn’t too far away from where I live. The PR person said there weren’t any appointments available. I asked if I could just go to Pittsburgh and hail one. The PR person discouraged me. I realized why: the cars aren’t in wide use. They’re not ready for prime time.

Self-driving cars have problems: They don’t track the center line of the street well on ill-maintained roads. They don’t operate in snow and other bad weather because they can’t “see” in these conditions. The lidar guidance system in an autonomous car works by bouncing laser beams off nearby objects. It estimates how far the objects are by measuring the reflection time. In the rain or snow or dust, the beams bounce off the particles in the air instead of bouncing off obstacles like bicyclists. One self-driving car was spotted going the wrong way down a one-way street. The software apparently didn’t reflect that the street was one way. The cars are easy to confuse because they rely on the same mediocre image recognition algorithms that mislabel pictures of black people as gorillas.13 Most autonomous vehicles use algorithms called deep neural networks, which can be confused by simply putting a sticker or graffiti on a stop sign.14 GPS hacking is a very real danger for autonomous vehicles as well. Pocket-size GPS jammers are illegal, but they are easy to order online for about $50. Commercial truckers commonly use jammers in order to pass for free through GPS-enabled toll booths.15 Self-driving cars navigate by GPS; what happens when a self-driving school bus speeding down the highway loses its navigation system at 75 mph because of a jammer in the next lane?
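
The ranging arithmetic itself is simple: distance is the speed of light times the round-trip time, divided by two. That simplicity is part of the problem, because an echo off a snowflake ten meters away is indistinguishable, by timing alone, from an echo off a bicyclist ten meters away. A back-of-the-envelope sketch, with an invented echo time:

    # Lidar ranging: the sensor times an echo; it cannot tell what it hit.
    SPEED_OF_LIGHT = 299_792_458.0          # meters per second

    def lidar_range_m(round_trip_seconds):
        return SPEED_OF_LIGHT * round_trip_seconds / 2

    # An echo that returns after about 67 nanoseconds puts the reflector
    # roughly ten meters away, whether it is a cyclist or a flurry of snow.
    print(round(lidar_range_m(67e-9), 1))   # prints 10.0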

In the scientific community, there is an undercurrent of skepticism. An AI researcher told me: “I have a Tesla. The Autopilot ... I only use it for highway driving. It doesn’t work for city driving. The technology isn’t there yet. At NVIDIA, they found that self-driving car algorithms mess up an average of every ten minutes.” This observation is consistent with the Tesla user’s manual, which states that the Autopilot should only be used for short periods of time on highways under driver supervision.

Uber received bad press in 2017 after its then-CEO, Travis Kalanick, was filmed yelling at Uber driver Fawzi Kamal. Kamal had lost $97,000 and said he was bankrupt because of Uber’s business strategy of cutting fares so that drivers make as little as ten dollars per hour. At the time, Kalanick’s net worth was $6.3 billion. Kamal told Kalanick about his struggle. Kalanick replied: “Some people don’t like to take responsibility for their own shit. They blame everything in their life on somebody else. Good luck!” The company launched self-driving cars in California in defiance of state regulations. They were shut down after a legal fight. Kalanick personally hired Anthony Levandowski, a DARPA Grand Challenge participant who worked with Thrun at Google X and later Waymo. Levandowski was fired from Uber in May 2017 for failing to cooperate with an investigation into whether he stole intellectual property from Waymo and used it to further Uber’s technology interests.16

In May 2016, Joshua D. Brown of Canton, Ohio, became the first person to die in a self-driving car. Brown, forty years old, was a Navy SEAL and master Explosive Ordnance Disposal (EOD) technician turned technology entrepreneur. He died in his Tesla while using the Autopilot self-driving feature. He had such faith in his car that he was letting the Autopilot do all of the work. It was a bright day, and the car’s sensors failed to pick up a white semi-tractor-trailer that was turning through an intersection. The Tesla drove under the truck. The entire top of the car sheared off. The base of the car kept going, coming to rest several hundred yards away.17

“When used in conjunction with driver oversight, the data is unequivocal that Autopilot reduces driver workload and results in a statistically significant improvement in safety,” Tesla said in a statement after the crash.18 The Associated Press wrote of the crash:

This is not the first time automatic braking systems have malfunctioned, and several have been recalled to fix problems. In November, for instance, Toyota had to recall 31,000 full-sized Lexus and Toyota cars because the automatic braking system radar mistook steel joints or plates in the road for an object ahead and put on the brakes. Also last fall, Ford recalled 37,000 F-150 pickups because they braked with nothing in the way. The company said the radar could become confused when passing a large, reflective truck.

The technology relies on multiple cameras, radar, laser and computers to sense objects and determine if they are in the car’s way, said Mike Harley, an analyst at Kelley Blue Book. Systems like Tesla’s, which rely heavily on cameras, “aren’t sophisticated enough to overcome blindness from bright or low contrast light,” he said.

Harley called the death unfortunate, but said that more deaths can be expected as the autonomous technology is refined.19

The NHTSA investigated the crash and all but said the crash was Brown’s fault, not the computer’s. They did note, however, that Tesla might reconsider its decision to call the feature Autopilot.

Decisions about how autonomous vehicles should react are quite literally life-and-death decisions. The Tesla Model X P90D has a curb weight of 5,381 pounds. For reference, a female Asian elephant weighs about six thousand pounds.

After I failed to book a ride in an Uber self-driving car in Pittsburgh, I tried to schedule a ride at NVIDIA, the company that makes the chips used in autonomous vehicles. They told me that it was a bad time and to check back after the Consumer Electronics Show (CES), a big tradeshow in Las Vegas. I did so; they didn’t get back to me. Waymo said on its website that it isn’t accepting press requests. Finally, to see the state of the art from a consumer perspective, I booked a test drive in a Tesla. On a bright, sunny, crisp winter morning, my family went to the Tesla dealer in Manhattan. The showroom is in the Meatpacking District, under the High Line, on West Twenty-Fifth Street. It’s surrounded by art galleries that used to be auto body shops. Across the street was a wrought-iron outline of a woman. Someone had yarn-bombed it, covering the metal. A peach-colored crochet bikini hung limply on the frame.

We walked in and saw a red Model S sedan. Next to it on the floor was a miniature version, in the identical deep crimson. It was a Radio Flyer version of a Tesla Model S. It was miniature like a Barbie Jeep or a mini John Deere tractor or a Power Wheels car—but it was a Tesla. I was enchanted.

We went out on a test drive in the Model X with a salesperson named Ryan. The X doors open like falcon wings: they furl and unfurl. My son walked up to the car. Ryan beeped the remote, which was shaped like a small Tesla, to open the rear passenger side door. The door opened slowly, halfway. “It senses you standing there,” Ryan said. “It won’t flip open fast and hit you because it sees you.” The door stopped. It wouldn’t open all the way. Ryan beeped the remote again, looked concerned. He went to investigate. We stood on the sidewalk, watching.

Ryan returned, looking relieved. “It’s the sensor,” Ryan explained. The door sensor was right next to a sign reading “No parking on street cleaning days.” The green metal pole was right next to the sensor, which is in the housing over the rear passenger side wheel. The sensor is one of eight cameras mounted in the car body. Because of the pole, Ryan said, the door wouldn’t open all the way. He promised that we could take a picture with the wings up when we got back, when the car would be parked differently.

We got in and Ryan explained where everything was. I inhaled deeply. It smelled like new car and luxury. White “vegan leather” wrapped around the driver’s seat, and the seat back was encased in a shiny black plastic shell that looked like something out of a 1960s James Bond film. Really, the whole experience felt very James Bond.

I put my foot on the brake and the car started. Where a normal car would have buttons, the Model X has a giant touchscreen. There are only two buttons. One is for the hazard lights: “That’s federally mandated,” Ryan said, waving his hand apologetically. The other button is on the right side of the giant touchscreen. It opens the glove compartment.

The ride in an electric car doesn’t jolt you like a car with a gasoline engine. There is a subtle shaking that happens with a gas engine. In the Tesla, this vibration is gone. The car felt quiet and smooth as we pulled out of the parking spot and drove toward the West Side Highway.

I tried to turn on the Autopilot feature via a lever to the left of the steering wheel. I pulled it toward me twice to activate. It beeped, and an orange light on the console blinked. “This is a new car. Autopilot isn’t active,” Ryan explained. “A massive Autopilot update just rolled out a few days ago. It will be another few weeks before the Autopilot works on this car. It needs to gather data.”

“So it doesn’t work?” I asked.

“It works,” Ryan said. “The car is ready for total autonomy, but we can’t implement it yet because, you know, regulations.” “You know, regulations” meant that Joshua Brown died in an Autopilot crash and the NHTSA hadn’t yet finished its investigation—so Tesla had turned off the Autopilot on all cars until the developers could build, test, and roll out new features.

Ryan chatted about the future, which in his view meant Teslas everywhere. “When we have full autonomy, Elon Musk says you should be able to press a button and summon your car no matter where you are. It might take a few days for your car to find you, but it should arrive.” I wondered if it occurred to him that waiting for days for your car to arrive kind of defeats the point of having a car at all.

“Someday” is the most common way to talk about autonomous vehicles. Not if, but when. This seems strange to me. The fact that I couldn’t get a ride at Uber or NVIDIA or Waymo means the same thing as Ryan’s “you know, regulations”: The self-driving car doesn’t really work. Or, it works well enough in easy driving situations: a clear day, an empty highway with recently painted lines. That’s how Uber’s subsidiary Otto (which was founded by Levandowski) managed to pull off a publicity stunt of sending a self-driving beer truck 120 miles down a Colorado interstate. If you set up the conditions just right, it looks like it works. However, the technical drawbacks are abundant. Continuous autonomous driving requires two onboard servers—one for operation, one for backup—and together the servers draw about five thousand watts. That wattage generates a lot of heat. It’s the wattage you’d require to heat a four-hundred-square-foot room. No one has yet figured out how to also incorporate the cooling required to counter this.20

Ryan directed me onto the West Side Highway and into traffic. Ordinarily, I take my foot off the accelerator and drift up to a stop light, braking before I get there—but the Tesla has regenerative braking, which meant that the brakes kicked in when I took my foot off the gas pedal. It felt disorienting, the need to drive differently. Someone honked at me. I couldn’t tell if he honked because I was being weird about the traffic light, if he was giving me a hard time for being in a luxury car, or if he was just an ordinary NYC jerk.

I drove down the highway and turned onto cobblestone-paved Clarkson Street. It felt less bumpy than usual. Ryan directed me just past Houston onto a block-long stretch of smooth road without any driveways and with few pedestrians that stretches along the back of a shipping facility. “Open it up,” Ryan urged me. “There’s nobody around. Try it.”

I didn’t need to be told twice. I pressed the pedal to the metal—I had always wanted to do that—and the car surged ahead. The power was intoxicating. We were all thrust back against the seats with the force of acceleration. “Just like Space Mountain!” said Ryan. My son, in the back seat, agreed. I regretted that the block was so short. We turned back onto the West Side Highway and I hit the accelerator again, just to feel the surge. Everyone was thrown back against the seats again.

“Sorry,” I said. “I love this.”

Ryan nodded reassuringly. “You’re a very good driver,” he told me. I beamed. I realized he probably says this to everyone, but I didn’t care. My husband, I noticed in the rearview mirror, looked a little green.

“This is the safest car on the market,” Ryan said. “The safest car ever made.” He told a story about the NHTSA’s crash testing on the Tesla: it couldn’t crash it. “They tried to flip it—and they couldn’t. They had to get a forklift to flip it over. We did the crash test, where the car drives into a wall—we broke the wall. They dropped a weight on the car, we broke the weight. We’ve broken more pieces of test equipment than any car ever.”

We passed another Tesla in Greenwich Village, and we waved. This is a thing that Tesla owners do: they wave to each other. Drive a Tesla on the highway in San Francisco, and your arm gets tired from waving.

Ryan kept referring to Elon Musk. A cult of personality surrounds Musk, unlike any other car designer. Who designed the Ford Explorer? I have no idea. But Elon Musk, even my son knew. “He’s famous,” my son said. “He was even a guest star on the Simpsons.”

We parked and took a picture of my son and me standing next to the bright white car, its wings up. We got into our family car parked outside. “This feels so old-fashioned now,” my son said. We drove home down the West Side Highway, then over the cobblestones of Clarkson Street. We jolted and bobbled over the stones. It was the exact opposite of the smooth ride we felt in the Tesla. My car felt like it was shaking me at a low level. It was like the time I went to Le Bernardin for lunch, then came home and realized the only thing we had for dinner was hot dogs.

As a car, the Tesla is amazing. As an autonomous vehicle, it leaves me skeptical. Part of the problem is that the machine ethics haven’t been finalized because they are very difficult to articulate. The ethical dilemma is usually framed in terms of the trolley problem, a philosophical exercise. Imagine you’re driving a trolley that’s hurtling down the tracks toward a crowd of people. You can divert it to a different track, but you will hit one person. Which do you choose: certain death for one, or for many? Philosophers have been hired by Google and Uber to work out the ethical issues and embed them in the software. It hasn’t worked well. In October 2016, Fast Company reported that Mercedes programmed its cars to always save the driver and the car’s occupants.21 This is not ideal. Imagine an autonomous Mercedes is skidding toward a crowd of kids standing at a school bus stop next to a tree. The Mercedes’s software will choose to hit the crowd of children instead of the tree because this is the strategy that is most likely to ensure the safety of the driver—whereas a person would likely steer into the tree, because young lives are precious.

Imagine the opposite scenario: the car is programmed to sacrifice the driver and the occupants at the expense of bystanders. Would you get into that car with your child? Would you let anyone in your family ride in it? Do you want to be on the road, or on the sidewalk, or on a bicycle, next to cars that have no drivers and have unreliable software that is designed to kill you or the driver? Do you trust the unknown programmers who are making these decisions on your behalf? In a self-driving car, death is a feature, not a bug.
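
To make the stakes concrete, here is a minimal, entirely hypothetical sketch, in Python, of what such a priority rule might look like once it is reduced to code. Neither the names (Maneuver, choose_maneuver) nor the risk numbers come from any manufacturer’s software; the point is only that the “ethics” ends up as a single flag and a handful of invented numbers chosen by a programmer.

```python
# Hypothetical sketch only: not Mercedes's, or any manufacturer's, actual code.
# It shows how an "always protect the occupants" policy reduces to a few lines
# of branching logic written by a programmer, not chosen by the people on the road.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    occupant_risk: float      # estimated probability of serious harm to occupants
    bystander_risk: float     # estimated probability of serious harm per bystander
    bystanders_exposed: int   # how many people outside the car are endangered


def choose_maneuver(options, protect_occupants_first=True):
    """Pick a maneuver under a crude, hypothetical priority rule.

    If protect_occupants_first is True (the policy Fast Company attributed to
    Mercedes), the car minimizes risk to its occupants and only then considers
    bystanders. Flip the flag and the priorities reverse.
    """
    if protect_occupants_first:
        key = lambda m: (m.occupant_risk, m.bystander_risk * m.bystanders_exposed)
    else:
        key = lambda m: (m.bystander_risk * m.bystanders_exposed, m.occupant_risk)
    return min(options, key=key)


# The school-bus-stop scenario from the text, with made-up numbers:
options = [
    Maneuver("swerve into tree", occupant_risk=0.4, bystander_risk=0.0, bystanders_exposed=0),
    Maneuver("brake toward crowd", occupant_risk=0.05, bystander_risk=0.6, bystanders_exposed=8),
]

print(choose_maneuver(options, protect_occupants_first=True).name)   # "brake toward crowd"
print(choose_maneuver(options, protect_occupants_first=False).name)  # "swerve into tree"
```

Flip one boolean and the car’s victims change. That is the scale of the “small design decisions” engineers are making on everyone’s behalf.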

The trolley problem is a classic teaching example in computer ethics. Many engineers respond to the dilemma in an unsatisfying way. “If you know you can save at least one person, at least save that one. Save the one in the car,” said Christoph von Hugo, Mercedes’s manager of driverless car safety, in an interview with Car and Driver.22 Computer scientists and engineers, following the example set by Minsky and previous generations, don’t tend to think through the precedents they’re establishing or the implications of small design decisions. They ought to, but they often don’t. Engineers, software developers, and computer scientists have minimal ethical training. The Association for Computing Machinery (ACM), the most powerful professional association in computing, does have an ethical code. In 2016, the ACM began revising it for the first time since 1992. The web, remember, launched in 1991 and Facebook launched in 2004.

There’s an ethics requirement in the recommended standard computer science curriculum, but it isn’t enforced. Few universities have a course in computer or engineering ethics on the books. Ethics and morality are beyond the scope of our current discussion, but suffice it to say that this isn’t new territory. Moral considerations and concepts like the social contract are what we use when we get to the outer limits of what we know to be true or what we know how to deal with based on precedent. We imagine our way into a decision that fits with the collective framework of the society in which we live. Those frameworks may be shaped by religious communities or by physical communities. When people don’t have a framework or a sense of commitment to others, however, they tend to make decisions that seem aberrant. In the case of self-driving cars, there’s no way to make sure that the decisions made by individual technologists in corporate office buildings will match with actual collective good. This leads us to ask, again: Who does this technology serve? How does it serve us to use it? If self-driving cars are programmed to save the driver over a group of kindergarteners, why? What does it mean to accept that programming default and get behind the wheel?

Plenty of people, including technologists, are sounding warnings about self-driving cars and how they attempt to tackle very hard problems that haven’t yet been solved. Internet pioneer Jaron Lanier warned of the economic consequences in an interview:

The way self-driving cars work is big data. It’s not some brilliant artificial brain that knows how to drive a car. It’s that the streets are digitized in great detail. So where does the data come from? To a degree, from automated cameras. But no matter where it comes from, at the bottom of the chain there will be someone operating it. It’s not really automated. Whoever that is—maybe somebody wearing Google Glass on their head that sees a new pothole, or somebody on their bike that sees it—only a few people will pick up that data. At that point, when the data becomes rarified, the value should go up. The updating of the input that is needed is more valuable, per bit, than we imagine it would be today.23

Lanier is describing a world in which vehicle safety could depend on monetized data—a dystopia in which the best data goes to the people who can afford to pay the most for it. He’s warning of a likely future path for self-driving cars that is neither safe nor ethical nor in service of the greater good. The problem seems to be that few people are listening. “Self-driving cars are nifty and coming soon” seems to be the accepted wisdom, and nobody seems to care that technologists have been saying “coming soon” for decades now. To date, all self-driving car “experiments” have required a driver and an engineer to be on board at all times. Only a technochauvinist would call this success rather than failure.

A few useful consumer advances have come out of self-driving car projects. My car has cameras embedded in all four sides; the live video from these cameras makes it easier to park. Some luxury cars now have a parallel-parking feature to help the driver get into a tight space. Some cars have a lane-monitoring feature that sounds an alert when the driver strays too close to the lane markings. I know some anxious drivers who really value this feature.

Safety features rarely sell cars, however. New features, like onboard DVD players, in-car Wi-Fi, and integrated Bluetooth, are far more effective at increasing automakers’ profits. That does not necessarily serve the greater good. Safety statistics show that more technology inside cars is not necessarily better for driving. The National Safety Council, a watchdog group, reports that 53 percent of drivers believe that if manufacturers put infotainment dashboards and hands-free technology inside cars, these features must be safe to use. In reality, the opposite is true. The more infotainment technology goes into cars, the more accidents there are. Distracted driving has risen since people started texting on mobile phones behind the wheel. More than three thousand people per year die on US roads in distracted-driving accidents. The National Safety Council estimates that it takes an average of twenty-seven seconds for a driver’s full mental attention to return after checking a phone. Texting while driving is banned in forty-six states, the District of Columbia, Puerto Rico, Guam, and the US Virgin Islands. Nevertheless, drivers persist in using phones to talk or text or find directions while driving. Young people are particularly at fault. Between 2006 and 2015, the share of drivers aged sixteen to twenty-four who were visibly manipulating handheld devices went up from 0.5 percent to 4.9 percent, according to the NHTSA.24

Building self-driving cars to solve safety problems is like deploying nanobots to kill bugs on houseplants. We should focus on making human-assistance systems rather than human-replacement systems. The point is not to make a world run by machines; people are the point. We need human-centered design. One example of human-centered design would be for car manufacturers to include in their standard onboard package a device that blocks the driver’s cell phone. The technology already exists, and it’s customizable: the driver can still call 911 if need be but otherwise can’t call, text, or go online. This would cut down on distracted driving significantly. It would not, however, lead to an economic payday, and the hope of a big payout drives a great deal of the hype around self-driving cars. Few investors are willing to give up that hope.
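
For contrast, here is another minimal sketch, again hypothetical and again in Python, of the kind of allowlist rule such a phone-blocking device could enforce. Nothing in it corresponds to any actual product; the function name and the logic are mine, meant only to show how little machinery the human-assistance approach requires.

```python
# Hypothetical sketch of a while-driving phone filter, not a real vendor's product.
# Emergency calls stay available; everything else is blocked while the car moves.

from typing import Optional

ALLOWED_WHILE_DRIVING = {"911"}  # emergency numbers remain reachable


def allow_action(action: str, number: Optional[str], vehicle_in_motion: bool) -> bool:
    """Return True if the phone should permit this action right now."""
    if not vehicle_in_motion:
        return True  # normal phone behavior when the car is parked
    if action == "call" and number in ALLOWED_WHILE_DRIVING:
        return True  # 911 is always allowed
    return False     # block other calls, texts, and browsing while driving


# A few illustrative checks:
assert allow_action("call", "911", vehicle_in_motion=True)
assert not allow_action("text", "5551234", vehicle_in_motion=True)
assert allow_action("browse", None, vehicle_in_motion=False)
```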

The economics of self-driving cars may come down to public perception. In a 2016 conversation between President Barack Obama and MIT Media Lab director Joi Ito, which was published in Wired, the two men talked about the future of autonomous vehicles.25 “The technology is essentially here,” Obama said.

We have machines that can make a bunch of quick decisions that could drastically reduce traffic fatalities, drastically improve the efficiency of our transportation grid, and help solve things like carbon emissions that are causing the warming of the planet. But Joi made a very elegant point, which is, what are the values that we’re going to embed in the cars? There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules?

Ito replied: “When we did the car trolley problem, we found that most people liked the idea that the driver and the passengers could be sacrificed to save many people. They also said they would never buy a self-driving car.” It should surprise no one that members of the public are both more ethical and more intelligent than the machines we are being encouraged to entrust our lives to.

Notes