Just outside downtown Tempe, Arizona, along a wide, dusty highway, there is a small memorial to the victim of the most famous pedestrian-vehicle crash in history. Two wooden crosses and some flowers stand alongside Mill Avenue, where the dense brick buildings of downtown Tempe thin out into desert and parking lots. Here is where Elaine Herzberg, a forty-nine-year-old homeless woman, was killed by a computer-piloted Uber car on March 18, 2018. She was the first pedestrian ever killed by a self-driving car.
That night around 10 p.m., Herzberg had been walking her bike across the street in a dark location about 360 feet from a crosswalk. The red Hyper-brand bike she was pushing—the kind you can buy for one hundred dollars at Walmart—was loaded with plastic bags containing her belongings, which might help explain why the laser-based sensor system (or LiDAR, for light detection and ranging) in the Volvo SUV did not immediately recognize her as either a bicyclist or a pedestrian.
Normally when a pedestrian is killed, investigators have to resort to unreliable witnesses and guesswork to reconstruct what happened—if anyone even bothers—but this was not an ordinary pedestrian death. The death of Elaine Herzberg was an international media event.
The last moments of her life were recorded by a camera and turned over to police and federal investigators and then viewed by people all over the world. In this, the rarest of cases, everyone could see exactly what happened.
The video footage, shot in black and white from the dashboard camera, shows the SUV proceeding down the dark desert highway. Very shortly before impact, the car’s headlights light up Herzberg’s lower body. She appears suddenly, emerging from a shadow in the road.
Less than a second before her death, the headlights finally light up her face. It is turned away from traffic, directed at the median ahead. She had almost arrived at that landscaped refuge, and until those final milliseconds, she seemed relatively relaxed. She probably expected the driver to notice her and slow down. Herzberg did not know that the car heading toward her was driven by a computer program undergoing high-stakes beta testing on Arizona residents.
Finally, she turns and looks over her shoulder at the approaching car, her eyes appearing to lock with the camera with an expression that is not fully captured. Then her face is obscured, at close distance, by a motion blur, and the video is clipped, right before impact, for her sake and the viewers’.
Some people believe that self-driving cars will someday eliminate pedestrian crashes entirely—or nearly so. A system of perfectly calibrated self-driving cars could reduce traffic fatalities by 90 percent or more, the Atlantic’s Adrienne LaFrance, among others, has said,1 but that assertion relies on many flawed assumptions—including the claim that 94 percent of crashes can be attributed to so-called human error.
Nevertheless, the companies pursuing self-driving cars tout safety as a foundational moral imperative for the technology, and on some level, it is a compelling vision. As discussed in earlier chapters, people are just not very reliable drivers. They get sleepy and drunk and very old and distracted. And they contribute to the deaths of more than thirty-five thousand Americans a year in countless ways.
The early experience with self-driving cars in the United States has been far from utopian, however. An investigation by local police and, later, by the National Transportation Safety Board showed that before striking Herzberg, Uber’s forty-four-hundred-pound Volvo XC90 SUV did not brake significantly. She was hit at about 40 miles per hour, even though the car’s self-driving system detected an object in the road—Herzberg—a full six seconds before the crash. It would later be revealed that Uber’s cars did not “include a consideration for jaywalking pedestrians”—an astonishing oversight.
“Even the most junior human driver knows to expect that people sometimes walk outside of crosswalks,” Jason Levine, executive director of the Center for Auto Safety, told the Washington Post. He added that other autonomous vehicle companies could be testing cars with similar limitations, but no one knows, because there is essentially no day-to-day government oversight of the industry.2
The Herzberg case is a cautionary tale about how self-driving car technology, in the absence of robust protections, can go very badly for the public—and especially the most vulnerable.
At the time Herzberg was hit and killed, there were no specific regulations at all—from either the federal government or the state of Arizona—to protect the public from self-driving cars. The US DOT has issued a set of voluntary best practices3 for autonomous vehicle companies but has yet to impose any binding rules.
Self-driving cars were a common sight in Maricopa County at the time Herzberg was killed. Arizona’s governor, Doug Ducey, had welcomed Uber to test its self-driving cars in 2015, bragging about the state’s industry-friendly environment.4 “While California puts the brakes on innovation and change with more bureaucracy and more regulation, Arizona is paving the way for new technology and new businesses,” Ducey boasted the following year.5
Uber was considered—correctly, it turns out—to be one of the more freewheeling companies testing self-driving cars. California had recently ousted Uber’s Advanced Technologies Group after one of its cars was caught on camera running a red light in San Francisco.6
Although Ducey was enthusiastic—boastful, in fact—about the partnership, many ordinary Arizona residents felt differently. A 2018 New York Times article highlighted a number of vigilante “attacks” on robot cars by enraged residents armed with sticks and knives. A Chandler, Arizona, man told the New York Times, for instance, “I don’t want to be their real-world mistake.”7
Autonomous vehicle testing does raise a lot of ethical questions. Arizonans—people like Elaine Herzberg—did not explicitly consent to have a potentially life-threatening technology beta-tested on them and their families. In other fields—pharmaceuticals, for example—companies are forbidden from testing potentially lethal new products on patients without their explicit consent. Self-driving car companies, meanwhile, are expected to police themselves.
Governors like Ducey, for their part, have been relatively enthusiastic about inviting the companies to test on public streets. Autonomous car testing offers government officials an aura of tech and business friendliness—and perhaps some jobs—usually without requiring any upfront public investment.
Twelve US states currently allow companies to test or market vehicles that operate without a driver.8 Google’s self-driving car operation, Waymo, claimed to have driven twenty million miles on public roads as of January 2020.9 That company already offers driverless taxi service in Arizona to a small group of screened riders, who sign nondisclosure agreements.
Uber and other autonomous vehicle companies like Arizona in part because environments like Mill Avenue—where Herzberg was struck—are so hostile to pedestrians. On Google Maps’ satellite images, the area around the crash appears as a wasteland of highways and parking lots. It is a place designed for machines, not people.
Thanks to environments like Mill Avenue, ubiquitous across Arizona, the state has relatively few pedestrians. That was a selling point for autonomous car companies still trying to work out the bugs in their software. Interacting with pedestrians is one of the more difficult challenges for the artificial intelligence that is beginning to replace drivers. In addition, Arizona’s warm weather eliminates the need to deal with the additional complexity of driving on snow and ice.
It would have been impossible for the state of Arizona or the federal government to know—because they were not monitoring Uber’s self-driving operation in any formal capacity—but there was plenty of warning that a fatality might occur. Just the week before Herzberg was killed, Robbie Miller, a whistleblower at Uber’s Advanced Technologies Group, emailed company executives warning about safety problems: “A car was damaged nearly every other day in February,” he said. “We shouldn’t be hitting things every 15,000 miles.” He also noted prophetically, “Several of the drivers appear to not have been properly vetted or trained.”10
The one thing standing between Herzberg and the front end of the Volvo—in case of a programming failure—was supposed to be a backup driver. That safeguard, it turns out, was completely inadequate.
Following the crash, Uber provided video of the interior of the car as it approaches Herzberg’s bike. In that video, a woman named Rafaela Vasquez is seen in the driver’s seat, her face turned downward, lit up by a screen. An investigation would later reveal that Vasquez was streaming the television show The Voice on her smartphone.11
Had the disastrous encounter between an Uber car and Herzberg played out more than a year earlier, there would have been a second backup driver in the car. But in late 2017, Uber had changed its policy, moving from two backup drivers to one, presumably to save money.12
That was a fateful decision, one that safety experts have criticized. The level of concentration required to monitor a self-driving car for hours and hours is nearly impossible for a single person to maintain. The task is simply too boring for the human brain to remain vigilant for an extended time.
The National Transportation Safety Board (NTSB) refers to this phenomenon as “automation complacency” and said that Vasquez’s inattention—she checked her phone twenty-three times in the three minutes before the crash—was a “typical effect.” In its report on the crash, the NTSB notes that Vasquez’s boredom was rational: she had driven past the crash site seventy-three times before without incident.13
Worse, the system did not even notify Vasquez that an object had been detected in the road, which might have given her time to take over and brake or swerve. Uber, meanwhile, had disabled the Volvo’s built-in automatic emergency braking feature.
Looking back, it is clear that financial considerations outweighed safety for the company. In his book chronicling Uber’s rise, New York Times technology writer Mike Isaac noted that Uber executives gave the self-driving car program the code name “$” because it would allow them to cut out the cost of drivers and create billions in profits.14 Uber wanted to introduce fully driverless taxi service by the end of 2018—a goal that in hindsight looks like wild hubris.
But the financial pressure was intense. Uber had lost a jaw-dropping $577 million in the first quarter of 2018, mainly on its driver-based taxi service.15 Meanwhile, at the time Herzberg was struck, the company was looking ahead to a 2019 initial public offering. Executives hoped it would value the company at $100 billion16—despite its being what Bloomberg Technology described as “a serially unprofitable business.”17 Eliminating drivers from its business model was considered one of the few avenues to profitability for Uber ahead of its much-anticipated stock market debut.18 “A lot of that is the Silicon Valley move-fast-and-break-things attitude,” said auto writer Dan Albert. “[But] when you’re not talking about just apps . . . the things you break are actual people.”19
Uber was not only rushing to stem billions in annual losses; it was also vying with Waymo, Ford, and others to bring self-driving car technology to market first, a race the media covered breathlessly.
All that should help explain why the car did not brake when it detected Herzberg in the road. In its preliminary report, the NTSB explained,
According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior.20
Why was Uber so concerned about “erratic vehicle behavior”? The answer, again, goes back to money.
Self-driving cars can be programmed to drive very cautiously, braking every time the LiDAR system detects some unknown object—such as a plastic bag blowing in the breeze—in the roadway. But riding in a car that is constantly braking is uncomfortable; it makes people nauseated.
Uber was trying to introduce a new product—self-driving taxis—that could start making money later that year, and it wanted those rides to be competitive with its traditional taxi service. So Uber programmed its self-driving cars not to brake.
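To make the tradeoff concrete, here is a toy sketch of that tuning decision. It is purely illustrative: the names, thresholds, and logic are invented for this example and are in no way a reconstruction of Uber’s actual software.

```python
# A toy sketch of the comfort-versus-caution tradeoff described above.
# Every name and threshold here is invented for illustration; nothing
# in this sketch reflects Uber's actual code.
from dataclasses import dataclass

@dataclass
class Detection:
    classification: str       # e.g., "pedestrian", "cyclist", "vehicle", "unknown"
    confidence: float         # classifier's certainty, 0.0 to 1.0
    seconds_to_impact: float  # estimated time until the car reaches the object

def should_emergency_brake(d: Detection,
                           min_confidence: float = 0.9,
                           brake_for_unknowns: bool = False) -> bool:
    """Return True if the planner should command hard braking.

    Lowering min_confidence, or setting brake_for_unknowns=True, makes the
    car stop for nearly anything the sensors flag: safer for people in the
    road, but the ride is full of jarring stops. Raising the bar smooths
    the ride at the cost of reacting late to ambiguous objects, like a
    person pushing a bag-laden bicycle.
    """
    if d.seconds_to_impact > 6.0:   # object still far ahead; keep watching
        return False
    if d.classification in ("pedestrian", "cyclist", "vehicle"):
        return d.confidence >= min_confidence
    return brake_for_unknowns       # the fateful tuning knob

# With brake_for_unknowns=False, an object the system never confidently
# classifies triggers no braking at all, no matter how close it gets.
print(should_emergency_brake(Detection("unknown", 0.4, 2.0)))  # False
```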
It is clear now that the company, operating without any external supervision and under competitive and financial pressure, was acting quite recklessly. Part of that appears to be explained by cultural deficits within the company. According to Isaac, executives at the self-driving car operation that eventually became Uber Advanced Technologies Group, prior to its acquisition by Uber, reportedly pasted stickers around Silicon Valley bearing their informal motto: “safety third.”21
Nevertheless, in 2019, local prosecutors determined that the company would not be charged criminally in the case.22 Vasquez, for her part, may still face charges.
Corinne Kisner, executive director of the National Association of City Transportation Officials, called Uber out for its “negligent safety culture” but emphasized that this case “did not occur in a vacuum.” She said in a statement, “The absence of federal leadership, including mandatory safety standards, contributes to an inherently risky and unaccountable [autonomous vehicle] testing environment.”23
At the time she was killed, Herzberg was reportedly trying to get her life together. A divorced mother of two, she had struggled with drug addiction for more than a decade and had spent some time in jail for petty crimes, mostly related to drug possession. At the time of her death, the investigation showed, she had marijuana and methamphetamines in her system.
She was, however, reportedly well liked and generous—Ms. Elle, they called her in Tempe’s tight-knit homeless community.24 When she died, she was living with a group of other homeless people in an encampment in Papago Park, near the site of the crash.
Around that time, the city of Tempe was making efforts to clear out the encampment, following complaints from neighbors. But instead of simply evicting everyone and destroying the little community, the city sent social service providers out to the park. Herzberg, according to her friends, had arranged to get an apartment and was trying to kick her drug habit.25 She never got that chance.
Jake Fisher, senior director of auto testing at Consumer Reports, said there are really no guarantees that self-driving cars will improve safety for pedestrians at all. Many experts, Fisher included, believe that fully autonomous vehicles may still be decades or more away from market.
Meanwhile, “the testing that is being done absolutely can put pedestrians and cyclists at risk,” he said. “I don’t think it is necessarily decided that self-driving vehicles are going to be better for pedestrians. Despite what you read, I don’t think it’s necessarily going to be a positive outcome for society.”26
In addition, although safety is often touted as the impetus for self-driving cars, the vision presented by industry leaders is sometimes at odds with pedestrian safety and rights. For example, an unnamed auto industry official told the New York Times that perhaps instead of cars simply detecting pedestrians and stopping for them in Manhattan, pedestrian “gates” could be constructed at intersections that would prevent them from crossing against the light.27 Others have suggested that pedestrians or cyclists be forced to wear special homing devices that would allow them to be detected by autonomous vehicles.28 But that itself is a scary possibility, setting up a scenario in which people who fail to wear special equipment when they leave the house not only are killed but are blamed for it.
Perhaps more promising in the near term for addressing the pedestrian safety crisis are advanced technologies that perform certain functions for drivers. Cars coming on the market today are increasingly partly automated, with potentially enormous implications for traffic safety.
Lane keeping, forward collision warning, adaptive cruise control, automatic emergency braking, and pedestrian detection features are becoming more and more common in new cars. Already there is some evidence that these features can help reduce crashes and injuries. An Insurance Institute for Highway Safety study, for example, found that Subaru’s EyeSight—an advanced technology package that includes forward collision warning, automatic emergency braking, adaptive cruise control, and lane departure warning—reduced the rate of pedestrian-related insurance claims by 35 percent.29
Automatic emergency braking (AEB) is one of the most promising of these technologies. This feature, which brakes automatically when an object is detected in the vehicle’s path, has been shown not only to reduce crashes but also to lessen their severity when they do occur. The NHTSA estimates that if AEB were installed in all new cars by 2025, it would prevent twenty-eight thousand crashes, resulting in twelve thousand fewer injuries overall.30
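The core idea is simple enough to sketch. The following is a minimal illustration of the time-to-collision logic that underlies such systems; the thresholds are invented for illustration, and production systems are far more sophisticated and tuned per vehicle.

```python
# A minimal sketch of time-to-collision (TTC) logic of the kind AEB
# systems use. Thresholds are invented for illustration only.

def ttc_seconds(gap_m: float, own_speed_ms: float,
                object_speed_ms: float = 0.0) -> float:
    """Time until impact with an object ahead, assuming constant speeds."""
    closing = own_speed_ms - object_speed_ms
    return float("inf") if closing <= 0 else gap_m / closing

def aeb_command(gap_m: float, own_speed_ms: float) -> str:
    ttc = ttc_seconds(gap_m, own_speed_ms)
    if ttc < 0.9:    # impact imminent: brake at full force
        return "full_brake"
    if ttc < 1.8:    # partial braking while the driver reacts
        return "partial_brake"
    if ttc < 2.6:    # alert the driver first
        return "warn_driver"
    return "no_action"

# Even when a crash cannot be prevented, shedding speed matters:
# pedestrian survival odds improve sharply at lower impact speeds.
print(aeb_command(gap_m=15.0, own_speed_ms=17.9))  # ~40 mph -> "full_brake"
```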
Although these new technologies are making their way into new cars, their introduction has, in many cases, been limited to luxury models. AEB was standard on only about 30 percent of new cars in model year 2017.
According to experts like Fisher, automakers are moving slowly in part because there has been no move from the federal government to mandate these technologies, despite escalating pedestrian fatalities.31 Instead, all the major automakers have signed a voluntary agreement to include AEB in all their new vehicles by model year 2022. But consumer advocacy groups say that voluntary agreements are “weak and unenforceable” and that more direct federal action is needed.32
In addition, more cars, especially higher-end models, are beginning to come with pedestrian detection systems. In model year 2019, pedestrian detection came standard on 38 percent of vehicles sold, according to Consumer Reports.33 But even though it is a very promising frontier in vehicle safety, concerns remain at this stage about its overall effectiveness.
In 2019, AAA tested the pedestrian detection systems in four midsized sedans with dummy pedestrians. The systems performed respectably at 20 miles per hour in daylight conditions, stopping about 40 percent of the time. But at 30 miles per hour, they were practically useless. AAA called them “completely ineffective at night,” when “none of the systems detected or reacted to the adult pedestrian.”34 That was an extremely disappointing result given that three-fourths of fatal pedestrian collisions occur at night.
Without regulation, said Shaun Kildare, research director at the consumer protection group Advocates for Highway and Auto Safety, these features cannot be expected to perform reliably. “The systems that are out there that call themselves automated pedestrian braking aren’t really up to any standard,” he said. “There’s no actual testing protocol that’s universal. The companies can kind of put whatever they want in there and call it what they want.”35
Today there is wide variation in how well different brands’ pedestrian detection systems perform. The Insurance Institute for Highway Safety has started rating the available systems and will incorporate the scores into its influential auto safety ratings, which should motivate automakers to include the technology and give consumers better information about its effectiveness. In addition, Fisher said, beginning in 2020, cars that lack the technology will not be eligible to be one of Consumer Reports’ influential “Top Picks.” “It’s not regulated but we’re trying to do what we can,” he said. “We do give extra points to vehicles that have automatic pedestrian detection and AEB.”36
Lack of regulation and limited deployment may help explain why pedestrian deaths are surging at the very moment promising new safety technology is becoming available. There may simply not be enough equipped cars on the roads yet to make a noticeable dent in traffic fatalities, Fisher said. “The fleet turnover is so slow. The average vehicle’s on the road for about 11 years. Just because they’re selling them now, it’s only a very very tiny fraction of vehicles on the road that can detect and stop for pedestrians.”
In addition, some of the automated technologies, like lane keeping, should be considered more convenience technologies than safety technologies, he said. Could lane keeping, for example, prevent a driver from swerving to avoid a pedestrian walking on the shoulder of a highway? Right now, it is alarming to say, the answer is unknown.
“These systems are being marketed in a way that almost makes it seem like they’re assisting you in driving or even autonomous,” Fisher said. “And we don’t know if they’re helpful or perhaps put more people at risk.”
Another concern is that as cars begin to come standard with potentially helpful automated features, they might encourage drivers to take more risks, counteracting their benefits.
That kind of behavior has been seen with Tesla’s “autopilot” system, which is essentially a package of partially automated features such as automatic parking, lane keeping, lane changing, and adaptive cruise control.
There have been five deaths associated with autopilot: four in the United States and one in China. In many cases, the crashes involved drivers who were wildly inattentive. Tesla driver Joshua Brown, for example, was killed in 2016 when his car drove into a tractor-trailer crossing the highway ahead of him. An NTSB investigation showed that Brown had had his hands off the wheel for almost thirteen minutes prior to the crash.37 But despite what its name suggests, autopilot is not a fully autonomous system; it is safe to use only with constant monitoring by a human driver and only on limited-access highways. German regulators, for example, have, for safety reasons, barred Tesla from using the “misleading” term autopilot.38 Tesla maintains that its partially automated features improve safety, but without access to the company’s data, it is difficult to evaluate the claim.
In academia, this kind of behavior is called “risk compensation”: people adjust their behavior, to some extent, when new safety technologies are introduced, taking on more risk. Risk compensation varies a lot depending on the type of safety feature, however, said Offer Grembek, codirector of Berkeley’s Safe Transportation Research and Education Center. “This phenomenon is much more present when [the] countermeasure or safety improvement is easily perceptible,” he said. “Seat belts, helmets—these things are very perceptible.”39
Safety additions like airbags and antilock brakes, which assist only when needed, are less perceptible to drivers and thus less subject to risk compensation, said Grembek. AEB and pedestrian detection are that kind of feature: not overtly perceptible, deploying only when needed, which is rarely. “This phase of high-level technological improvement to the vehicle, such as automatic braking and maybe some steering assist, are extremely beneficial and are an important step,” Grembek said. “I think of them as a transitional phase to the aspirational full autonomy state that we will likely never get to.” Moreover, even for the most perceptible safety improvements, like seat belts, behavioral adjustments to risk fall far short of negating the benefits, he added.40
While we look to an autonomous future to save lives lost in traffic crashes, many very promising technological solutions that already exist go forgotten or are rejected politically. A prime example is speed governors. In the United States, about ten thousand traffic deaths a year are speeding-related, according to the US Centers for Disease Control and Prevention.41 That is about as many annual deaths as drunk driving causes.
But in the United States, there has not been the political will to make the hard decisions about speeding that could save a lot of lives. The technology already exists, for example, to install speed governors in cars that automatically cap speeds at or around the legal limit.
In Europe, these governors will soon be required. In 2019, the European Parliament mandated that by 2022, all new cars come equipped with intelligent speed assistance: governors that keep cars from exceeding the posted speed limit (though drivers can override them). The decision is expected to save fifteen thousand lives over fifteen years.42
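The core logic of such a governor is almost trivially simple, which underscores that the barrier is political, not technical. Here is a minimal sketch, assuming (as the EU rule allows) that the driver can override the limiter; the function and its names are illustrative, not any manufacturer’s implementation.

```python
# A minimal sketch of an intelligent speed assistance (ISA) governor:
# it caps the speed the powertrain will deliver at the posted limit.
# Names and override behavior here are illustrative assumptions.

def governed_speed(driver_request_kmh: float,
                   posted_limit_kmh: float,
                   override_active: bool = False) -> float:
    """Return the speed the car will actually allow.

    Under the EU rule, the driver can push through the limiter (for
    example, by pressing hard on the accelerator), so the cap is a
    strong default rather than an absolute ceiling.
    """
    if override_active:
        return driver_request_kmh
    return min(driver_request_kmh, posted_limit_kmh)

print(governed_speed(120, 80))        # 80: the governor holds the limit
print(governed_speed(120, 80, True))  # 120: driver override
```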
Meanwhile, in the United States, many new cars come with the technology to regulate speed, but whether to use it is left to consumers. Despite the massive bloodshed related to speeding, there is little to no political will for bolder action. In 2019, bipartisan federal legislation was introduced to require speed limiters—capped at 65 miles per hour—on freight trucks only. Despite tearful pleas from devastated families, it was opposed by the trucking industry and never made it out of committee.43
Under a regulation-averse Trump administration, other opportunities to improve vehicle safety have languished, with little outcry. For example, Toyota had been developing the capacity for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. There is incredible potential in these technologies, which use what are called dedicated short-range communications (DSRC) to prevent collisions. This kind of technology could, for example, prevent red-light-running crashes: a traffic signal could broadcast its state to approaching vehicles, instructing them to brake when the light turns red. The NHTSA has estimated that vehicle-to-vehicle communication alone could save 1,366 lives annually and prevent 615,000 injuries.44
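The mechanics can be sketched simply. The example below is loosely modeled on the Signal Phase and Timing (SPaT) broadcasts that DSRC was designed to carry; the message fields and thresholds here are simplified assumptions, not any automaker’s implementation.

```python
# A sketch of the V2I red-light idea described above, loosely modeled on
# SAE J2735 Signal Phase and Timing (SPaT) messages. Fields and
# thresholds are simplified assumptions for illustration.
from dataclasses import dataclass

@dataclass
class SpatMessage:
    intersection_id: int
    signal_state: str               # "red", "yellow", or "green"
    seconds_to_intersection: float  # computed from the car's position and speed

def v2i_response(msg: SpatMessage, driver_is_braking: bool) -> str:
    """Decide how the car should react to a broadcast signal state."""
    if msg.signal_state == "red" and msg.seconds_to_intersection < 3.0:
        # The light ahead is red and the car is seconds away. If the
        # driver is already braking, stay out of the way; otherwise
        # intervene before the car runs the red.
        return "no_action" if driver_is_braking else "auto_brake"
    return "no_action"

print(v2i_response(SpatMessage(42, "red", 2.0), driver_is_braking=False))
# -> "auto_brake"
```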
V2V and V2I safety systems have a big potential benefit over autonomous vehicles as well: they can be installed retroactively in used cars. A safety improvement based on autonomous vehicles, by contrast, would require waiting perhaps fifteen years for the US vehicle fleet to entirely turn over.
For a time a few years ago, it looked as if this kind of safety technology—V2V, V2I—was inevitable. In late 2016, the Obama administration proposed requiring DSRC in all new vehicles. Toyota had planned to begin installing DSRC in cars in 2021 and to make it standard by the mid-2020s.45 Cadillac had also installed it in some of its cars beginning in 2017. But instead of fostering the technology, the Trump administration has left V2V and V2I communication for dead.
DSRC requires a “dedicated and adequate spectrum” in which automakers can be assured they will not have to worry about hacking or more innocent kinds of interference. But the Trump administration’s Federal Communications Commission (FCC) chair, Ajit Pai, announced in late 2019 that he was beginning a rule-making process to open up to other uses half of the 5.9 GHz band that had, for twenty years, been reserved for DSRC. Tech companies like Facebook, as well as right-wing groups like the Koch brothers’ political arm, Americans for Prosperity, supported the move.
Even before then, auto companies had been abandoning their V2V and V2I plans as it became clear that the federal government was not going to preserve the spectrum. In 2019, for example, Toyota announced that it had stopped pursuing the technology in the United States. Automotive News wrote, “Toyota said Friday’s decision was based on ‘a range of factors, including the need for greater automotive industry commitment as well as federal government’—i.e., the Trump administration—‘support to preserve the 5.9 GHz spectrum band for DSRC.’”46
Meanwhile, regular Americans were paying a high price for lack of action on safety technologies. In 2017, red-light-running deaths hit a ten-year high, rising by more than one-fourth since 2012, an AAA study found.47
The fact that currently available technology that could save thousands of lives is being rejected should make everyone question the promises of a fatality-free future with self-driving cars. In addition, there is still no real clarity on how autonomous vehicles would handle important traffic safety questions, such as whether the cars would be required to obey the speed limit.
According to Albert, the obstacles to better traffic safety currently are more political than technological. “Automakers and tech companies do things to earn money, not save lives,” he said. “Everything from speed limits to seat belt laws are not a function of physics but of contested ideas about what’s right and proper.”48