— 11 — A Death in the Desert

A LITTLE AFTER NINE IN the morning on a sunny Saturday in October 2017, Chris Urmson stepped onto the green rectangle known as the Mall at the center of Carnegie Mellon University’s campus in Pittsburgh. Wearing blue jeans, canvas sneakers, and a subtly checkered sport coat, he parked the black carry-on suitcase he was rolling behind him on the paved path and walked over the carefully manicured grass toward the four vehicles standing in a line.

The quartet stood in front of Hamerschlag Hall, which was called Machinery Hall when it was built in 1906, one of the first buildings to house a new school dedicated to the pursuit of technology. Pittsburgh steel magnate Andrew Carnegie wanted a campus whose layout resembled that of an “explorer’s ship.” This arching, cream-colored stone building, an emblem of the era’s City Beautiful and Beaux Arts movements, was the helm. Its defining feature was the rotunda that resembled a crown but was in fact a well-disguised smokestack. At the dawn of the twentieth century, the university was too far from downtown Pittsburgh to connect to the city’s power grid, so it had to produce its own electricity. A century on, the rooms where every freshman had been required to spend two weeks shoveling coal to power the school had been converted into “clean rooms,” free of any particles or dust that could interfere with the computer chips and hardware that students now created there. This evolving home of technological wonders was the perfect backdrop for the day’s photo op.

The oldest of the four vehicles was Terregator, the six-wheeled mine-exploring and mapping robot that looked like a desk and moved about as fast as one. On its right was H1ghlander, the cardinal-red Humvee that had come up limp in the desert outside Primm, Nevada, in the 2005 DARPA Grand Challenge. To its left was Boss, the Chevrolet Tahoe with the huge GM logo on its hood that had delivered Carnegie Mellon’s long-delayed victory in the 2007 Urban Challenge. Urmson walked past them all, drawn to the vehicle at the end of the line, the original Red Team racer. The one for which he had left the Chilean desert, the one that had sent him into the Mojave and then onto a lifelong quest to render possible the impossible.

He ran his hand along the machine’s rough metal edges, feeling the damage from the crash (Too fast! Too fast!) that almost sank its chances in the first Grand Challenge, in 2004. Circling around to its left side, Urmson moved past the steel square that passed for a seat, where he had spent so many hours with his hands hovering over the go-kart wheel, his right side aching from the weight of the massive electronics box pressing into him every time the vehicle turned. Standing in front of the box, Urmson undid a pair of beaten-up latches and flipped open a small hatch. He twisted his body to reach his hand—one of its nails permanently chipped by the Humvee’s cooling fan fourteen years earlier—past where his eyes could see. As his fingers confirmed that the circuit boards he had programmed to conquer the desert were still there, Urmson grinned and let out a small “ha!” Sandstorm still had its heart.

Under his sport coat, the Canadian engineer wore a purple T-shirt reading “Aurora,” the name of the self-driving car startup he’d launched in 2016, a few months after stepping down from the Google team. His cofounders had led early autonomy efforts at Uber and Tesla. The trio framed Aurora as a fresh start built on what each of them had learned, and took an open-minded approach to the business. They kept Aurora independent but built a collection of automaker partners, including Hyundai, Kia, and Byton, a Tesla-like, all-electric startup that knew autonomous driving was a must-have feature, especially for an industry newcomer. Aurora’s plan was to work as a sort of supplier, building what it called its “Driver” for anyone who was interested. Urmson spent most of his time in Silicon Valley; he had headed east to check in on the company’s Pittsburgh office, and for a reunion. It had been a decade since Tartan Racing’s Boss won the Urban Challenge, the culmination of a years-long effort that pushed Urmson and many others to work harder than they ever had before, harder than they’d thought possible. Carnegie Mellon was marking the anniversary with a small conference, a chance to get the old team back together and to talk about the emerging world of self-driving cars that all their sweat and stress and pain had created.

Urmson spent fifteen minutes or so on the lawn, inspecting the robots. He told stories from his days with the Red Team and explained various features to the few people around, including three of his own employees, young enough to have been in grade school when their boss was cracking the code for a challenge he thought was unwinnable. He had long since been vindicated. His job now was to deliver that code to the public, and to wrestle down the pesky details that came with any new technology: operational efficiency, regulations, insurance, business plans. He was far from the only one seeking the answers.

With the day’s festivities about to begin, Urmson led the group to the engineering building reserved for the get-together. He held open doors for everyone as he navigated by muscle memory the campus where he had spent so many years. Where he would likely have been content to spend many more, had Sebastian Thrun not drawn him to the West Coast.

The lecture hall’s seats, their upholstery stitched with tiny zeroes and ones, were filled with familiar faces. Tony Tether had driven up from Washington, where he worked as a consultant. One of his clients was a Lidar company with scores of competitors in what had become one of the hottest fields in tech. Less than a year into his deal with Ford, Argo AI CEO Bryan Salesky was getting the company off the ground and onto the streets of Pittsburgh. Kevin Peterson of the Red Team was visiting from San Francisco, where he ran a company building robots that rolled along sidewalks delivering food. GM’s Jim Nickolaou had made the three-hundred-mile drive from Detroit in a Cadillac CT6, enjoying the Super Cruise system he’d helped design.

At the center of the celebration, of course, was Red Whittaker. Approaching his seventieth birthday, the former Marine still scoffed at the idea of relaxing on weekends, let alone retiring. He’d spent years working on what became known as the Google Lunar X Prize, an open challenge to land a robot on the Moon and have it travel five hundred meters, beaming data and images back to Earth. Whittaker hadn’t made it happen. Neither had anyone else, and the competition would be officially ended in early 2018—a sign that not just any open competition could spark innovation. DARPA, on the other hand, had built off the self-driving challenges with a series of similar prize-based contests addressing robotics, cybersecurity, infectious disease prediction, and more.

The lunar loss hadn’t slowed Whittaker. He was already going hard on a new generation of nuclear reactor robots. He had found the time, though, to play emcee for this event, which included a celebratory dinner and a series of panel discussions covering robotics history and the future of autonomous vehicles. During dinner, Whittaker took the opportunity to resolve a mystery that had troubled him and many others since October 8, 2005: What had happened to H1ghlander that day in the desert in the second Grand Challenge, when it gave way to Stanford?

While booting up the old Humvee to move it to its place on the lawn, Whittaker had brushed against a small black box, and heard the engine slow. That box modulated how much power ran to the engine. He realized that when H1ghlander had flipped during that practice run just before the 2005 Grand Challenge, that module had been damaged, so that anytime something knocked against it, the module cut the engine’s power to nearly nothing. That’s why H1ghlander hadn’t just stopped on the roads outside Primm, but slowed and accelerated at an inexplicable rhythm, until Sebastian Thrun’s robot left it behind. The Red Team’s lengthy autopsy had focused on the sensors, the code, the work they had done. They missed this little thing, one of the electrical components that came stock with the military truck. Announcing the discovery, Whittaker held the black box up in his hand. “How about that, buddy?” he said to Urmson, slapping him on the back. “You’re off the hook!”

In 2005, that malfunction had been a source of tremendous pain. By 2017, it had morphed into an almost tender memory, the way you might think of the high school sweetheart who dumped you. Much had changed in those years, and many of the onetime teammates filling those lecture hall seats were now competitors, motivated as much by the interests of investors as by any thrill of problem solving.

The seeds planted by the DARPA Challenges had grown into an ecosystem. Gone were the doubts that autonomous driving would reshape daily life; they’d been replaced by the central question: Who would gain the most from delivering this technological shift, and who would suffer its fallout? Every major automaker and most serious tech companies were pursuing autonomous driving in some way. Countless startups were seeking funding to create their own software or fill some niche. National legislation and regulations were in the pipeline. Unions were bracing themselves for yet another technological wave that could wash away the jobs that had helped build large swaths of America’s middle class. The basics of the software were so established that building a self-driving car was a common project for college-level engineering students. Sebastian Thrun was using Udacity, his online education company, to train thousands of self-driving engineers a year. In 2016, when he first offered the course, thirty-five thousand people said they were interested in enrolling. Dave Hall had opened a massive manufacturing plant in San Jose, building ten thousand Lidar sensors a year and hoping to fend off the scores of competitors joining the industry he’d created. The American military had come nowhere near meeting the mandate that had sparked the Grand Challenge, to make one-third of its ground vehicles unmanned by 2015. But it, too, continued to pursue the technology.

It was impossible to know what that chase would produce, but the development of the human-driven vehicle offered a lesson. When automobiles first hit the road, people called them horseless carriages, and for good reason: They worked just like horse-drawn carriages, without the clopping and the manure. But as auto technology evolved, they became capable of much more, and people started calling them cars. The common understanding of driverless cars—as taxis without the taxi driver, or personal cars whose owners don’t have to hold the wheel—displayed the same lack of imagination. This latest technological shift was sure to produce changes nobody could foresee, and likely outgrow a name pulled from the paradigm it replaced. In just a few years, the experts said, autonomous driving would generate billions of dollars. It would redefine or vaporize the way one in nine Americans made a living, and unlock untold wealth for whoever could make that happen. At the end of 2016, Google’s Chauffeur project had undergone a metamorphosis. It became its own company, a subsidiary of Alphabet, which Google had created as its parent company in a 2015 reorganization. And it was now called Waymo. As a final challenge before this step, and a proof of readiness, the team (minus most of the engineers who’d been there for the Larry 1K) had sent a blind man named Steve Mahan for a carefully planned ride around Austin, Texas, without a safety driver to intervene if the car made a mistake. The team had grown to hundreds of employees and logged 2.5 million miles on public roads, plus billions more in computer simulations. That growth had engendered multiple office relocations, and during one move, the original team’s collection of ten empty, signed champagne bottles, mementos of completing the Larry 1K challenge, had disappeared.

The explorers had undergone their own changes. They were settlers now, trying to establish footholds in what had become a proper industry. And not everyone was getting along with his neighbors. “It’s a little sad to see what’s happening in the industry today with some of the questionable ethical behavior that’s out there,” Urmson said. “I guess that’s part of the evolution, part of how you tell this is immensely important and valuable. But for me, there’s a little bit of nostalgia for when everyone was pulling in the same direction.”

CMU’s conference was heavy on nostalgia, but in conversations here and there, one could hear talk of the trouble Urmson was alluding to, always associated with a single name, the person who’d done so much to light the powder keg that DARPA’s Challenges had filled.


In 2017, no one was toasting Anthony Levandowski. No one called him an “all-around whiz kid” or superstar anymore. Not after February 23, when Waymo filed a bombshell lawsuit against Uber, accusing it of a “calculated theft” of its trade secrets, violation of its patents, and all-around cheating to get ahead in the race to produce a self-driving car.

From the start, Waymo’s argument boiled down to a simple accusatory narrative: A desperate Uber paid Anthony Levandowski an enormous amount of money to steal Waymo’s intellectual property—chiefly its Lidar design—and use it to build a self-driving car for Uber. The suit asked for nearly $2 billion in damages, a sum that signaled how much value self-driving technology was poised to create. The consequences looked to be devastating. If Uber lost the case, it might have to pay Waymo a mountain range of money in restitution and kill the self-driving program CEO Travis Kalanick considered vital to the company’s long-term viability. If the Justice Department got involved and brought criminal charges, a scarier threat loomed: trade secret theft could land Levandowski in prison.

So began a massive legal fight, as Uber’s phalanx of lawyers lined up against Waymo’s. In a San Francisco federal courthouse, Judge William Alsup dashed Uber’s hopes of moving the case into private arbitration and rejected Waymo’s bid to stop Uber from working on its self-driving tech until the case concluded. But nobody seemed to doubt that Levandowski had walked out of Google with those downloaded files. “You have one of the strongest records I’ve seen for a long time of anybody doing something that bad,” Alsup told a Waymo lawyer. He even took the extraordinary step of referring the case to the US Attorney’s Office “for investigation of possible theft of trade secrets based on the evidentiary record supplied thus far.”

Waymo, however, wasn’t suing Levandowski. This was Waymo v. Uber, and for its claims to stick, Waymo had to prove not just that the files had left its servers, but that Uber had used them to its advantage. So followed months of discovery and depositions, as 129 lawyers sifted through millions of emails and technical documents for evidence of good or bad behavior, filing more than one hundred thousand pages of briefs, depositions, and motions.

Missing from all the back-and-forth over Levandowski’s behavior was the man himself. Recognizing that he was at risk no matter what happened to Uber, Levandowski invoked his Fifth Amendment right against self-incrimination, refusing to testify or turn over any documents. As detailed in an Uber due diligence report later introduced into evidence, Levandowski had told Kalanick he had Google files before Uber acquired Otto, and said that he had destroyed the discs containing them. The investigators also found plentiful evidence of questionable behavior: Levandowski had fifty thousand Google emails on his personal laptop (he had synced his personal and work email accounts in 2014). His various devices contained Google videos, patent applications, and testing files. He had kept an old Street View prototype camera and assorted robot parts in his garage (those, he said, he had also destroyed). He made a habit of deleting his text messages and telling others to do the same.

Uber’s lawyers argued that those files were of minor importance, but when it came to Levandowski’s fate, that mattered little. The suit was threatening Uber’s future, and Levandowski’s refusal to testify made things worse. Kalanick still thought Levandowski was one of the world’s best minds when it came to autonomous driving. But the CEO did what he always did: He moved to protect Uber. At the end of May 2017, he fired his self-driving superstar. Levandowski would get close to nothing from that reported $680 million Otto acquisition, because almost all of that money hinged on his team’s hitting various milestones over the course of years.

Kalanick himself resigned as Uber’s CEO just a month later, amid a swirl of scandals that included workplace sexual harassment and discrimination, psychological manipulation of the company’s drivers, and deceptive practices in dealing with regulatory authorities. In the end, some of his biggest investors turned against him, even suing him for fraud, alleging that when he presented the Otto acquisition to his board of directors, he’d hidden the fact that Levandowski had taken data from Waymo.

When the lawyers finally went before a jury on Monday, February 5, 2018, the narratives they laid out in their opening arguments were simple. Waymo argued that Uber was desperate to catch up in the race to develop a self-driving car, and had schemed with Levandowski to close the gap Waymo had created by starting its research long before any other company realized the tech’s potential. Uber painted a picture of Waymo as a market leader threatened by younger, faster competition, and willing to take extreme measures to ensure its engineers stayed within the castle walls. Whatever Levandowski had done, that was on him.

Over four days, as Waymo made its case, the jury heard testimony from Travis Kalanick, John Krafcik, and Dmitri Dolgov (who had taken Chris Urmson’s place as Waymo’s lead engineer), among others. But behind the scenes, the lawyers and the executives were negotiating. Waymo had battered Uber’s reputation, but failed to produce any slam-dunk evidence that the ridehail giant had profited from Levandowski’s machinations. Dara Khosrowshahi, who replaced Kalanick as Uber CEO in August 2017, would be happy to defang one of the many potentially deadly scandals he’d inherited. So when the clock hit 7:30 on Friday morning—Judge Alsup liked to start early—the reporters squeezed into the courtroom’s wooden benches were confused to find the lawyers and the judge missing. Twenty minutes later, Alsup took his seat and gave the floor to a lawyer for Waymo, who announced that the parties had reached a settlement and moved to have the case dismissed. Waymo would drop the case in exchange for 0.34 percent of Uber’s equity—worth about $245 million—and a promise that Uber would not use any of Waymo’s hardware or software in its self-driving cars. “All right,” Alsup said. “This case is ancient history.”


Levandowski had been silent since Waymo filed its explosive accusations. The press that had long been friendly to him started digging into his history, no longer satisfied with portraying him as the creative kid so eager to make robots real. WIRED revealed the details of Google’s acquisition of 510 Systems and Anthony’s Robots in 2011, the $20 million deal structured to reward Levandowski and not his employees. The same story noted that in September 2015, shortly before he was due to receive his $120 million bonus, Levandowski created Way of the Future, a church dedicated to the idea that as artificial intelligence progressed, machines were bound to rule over humans. “We’re basically creating God,” Levandowski said. And once humans shared the planet with something orders of magnitude smarter than them, there was no use trying to contain or control it. “It is for sure gonna get out of the cage.” Way of the Future was a bid, he said, to show the computers that humans were on their side, to become pets rather than livestock.

Eight months after the trial’s conclusion, the New Yorker ran a story replete with details that further tarnished Levandowski’s reputation, including his “I Drink Your Milkshake” T-shirt and the argument with Isaac Taylor that led to the risky highway encounter with the Toyota Camry. The public perception of Levandowski went from talented, driven engineer to self-serving, irresponsible jerk. Even his nanny got in a kick, filing a bizarre lawsuit in which she claimed Levandowski had often failed to pay her, and recounted things she’d heard him say, including that he’d considered fleeing to Canada when Waymo made its accusations. Levandowski called that suit “a work of fiction,” but paid the nanny an undisclosed amount of money to drop it, which she did. One story from car news website Jalopnik, recounting the juiciest details from the New Yorker’s story, carried the headline “The Engineer in the Google vs. Uber ‘Stolen Tech’ Case Really Was Terrible.”

Among many of his former colleagues, though, Levandowski maintained a peculiar kind of goodwill. “I like Anthony. I’m just afraid to have him around,” Velodyne’s Dave Hall said. Sebastian Thrun still marveled at his talent for getting things done and inspiring others. “He’s very underestimated in the entire scandal. I think he’s a way better person as a human being, and he’s a way better executor than the way he’s portrayed in the media,” Thrun said, adding a careful caveat: “His attitude toward disclosure and truth is not anywhere near my ethical standards.” And while plenty who worked with Levandowski were happy to see him go down—dismissing him as “a weasel” and “evil”—others echoed that two-part evaluation: Anthony’s got a lot of great qualities, but it was always going to end this way. He had skated around trouble for years. Now, the ice buckled under his feet.


In a way, Waymo v. Uber was premised on an early incarnation of the self-driving world, when Google was the only game in town, and starting a viable competitor hinged on winning away its brainpower. By early 2017, the expert population had exploded, and the various contenders found their own ways to pursue an autonomous future. While Waymo and Uber clawed at each other, General Motors’ Cruise went on to raise more than $7 billion by early 2019. Ford had Bryan Salesky’s Argo AI. Chris Urmson started Aurora with a clean slate. Chauffeur alums Dave Ferguson and Jiajun Zhu launched Nuro, a startup focused on self-driving delivery robots. Don Burnette, who had cofounded Otto with Levandowski, left Uber to run his own robo-trucking effort, Kodiak Robotics. Zoox was raising billions of dollars with a promise to redesign not just the driver, but the idea of the car itself, developing a symmetrical, bidirectional custom vehicle. Alisyn Malek had left GM to help run May Mobility, making shuttles that drove themselves short distances along simple routes. Among them, they employed thousands of people, enough to make it hard to believe that Google had started this industry in 2009 with a handful of engineers.

The quest now was to prove not that cars could drive themselves, but that one could make a business out of it. That regulators and insurance agents and lawyers and most of all the public could be convinced that this technology was safe—and worth paying for. But just five weeks after Judge Alsup told the Waymo and Uber lawyers to clear out of his courtroom, the odds of doing that appeared to plummet.

After Anthony Levandowski refused to get a testing permit to run Uber’s autonomous cars in California in December 2016, he accepted Arizona governor Doug Ducey’s invitation to test in his state. It was a good testing ground. The sunny weather was kind to sensors that weren’t so good in snow and rain. Most streets, developed for a car-dominated transportation network in the second half of the twentieth century, were wide and straight. Few people walked or cycled anywhere, minimizing complicating factors. Best of all, the state put virtually no limits on who operated within its borders, or how. That was key, because every serious competitor believed that testing on public roads was the only way to properly train and evaluate the tech, and that paying a human to sit in the driver’s seat and retake control if necessary was the way to keep everyone safe.

At about 9:00 on the night of Sunday, March 18, 2018, Rafaela Vasquez climbed into one of the Volvo XC90 SUVs that Uber had outfitted with its suite of sensors and self-driving software. She had been given the route she was supposed to drive on a loop for her nearly eight-hour shift, noting any problems on the custom tablet that took the place of the car’s center screen. Before starting as a safety operator for Uber in the Phoenix suburb of Tempe, forty-four-year-old Vasquez had taken a three-week training course, including a week in Pittsburgh, with classroom time spent going over the technology and testing protocols, and time on a track learning how to maneuver a car out of dangerous situations. She’d been trained to keep her hands an inch or two from the steering wheel and her right foot hovering over the brake pedal. She’d been told to remain vigilant and be ready to take control of the vehicle, and that using one’s phone while the car was driving was a fireable offense. Uber’s tech had improved markedly in the past year, but it was nowhere near reliable enough to operate without human supervision.

At 9:58, the Volvo was driving north in the right lane of Mill Avenue, going the speed limit in a 45 mph zone. The car’s radar, Lidar, and cameras detected the presence of a forty-nine-year-old woman named Elaine Herzberg, who stepped from the median into the road, pushing an orange and black bicycle loaded with plastic bags. As Herzberg walked across the shoulder and the two left lanes, Uber’s software alternately classified her as a vehicle, a cyclist, and an unidentified object, an investigation by the National Transportation Safety Board later revealed. It did not identify her as a person on foot because Herzberg was jaywalking. Uber hadn’t taught its cars to look for pedestrians outside of crosswalks, the safety investigators found. It was a galling failure of imagination and a sign that, even after years of work, the makers of robo-cars didn’t always appreciate how their technology had to adapt to a human world.

When just twenty-five meters separated Herzberg and the car, the computer determined it needed to slam on the brakes. But Uber’s engineers had limited the car’s ability to make an emergency stopping maneuver, fearful of it making the wrong call and causing a crash by halting for no reason. That’s why they had the safety driver there, after all. But dash cam footage released by the police showed Vasquez wasn’t keeping her eyes on the road. She was looking down at something out of the camera’s field of view, near her right knee. According to the National Transportation Safety Board, in the three minutes before Herzberg started her crossing, Vasquez took her eyes away from the road twenty-two times. Seven of those looks lasted more than three seconds. Vasquez told investigators she was looking at the tablet that displayed information about the autonomous system, but a police inquiry determined that her silver LG smartphone was streaming an episode of the NBC singing competition The Voice at the time. Whatever the truth, she didn’t see Herzberg until a fraction of a second before the car struck her at nearly 45 mph, throwing her seventy-five feet. Vasquez stopped the car and dialed 911. Herzberg died at the hospital that night, the first bystander killed by a self-driving car.

By Monday morning, the crash was national news. Uber immediately parked its cars in Tempe, Pittsburgh, San Francisco (where it had finally gotten a permit and done some testing), and Toronto, where it had hired a team of high-profile artificial intelligence researchers after firing Levandowski as its self-driving lead. Developers like Waymo were quick to argue that their system would not have made the same mistake. Uber was among the few companies that put just one safety operator in its cars, which made it easier for that person to break the rules. Worse, Uber insiders said, the cars had been performing terribly. One had even driven onto a Pittsburgh sidewalk. In a March 13 email, Robbie Miller, an Uber operations manager who’d helped develop Chauffeur’s testing program, alerted his bosses to serious problems. “The cars are routinely in accidents resulting in damage. This is usually the result of poor behavior of the operator or the AV technology. A car was damaged nearly every other day in February. We shouldn’t be hitting things every 15,000 miles,” he wrote in an email later published by The Information. “Repeated infractions for poor driving rarely results [sic] in termination. Several of the drivers appear to not have been properly vetted or trained.” But critics inside the company said that Uber’s self-driving leadership prized logging miles as a metric to show investors and the public that the program was humming along in the wake of the Waymo lawsuit and Levandowski’s departure. Miller’s manager told him the company would look into his concerns. Uber’s car killed Herzberg five days later.

Whatever Uber’s faults may have been, the crash threatened to impugn the very idea of a self-driving car. This was the sort of death that robots should prevent. Uber’s failure raised questions that the young self-driving industry had so far elided, like who was liable, financially or criminally, in the event of a crash. Uber quickly reached an undisclosed settlement with Herzberg’s family. Authorities eventually charged Vasquez with negligent homicide in September 2020; she pleaded not guilty. Uber evaded prosecution, despite criticism that it had set her up to fail. But the tragedy left open the question of whether cities and states should be so eager to have robots roaming their streets, even if it gave them the gloss of being home to new technology. Arizona governor Doug Ducey—who had welcomed Uber “with open arms and wide open roads” in 2016—banned the company’s robo-cars from the state. Uber soon shuttered its Tempe operation, firing all 254 safety operators who staffed it, including Vasquez.

The aftermath of the crash didn’t answer the harder questions. What were the ethics of testing on public roads, around people who had no choice but to participate in a science experiment? Was having a human in the car, or even two, enough to guarantee the public’s safety? And if the technology was still making such simple errors, with disastrous consequences, when might it be ready to start saving lives instead of taking them? How would its creators know when it was ready, and how would they prove it to a rightfully wary public?


By the end of 2018, that public would be asked to make a leap of faith.

Every year around late October or early November, the Parks Department workers of Chandler, Arizona, started gathering tumbleweeds. They roamed the outskirts of the Phoenix suburb, grabbing the prickly, uprooted, rolling bushes and tossing them into a custom-made trailer. When they had gathered a thousand or more of the things, looking for a variety of shapes and sizes, they attached them to a twenty-five-foot-tall, conical chicken-wire frame. After painting the result white, they covered it with flame-retardant chemicals, sixty-five pounds of glitter, and about twelve hundred holiday lights. In the Sonoran Desert, this was what passed for a Christmas tree.

Chandler is a predominantly white and wealthy city of 250,000 people, a land of large houses with big yards, palm trees, cacti, a skeletal public transit system, strip malls, and indoor malls. As 2018 drew to a close and municipal workers celebrated by lighting that year’s tumbleweed tower, John Krafcik announced that Waymo was ready to deliver the future. And that Chandler, with its good weather and friendly regulatory environment, would be its first market.

Waymo had been running a proto-ridehail service in the city for close to two years by that point. In April 2017, it had selected a few hundred people to participate. Using an app on their phones, they could call a Waymo and use it to get around town. They would sit in the second or third row of the white Chrysler Pacifica minivans, more easily identified by the black, gumdrop-shaped sensor on the roof containing Waymo’s proprietary Lidar than by the green-and-blue “W” logo on the side. Unlike the adventurous Firefly pod car, these vehicles hadn’t been deprived of their steering wheels or pedals. And good thing, because behind the wheel sat a Waymo safety operator, there to answer any questions and ensure that the vehicle stayed out of trouble.

This was where the team that had started life in 2009 as Google’s Project Chauffeur explored the logistical realities of operating about a hundred cars. Waymo set up a sixty-eight-thousand-square-foot depot and struck a deal with rental car company Avis to help maintain and clean its fleet. It huddled with the local fire and police departments, participating in a test to prove its tech could detect all sorts of sirens and pull over safely. It set up a call center where agents could handle rider questions, and, more importantly, help the cars if necessary. This would prove to be a common yet little-discussed feature of any robo-ridehail service, premised on the fact that no self-driving vehicle could ever be infallible. If a car without a driver inside got into a situation it couldn’t handle—an unexpected construction zone maybe, or a cop directing traffic the wrong way down a one-way street—it would slow to a stop and send out a digital request for aid. From Waymo’s remote centers, a worker would inspect the scene using the car’s cameras, determine what to do, and issue the car instructions, along the lines of Cross the double yellow, proceed ten meters, and return to the right side of the road. The car, still driving itself and using its sensors to watch for trouble, would execute the maneuver before returning to its normal operating procedure.

Krafcik had promised a commercial service launch sometime in 2018, and so on December 5 of that year, Waymo announced “Waymo One,” the real-deal version of its prototype service. Riders would no longer be bound by nondisclosure agreements that stopped them from discussing their experiences, and they would pay for rides at prices comparable to what Uber and Lyft charged for their human-driven services. They could call a car at any time, and go anywhere within a roughly one-hundred-square-mile area that included Chandler and the neighboring cities of Mesa, Gilbert, and Tempe. The minivans had room for three adults and one kid (the child seat wasn’t to be removed). Users could monitor their ride using one of the screens fixed to the backs of the front-row headrests. The display offered a dark-toned simulacrum of the road, depicting the Waymo in the middle and highlighting other vehicles, cyclists, pedestrians, and traffic signals the car detected. If that wasn’t enough to assuage a nervous rider, it offered options to have the car pull over, or to dial up a customer service rep trained to answer questions. Otherwise, riders could relax and do whatever they liked, checking the car’s route and ETA on the display.

Not everyone was enthusiastic about Waymo’s presence. One person slashed a car’s tires. One threatened a safety operator with a PVC pipe, another waved a revolver. On multiple occasions, a man driving a Jeep Wrangler tried to force a Waymo minivan off the road. His wife told the New York Times that she had done the same, explaining that one of the robots had nearly hit their ten-year-old son. Other, less confrontational Chandler residents simply found the Waymo cars annoyingly slow and cautious. Not that their grievances carried any weight: State and local authorities fully supported Waymo’s operation in the area, saying the tech company brought jobs to the region.

The service that Waymo’s press team pitched as an epochal change, though, came with two important caveats. First, it wasn’t open to the general public. Only those riders preselected for the pilot could call up a robot, at least to start. The second—the one that really mattered—was that while the cars would drive themselves, they were not driverless. Waymo would continue to pay its safety operators to sit behind the wheel. The computer its engineers had spent a decade developing just wasn’t ready to go without its human backup.


Yes, it had been almost exactly ten years since January 2009, when Sebastian Thrun, Chris Urmson, Anthony Levandowski, and their teammates embarked on a voyage to realize the vision of DARPA’s Challenges. These engineers had put Waymo on a path to drive 10 million miles on public roads and billions more in computer simulations. They were the first members of a team that grew to include hundreds of people honing algorithms and building maps and tinkering with hardware, all to root out and answer every last what if. Of the original team, few had stuck around for the era of Waymo and John Krafcik. Most moved on to their own robo-ventures, each applying his knowledge to craft his own take on the effort, just as they had brought their own approaches to the Challenges. But none were surprised to see Waymo continue to rely on its humans. If they had realized anything in the decade since they’d first joined forces, it was that they were pursuing a problem that put a cruel spin on Sisyphus: The higher they pushed their boulder, the steeper the climb became, the more opaque the clouds that blocked their view of the summit.

Waymo called its car the “world’s most experienced driver.” The snappy marketing line ignored the difference between experience and wisdom. For all those miles, Waymo—and its now myriad competitors—still struggled to disprove Moravec’s Paradox. With a few years of experience, any human driver could handle a car capably in just about any conceivable situation. Making a robot do the same, reliably enough to underpin a real-life business, was almost impossibly harder. More remarkable than the stubbornness of the problem, though, was the stubbornness of those who had decided they would crack it—no matter how long it took or how much it cost.

The underwhelming launch of Waymo One was not a signal of failure. It wasn’t the equivalent of Anthony Levandowski’s motorcycle falling over, or Chris Urmson’s Sandstorm burning up its tires on Daggett Ridge, or Sebastian Thrun’s Junior losing out to the faster, more aggressive Boss. This race had no deadline, no first place. It was not the Grand Challenge but a grand challenge. And even if no one knew exactly where the finish line lay, or what reward waited on the other side, nothing would stop them from driving toward it.