3 The Ultimate Mobility Device

In the year 2014, Google fired a shot heard all the way to Detroit. Google’s newest driverless car prototype had no steering wheel and no brakes. The message was clear: cars of the future will be born fully autonomous, with no human driver needed or desired.

Even more jarring, rather than retrofit a Prius or a Lexus as it did for its previous two generations of driverless cars, Google custom-built the body of its youngest driverless car with a team of subcontracted automotive suppliers. Best of all, the car emerged from the womb already an expert driver, with roughly 700,000 miles of experience culled from the brains of previous prototypes. Now that Google’s self-driving cars have had a few more years of practice, the fleet’s collective drive time exceeds 1.3 million miles, the equivalent of a human logging 15,000 miles a year behind the wheel for 90 years.1
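That equivalence is simple arithmetic:

\[ 15{,}000 \ \tfrac{\text{miles}}{\text{year}} \times 90 \ \text{years} = 1{,}350{,}000 \ \text{miles} \approx 1.3 \ \text{million miles}. \]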

Car manufacturing is a changing playing field. For decades, the automotive industry has operated inside protective walls, sheltered from external competition by high barriers to entry and exclusive relationships with preferred suppliers. As the appeal and value of a new car become increasingly dictated by its software, traditional automotive companies will, for the first time, face competition from companies outside the industry. Google (or its parent company, Alphabet, Inc.) is the front-runner, but Apple is rumored to be hiring automotive engineers and software developers to build its own autonomous vehicle. Adding to the speculation, in a recent speech at a tech conference, Apple vice president Jeff Williams cryptically described cars as “the ultimate mobile device.”2

In response, car companies are pouring billions of dollars into software development, and the epicenter of automotive innovation has moved from Detroit to Silicon Valley. At the time this book was written, Mercedes-Benz’s Silicon Valley division employed nearly 300 people working on advanced engineering projects and user experience design. Volkswagen had 140 engineers, social scientists, and product designers integrating Google Earth maps into Audi’s navigation system and developing new infotainment systems.3 Toyota announced that it would invest $1 billion over the next several years in artificial-intelligence research, with a laboratory near Stanford and another in Massachusetts, near MIT.

Four trends are forcing car companies to rethink their business models: electric cars, ubiquitous wireless connectivity, car-sharing, and autonomous vehicles. As driverless-car technology matures, these four trends will be folded into one: autonomy. To survive, car companies will have to reenvision their product as an autonomous transportation robot, a shift that will demand major changes in their workforce and product development process. Car companies have two options: they can scramble to develop their own in-house software expertise, or they can form a go-to-market partnership with a company that will provide the car’s operating system while the car company builds the car’s body.

Cars and code

But wait! Modern cars are already automated. Even a budget sedan is loaded with sensing capabilities that a decade ago would have been found only on a custom-built fighter jet. Today, a typical car has as many as 100 microprocessors that manage the brakes, cruise control, and transmission.4 Other software modules warn drivers of the presence of a pedestrian, or that the car is weaving out of its lane. In fact, a new car contains a respectable 5–10 million lines of code.5

While the average car may be loaded with software, the problem is that it’s the wrong kind of code. Automotive software systems are modular, meaning they act as mostly independent silos with only a few exchanges, or handshakes, between them. A more formidable barrier, however, is the fact that today’s cars lack the right kind of artificial intelligence.

As long as a human driver is in charge, the software on board a modern car works very well. Take the human out of the equation, however, and the car is nearly paralyzed. Today’s human-dependent cars lack a robust robotic operating system with the artificial intelligence needed to respond capably to novel situations and to “learn” from previous experience.

To drive as well as or better than a human, a driverless car’s software must be intelligent enough to know where it is located, understand what’s around it, anticipate what’s going to happen next, and plan how to respond. But that’s not all. In addition to making good decisions about where to turn, when to wait, when to hit the brakes, or when to change lanes, an autonomous car’s operating system must also oversee the car’s low-level physical activities, such as telling the car’s synthetic “muscles” (called actuators) to press the brakes, or to turn the steering wheel just a tiny bit to the right.
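To make that division of labor concrete, the sketch below shows, in minimal Python, the sense-plan-act loop such an operating system would run many times per second. It is purely illustrative: the class and method names (Sensors, Perception, Planner, Actuators, and so on) are hypothetical placeholders rather than components of any real vehicle stack, and a production system is vastly more complex.

```python
# Purely illustrative sketch of the sense-plan-act loop described above.
# All names (sensors, perception, planner, actuators) are hypothetical
# placeholders; a production autonomous-driving stack is vastly more complex.

import time


class DrivingLoop:
    def __init__(self, sensors, perception, planner, actuators):
        self.sensors = sensors        # cameras, radar, lidar, GPS ...
        self.perception = perception  # "understand what's around it"
        self.planner = planner        # "anticipate and plan how to respond"
        self.actuators = actuators    # synthetic "muscles": steering, brakes

    def step(self):
        raw = self.sensors.read()                # where is the car, what is nearby?
        world = self.perception.interpret(raw)   # pedestrians, lanes, signs
        plan = self.planner.decide(world)        # wait, brake, change lanes ...
        self.actuators.apply(plan)               # press the brakes, nudge the wheel

    def run(self, hz=50):
        # High-level decisions and low-level actuation repeat many times a second.
        while True:
            self.step()
            time.sleep(1.0 / hz)
```

The point of the sketch is simply that high-level decisions (wait, brake, change lanes) and low-level actuation (nudging the steering a tiny bit to the right) live inside the same tight loop.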

The big unanswered question that will determine the future of car companies is which industry—automotive or software—will be the first to develop an intelligent operating system, complete with the crown jewel of artificial-intelligence research: consistently accurate artificial perception. One fear that keeps car company executives up at night is that vehicle “hardware”—the car’s metal frame (chassis), engine, and interior—will follow the path already trod by computer hardware and become a secondary feature to the car’s software. If a vehicle’s software becomes its most distinguishing marketable feature in the eyes of consumers, car companies will lose control of the automotive market.

To learn more, we ventured out to Burlingame, California, to attend the Automated Vehicles Symposium, a conference put together by the Association for Unmanned Vehicle Systems International (AUVSI). All the big players in the car industry were there, showcasing their latest and greatest autonomous vehicle technology. The atmosphere of the conference was reminiscent of a poker game at an exclusive men’s club: lots of cheerful bonhomie among the players cloaking the fact that no one knew the content of one another’s hands.

Our first lesson was that players in the autonomous vehicle industry keep their cards close to their chests. Of the employees we contacted at half a dozen car companies, not a single one responded to our email requests for an interview. When we reached out to Google X, the division of the company developing the driverless car, an administrative assistant, after several queries, politely pointed us to an exhibit on the history of driverless cars at the Computer History Museum in nearby Mountain View.

Our second lesson was that cars can be deadly overlords. For three days, we scurried back and forth from the conference hotel to where we were staying, a harrowing pedestrian experience involving crossing several ten-lane highways that slice apart the parched landscape outside Silicon Valley. We arrived each morning at the conference hall, sweaty and shaken, but grateful to have survived the gauntlet of restless or distracted human drivers, each wielding his own potentially lethal two-ton weapon.

Our third lesson was that car companies and automotive suppliers are just getting their feet wet when it comes to creating robotic operating systems. Speaker after speaker shared beautifully designed and compelling presentations about their company’s research into autonomous parking software or machine-vision technology capable of … identifying street signs. Despite a healthy dose of hype about driverless cars, the so-called “autonomous vehicle” projects presented by the big car companies were essentially driver-assist systems on steroids.

Ironically, car companies are experts in robotics, but of another sort. The automotive industry is the leading employer of robots, whose arms tirelessly help assemble, paint, and weld cars. A measure of an industry’s or nation’s reliance on robotic labor is robotic density, or the ratio of robotic workers to human workers. The robotic density of the U.S. automotive industry is higher than that of any other industry, roughly 1,100 robots per 10,000 human employees.6 Automotive companies keep hiring more robots, and the robotic density of the global automotive industry continues to grow at a rapid clip of roughly 27 percent a year.

While the car-making process might be heavily robotized, driverless cars would not recognize their brawny, simpleminded cousins on the assembly lines as fellow robots. Hard-working manufacturing, assembly line, and warehouse robots are neither mobile nor autonomous. They’re bolted into place, and carefully programmed by human technicians to do a specific task in a highly structured setting. If their environment throws them a curve ball, they can’t deviate from the script, nor are they capable of learning from previous experience.

The shake-up

If car companies had the power to define the transition to driverless cars, they’d likely favor a very gradual process. Their preferred stage 1 would involve refining driver-assist technologies. Stage 2 would involve equipping a few high-end models with limited autonomous capability in specific situations, most likely on highways. In stage 3, that limited autonomous capability would trickle down to cheaper models.

Consulting firm Deloitte describes such a gradual approach as one that’s incremental, “in which automakers invest in new technologies—e.g., antilock brakes, electronic stability control, backup cameras, and telematics—across higher-end vehicle lines and then move down market as scale economics take hold.”7 Such a cautious approach, although appealing to an industry incumbent, may actually be unwise. For car companies, inching closer toward autonomy by gradually adding computer-guided safety technologies to help human drivers steer, brake, and accelerate could prove to be an unsafe strategy in the long run, both in terms of human life and for car industry bottom lines.

One reason car companies favor an incremental approach is that it prolongs their control over the automotive industry. Driverless cars need an intelligent on-board operating system that can perceive the car’s surroundings, make sense of the data that’s flowing in, and then act appropriately. Software capable of artificial intelligence—especially artificial perception—requires skilled personnel and a certain depth of intellectual capital to create. Car companies, while extraordinarily adept at creating complex mechanical systems, lack the staff, culture, and operational experience to effectively delve into the thorny thickets of artificial-intelligence research.

Driverless cars introduce uncertainty into the automotive industry. For the past century, selling cars directly to consumers has been a good business. However, if driverless cars enable consumers to pay per ride rather than buy their own car, the business of selling generic car bodies to transportation companies that lease out driverless taxis might not be as lucrative. If car companies are someday forced to partner with a software company to build driverless cars, such a partnership could result in car companies taking home a smaller slice of the final profits.

Like a growing kitty in the middle of an all-night poker game, there’s a lot of money sitting on the table. Former University of Michigan professor and GM executive Larry Burns explains that there’s a gold mine tucked into the three trillion miles that people drive each year in the United States. He said, “If a first-mover captures a 10 percent share of the three trillion miles per year and makes 10 cents per mile, then the annual profit is $30 billion which is on par with Apple and ExxonMobil in good years.”8
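Written out, the arithmetic in Burns’s quote is:

\[ 3 \times 10^{12} \ \text{miles/year} \times 0.10 \times \$0.10/\text{mile} = \$3 \times 10^{10} \ \text{per year} = \$30 \ \text{billion per year}. \]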

The auto industry’s dogged attachment to human-led driving may be partly a strategic ploy, but there’s more to the story. Car companies bear a tremendous responsibility for consumer safety. Over the past several decades, they have been forced to become preoccupied with product safety in the face of engineering tragedies such as the Ford Pinto affair and, more recently, GM’s faulty ignition switches. Safety is an onerous responsibility that can cost a car company billions of dollars in product recalls, negative PR, and class action lawsuits.

In addition to safety, car companies bear responsibility for the health of a sizeable chunk of the world’s economy. In the United States alone, the automotive industry and its extended value chain—e.g., rental cars, oil companies, car dealers, insurance, media, and medical—adds up to a $2 trillion industry; in 2014, it totaled 11.5 percent of the U.S. gross domestic product (GDP).9 Any major misstep on the part of the big automakers can cost the entire value chain a lot of money and damage all the players’ reputations.

In 1979, the CEO of Chrysler, Lee Iacocca, made history when he asked Congress for a $1.5 billion loan to save his ailing company, at that time the tenth largest corporation in the world. In his testimony to Congress, when asked why he, a long-time advocate of a free market system, was asking for help from the government, Iacocca replied, “I do not speak alone here today. I speak for the hundreds of thousands of people whose livelihood depends on Chrysler remaining in business. It is that simple. Our one hundred forty thousand employees and their dependents, our forty-seven hundred dealers and their one hundred fifty thousand employees who sell and service our products, our nineteen thousand suppliers, and the two hundred fifty thousand people on their payrolls.”10

As the size of Chrysler’s economic footprint indicates, car companies have enjoyed a position of economic incumbency for decades. However, as insular as the automotive industry may be, making and selling safe, affordable, and reliable cars is still a tough business. In a series of interviews, a group of auto executives argued that “outsiders simply do not appreciate the sheer complexity of developing a vehicle today, the challenge of introducing new advanced technologies into a vehicle’s architecture, or the rigor and inertia of the regulatory environment.”11

The act of building even a simple car is a formidable undertaking that involves advanced logistics, technology, and manufacturing, the culmination of a century of hard-won experience. Each automotive company has global relationships with thousands of individual suppliers who provide raw materials and parts. A single car has about 30,000 components, including the big parts, such as door panels, and the little parts, such as the screws that bolt the car together.12 To assemble these parts takes about 17–18 hours per car.13

Ironically, the auto industry’s emphasis on building great hardware might eventually end up being what keeps them in business. Tech companies such as Google have zero experience in assuming responsibility for the physical safety of the general public on a large scale. Driverless cars will still require a safe, fuel-efficient automotive body that meets exacting regulations. When driverless cars become commercially viable, we predict that the new automotive industry will consist of a series of corporate marriages between software companies and car companies, each contributing what they do best. At the time of this writing, a few tentative unions have already formed between Google and Ford, Volvo and Microsoft, and GM and Lyft.

Intimate partnerships between different car companies are already a time-tested business model in the automotive industry. Car-making takes place within a multitiered network of suppliers and entanglements among carmakers. It’s not unusual for one car company to make a portion of a car that will later be sold bundled into another company’s vehicle.

We recently rented an RV for a week-long trip and, during that trip, got familiar with the RV’s dashboard. A week later, while riding in an airport shuttle, we found ourselves staring at an identical dashboard. We later looked up the RV and shuttle companies and learned that both were using Ford’s “E-Series” platform, a standard chassis and dashboard that Ford sells to downstream companies who build a custom body on top.

One day, partnerships between software and car companies will be commonplace in the automotive industry. The most likely scenario is that car companies will sell “building block” platforms to downstream technology companies who transform them into autonomous vehicles. Such an industry model could wind up being a win-win for everyone involved. What remains to be seen is how such go-to-market pairings pan out for car companies.

The situation is reminiscent of an earlier race between hardware and software providers that took place in the personal computer market. When I bought my first computer in the early 1980s, the most important factor in the decision was the computer’s hardware. I had the choice of several dozen personal computers, each with its own operating system, dedicated software, and mutually incompatible peripherals. Some of my friends chose the Commodore 64, but I chose the “BBC Micro.”

As time went on, Microsoft restructured the market for personal computer hardware. Microsoft created the DOS operating system, and later Windows, to be hardware-agnostic, meaning they would run on any IBM-compatible PC. Microsoft further enriched the value of its operating system by opening up the Windows application programming interface (API) and encouraging third-party software vendors to build applications for both desktop and server platforms.

Today, software has become the more valuable portion of the personal computer, and the hardware is a commodity. In a 180-degree turnaround from my youth, I now select my laptops first according to the operating system they run. The identity of the hardware vendor matters very little.

The exception to the “software-first” paradigm is Apple. While Microsoft fitted its operating system onto hardware from many different OEMs, Apple maintained tight control over both its hardware platform and its operating system. Some people believe that Apple’s end-to-end ownership of its products is what enabled the bold leaps in product design that produced the iPod, iPhone, and iPad.

Today, car companies and Google are like gigantic tankers on a collision course, both slowly cruising toward the same prize: wringing the most profit from the next generation of automated cars. Car companies favor an evolutionary approach, developing driver-assist modules to the point where they can take over the wheel for extended periods of time. In contrast, Google’s strategy aims to dive directly into full autonomy.

If Google gets there first, its fully autonomous vehicles will become commercially available in a few select environments. As these early driverless cars prove their reliability, they’ll eventually creep into mainstream use on regular streets and the revolution will begin. As today’s teenagers mature into tomorrow’s consumers, a car’s software will become its most salient feature and the car’s mechanical body will be an afterthought. The Microsoft paradigm will prevail.

In the Microsoft paradigm, car companies will make cost-effective car bodies and their go-to-market partner, Google or another software company, will outfit the “naked” cars with an intelligent operating system. The software company will also serve as a production hub, installing and testing the operating system and managing the hardware vendors that provide the car’s various sensors. The role of the car companies will be demoted to mere original equipment manufacturers (OEMs), nearly invisible and interchangeable.

The Microsoft paradigm would also apply if the automotive industry shifts to mostly fleet sales. Should the market for personal car ownership shrink as predicted, the intervention of transportation companies who manage fleets of driverless taxis could further diminish the role of the carmaker. If most new cars are sold to fleets, not to consumers, the software company will still remain the sales lead and production hub. Since cost will be key, the car’s hardware body will be sold to fleet owners as a generic commodity and fleet sales will result in an even smaller profit margin.

A happier outcome for car companies would be the Apple paradigm, in which car companies remain in control of the car’s product development and sales process. Not all consumers of the future will want an efficient and generic transport pod. Some consumers will still want to buy their own car, an expensive specialty model that’s designed for a specific purpose, perhaps an office on wheels, or a mini, autonomous “home away from home,” boasting a bright and recognizable logo.

Consumers who purchase these costly specialty models will be a desirable demographic: high-net-worth individuals, the same people who own a second home for vacations or prefer to charter a private jet rather than fly commercial. While a partnership with a software company will still be required, the focus of the sale will be on the quality of the entire car, both hardware and software. If the Apple paradigm prevails, automotive OEMs will remain in charge of at least a slice of the automotive industry of the future.

Human in the loop

Car companies aren’t the only ones that prefer a gradual approach. The U.S. Department of Transportation (USDOT) and the Society of Automotive Engineers have each sketched out their own maps of the road to full autonomy.14 While their stages differ slightly, what they have in common is the assumption that the best way forward is via a series of gradual and linear stages in which the car’s “driver assist” software temporarily takes over the driving, but quickly gives control of the car back to the human driver should a sticky situation occur.

We disagree with the notion that a gradual transition is the best way to proceed; for many reasons, humans and robots should not take turns at the wheel. Many experts, however, believe that the optimal model is for human and software to share control of the wheel, with the human driver as the master and the software as the servant.

MIT roboticist David Mindell points out that “the most challenging problem for a driverless car will be the transfer of control between automation and driver.”15 In the delicate dance between human and automation, Mindell believes the best artificial-intelligence software is that which remains subordinate to a human pilot or driver. The right combination of the human and the mechanical, Mindell argues, enables robot designers to sidestep a potentially fruitless struggle with the process of developing software capable of responding appropriately to the environment. In Mindell’s paradigm, the result of pairing human with software is a machine that performs better than either human or machine alone.

Software that’s based on a paradigm in which humans and machines are partners is known to engineers as human in the loop software. In many situations, pairing up a human and a computer does indeed yield excellent results. Skilled human surgeons use robotic arms to achieve superhuman precision during surgery. Today, commercial airplanes use human in the loop software, as do many industrial and military applications.

Arguments in favor of keeping humans in the loop have their appeal. It’s an enticing thought experiment to dream of meticulously wiring together the best of human ability with the best of machine ability, similar to the intoxicating optimization puzzle of hand-picking professional football players for a fantasy football team. Machines are precise, tireless, and analytical. Machines excel at detecting patterns, performing calculations, and taking measurements. In contrast, humans excel at drawing conclusions, making associations between apparently random objects or events, and learning from past experience.

Mindell is an eloquent writer and an expert, thoughtful commentator on the state of modern robotics. In theory, at least, if you combine a human with an intelligent machine, the result should be an alert, responsive, and extremely skilled driver. After all, the advantage of human in the loop approaches to automation is that it’s possible to harvest the strengths of what humans and machines do best.

In reality, human in the loop software could work in the case of a driverless car only if each party (human and software) maintained a clear and consistent set of responsibilities. Unfortunately, that is not the model being proposed by the automotive industry and federal transportation officials. Instead, their proposed approach keeps the human in the loop, but with unclear and shifting responsibilities.

At the core of this strategy of gradual transition is the assumption that should something unexpected occur, a beep or vibration will signal the human driver that she needs to hastily climb back into the driver’s seat to deal with the situation. A gradual and linear path toward full automation may sound sensible and safe. In practice, however, a staged transition from partial to full autonomous driving would be unsafe.
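To see why that assumption is fragile, here is a minimal sketch, in Python, of the handoff logic a staged transition implies. Everything in it is a hypothetical illustration; the method names, the ten-second budget, and the fallback maneuver are assumptions, not any manufacturer’s or regulator’s actual design.

```python
# Illustrative sketch only: the handoff logic implied by a staged transition.
# The method names, the ten-second budget, and the fallback maneuver are
# assumptions for illustration, not any manufacturer's or regulator's design.

import time


def request_handoff(alert, driver, fallback, timeout_s=10.0):
    """Signal the human to retake the wheel; fall back if they never respond."""
    alert.beep_and_vibrate()                     # the "beep or vibration"
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if driver.hands_on_wheel() and driver.eyes_on_road():
            return "human_in_control"            # successful handoff
        time.sleep(0.1)                          # keep checking
    # The open question: what should the car do here, mid-traffic, if a
    # distracted or sleeping human never takes over?
    fallback.execute()                           # e.g., slow down and pull over
    return "fallback_maneuver"
```

Everything hinges on the branch inside the loop: if the human never re-engages, the software must improvise a way out of a situation it has already judged too hard to handle on its own.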

Machines and humans can work together well in some situations, but driving is not one of them. Driving is not an activity suitable for a human in the loop approach for one major reason: driving is tedious. When an activity is tedious, humans are all too happy to let machines take over, so they eagerly cede responsibility.

When I was in officer training in the navy, I learned that one of the core tenets of good management was to never divide a mission-critical task between two people, a classic management blunder known as split responsibility. The problem with split responsibility is that, ultimately, both people involved in completing the task may feel it’s safe to drop the ball, assuming the other person will pick up the slack. If neither party dives in to the rescue, the result is mission failure. If humans and machines are given split responsibility for driving, the results could be disastrous.

Even as he champions machine and human partnerships, Mindell himself describes a harrowing example of split responsibility between man and machine in the plight of Air France Flight 447, which, in 2009, plunged into the Atlantic Ocean, tragically killing all 228 people on board. Later analysis of the plane’s black box revealed that the cause of the crash was not terrorism or a mechanical malfunction. What went wrong was the handoff from automated flight mode to the team of human pilots.

While in flight, the plane’s airspeed sensors iced over and its autopilot unexpectedly disengaged. The team of human pilots, befuddled and out of practice, was suddenly called to the controls on what they had expected to be a routine flight. Thrust into an unexpected position of responsibility, the pilots made a series of disastrous errors that caused the plane to nosedive into the sea.

Mindell calls the case of Flight 447 an example of a “failed handoff.” Google calls the problem of humans relying too heavily on autopilot software simply “silly behavior.” In its monthly report from October 2015, the company described how humans had behaved a few years earlier in an early version of its driverless car.

In the fall of 2012, several Google employees were allowed to use one of the autonomous Lexuses for the freeway portion of their commute to work. The idea was that the human driver would guide the Lexus to the freeway, merge, and, once settled into a single lane, turn on the self-driving feature. Every employee was warned that this was early-stage technology and that they should pay attention to the driving 100 percent of the time. Each car was equipped with an interior video camera that filmed the passenger and the car for the entire journey.

Employee response to the self-driving car was overwhelmingly positive. All described the benefits of not having to tussle with freeway rush-hour traffic, of arriving home refreshed to spend quality time with their families. Problems arose, however, when the engineering team watched the videos from these drives home. One employee turned completely away from the driver’s seat to search his back seat for a cell-phone charger. Other people took their attention away from the wheel and simply relaxed, relieved to have a few peaceful moments of free time.

The Google report described the situation of split responsibility, or what engineers call automation bias. “We saw human nature at work: people trust technology very quickly once they see it works. As a result, it’s difficult for them to dip in and out of the task of driving when they are encouraged to switch off and relax.”16

Google’s conviction that there’s no middle ground—that humans and machines should not share the wheel—sounds risky, but is actually the most prudent path forward when it comes to consumer safety. Automation can impair a driver in two ways: first, by inviting him to engage in secondary tasks, activities such as reading or watching a video that directly distract him from watching the road; second, by disrupting his situational awareness, his ability to perceive critical factors in the driving environment and to react rapidly and appropriately. Put the two together—a distracted driver who has no idea what’s happening outside the car—and it’s clear why splitting the responsibility for driving is such a dangerous idea.

Research at Virginia Tech, sponsored by GM and the U.S. Department of Transportation’s Federal Highway Administration, put some numbers around the temptation humans face when a capable technology offers to unload a tedious task. Virginia Tech researchers evaluated twelve human drivers on a test track. Each test vehicle was equipped with two forms of driver-assist software: one that managed lane centering, and another, called adaptive cruise control, that handled the car’s speed and braking. The goal of the study was to measure how humans reacted when presented with driving technologies that took over the car’s lane keeping, maintained its speed, and handled its braking. To measure the human drivers’ activities during the study, each vehicle was equipped with data collection and recording devices.

Researchers recruited twelve individuals, twenty-five to thirty-four years old, from the general population of Detroit, Michigan, and offered them $80 for their participation. Recruited drivers were asked to pretend they were taking a long trip and were not only encouraged to bring their cell phones with them on their test drive, but also provided with ready access to reading material, food, drinks, and entertainment media. As participants showed up for the study, researchers explained to them that someone from the research team would be joining them inside the vehicle. Each driver was told that their fellow passenger had a homework assignment he needed to complete during the trip, so he would be watching a DVD on his laptop for most of the drive.

The twelve human subjects were placed into common freeway driving scenarios on the test track and their responses and activities were measured and recorded. The researchers’ goal was twofold: first, to gauge the temptation to engage in secondary tasks such as eating, reading, or watching a video; second, to measure the degree to which driver attention would wander if software handled most of the driving. In other words, the researchers were testing whether automated driving technologies would encourage humans to mentally tune out, behave inappropriately behind the wheel, or lose their situational awareness, their ability to perceive critical factors in the driving environment.

It turned out that most human drivers, when presented with technology that will drive for them, eagerly indulge in all three bad driving behaviors. The researchers’ “fake homework” strategy, combined with the competence of the adaptive cruise control and lane-centering software, lulled the participants into feeling secure enough to stop paying attention behind the wheel. Over roughly three hours of test driving, during which different automated driving technologies were used, most drivers engaged in some form of secondary task, most frequently eating, reaching for an item in the back seat, talking on the cell phone, texting, and sending emails.

The lane-keeping software especially invited the human drivers to engage in secondary activities. When the lane-keeping software was switched on, a whopping 58 percent of the drivers watched a DVD for some time during the trip. Twenty-five percent of the drivers enjoyed the free time to get some reading done, increasing their risk of a car crash by 3.4 times.17

The human drivers’ visual attention was not much better. Once again, when the lane-centering software took the wheel, driver attention wandered. Overall, drivers were estimated to be looking away from the road about 33 percent of the time during the three-hour trip. More worrying, the drivers engaged in long off-road glances, lasting more than two seconds, an estimated 3,325 times over the course of the study. The good news, however, was that these long off-road glances occurred only 8 percent of the time.

Clearly, this particular study is just a starting point. Twelve people is a fairly small sample, and more research on driver inattention is needed. One interesting finding was that although most drivers were eager to read, eat, watch movies, or send email while at the wheel, some were able to resist the temptation to tune out; for reasons that deserve additional research, not all human drivers were so quick to give up their responsibilities at the wheel. As the researchers concluded, “this study found large individual differences in regard to the nature and frequency of secondary task interactions suggesting that the impact of an autonomous system is not likely to be uniformly applied across all drivers.”

There is clearly a tipping point at which autonomous driving technologies will create more danger for human drivers, not less. Imagine if the twelve human drivers in Virginia Tech’s research project were given a seat in a fully autonomous vehicle for a three-hour drive. It is highly likely that the intensity of their secondary activities would increase to the point where the human driver would fall asleep or become deeply absorbed in sending email. Full autonomy would make it nearly impossible for a deeply distracted or sleepy human driver to take over the wheel effectively if control were abruptly handed over in a challenging situation.

In another study, at the University of Pennsylvania, researchers sat down with thirty teens for a frank discussion of teen drivers’ cell-phone use behind the wheel.18 Two central points emerged. First, while teens said they understood the dangers of texting while driving, they still did it. Even teens who initially claimed they did not use their cell phones while driving reluctantly revealed, when pressed, that they would wait until they were at a red light to send a text. Second, teens used their own classification system to define what constituted “texting while driving” and what didn’t. For example, they said that checking Twitter while driving did not constitute texting; nor did taking a passenger’s picture.

Wandering human attention is one risk. Another risk of having humans and software share the wheel lies in the fact that if not used regularly, human skills will degrade. Like the pilots of Flight 447, human drivers, if offered the chance to relax behind the wheel, will take it. If a human hasn’t driven in weeks, months, or years, and then is suddenly asked to take the wheel in an emergency situation, not only will the human not know what’s going on outside the car, but her driving skills may have gotten rusty as well.

The temptation to engage in secondary tasks and the so-called handoff problem of split responsibility between human and machine are such significant dangers in human-machine interactions that Google has opted to skip the notion of a gradual transition to autonomy. Google’s October 2015 monthly activity report for its driverless-car project concludes with a bombshell: based on early experiments with partial autonomy, the company’s strategy going forward will focus on achieving only full automation. The report states, “In the end, our tests led us to our decision to develop vehicles that could drive themselves from point A to B, with no human intervention. … Everyone thinks getting a car to drive itself is hard. It is. But we suspect it’s probably just as hard to get people to pay attention when they’re bored or tired and the technology is saying ‘don’t worry, I’ve got this ... for now.’”19

At the time this book was written, Google’s driverless cars had been involved in a total of seventeen minor fender benders and one low-speed collision with a bus. In the seventeen fender benders, the culprit was not the driverless car but the other human drivers. On February 14, 2016, however, Google’s car had its first significant accident when it “made contact” with the side of a city bus. Unlike the previous seventeen minor collisions, this accident was the fault of the car’s software, which erroneously predicted that if the car rolled forward, the bus would stop.20

With the exception of the run-in with the bus, the rest of Google’s accidents have happened because, ironically, Google’s cars drive too well. A well-programmed autonomous vehicle follows driving rules to the letter, confusing human drivers who tend to be less meticulous behind the wheel and not always so law-abiding. The typical accident scenario involves one of Google’s obedient driverless cars trying to merge onto a highway or turn right on a red light at a busy intersection. Impatient human drivers, not anticipating the car’s precise adherence to speed limits and lane-keeping laws, accidentally run into the driverless car.

So far, fortunately, none of Google’s accidents have resulted in any injuries. In the near term, the best way to avoid collisions will be to teach driverless cars to drive a bit more like people do: carelessly and illegally. In the longer term, the best way to solve the problem of human drivers will be to replace them with patient software that never stops paying attention to the road.

As car and tech companies gather at the table to play their high-stakes, global game of automotive poker, it remains to be seen who will have the winning hand. If federal officials pass laws that mandate a “human in the loop” approach, the winner will be car companies, who will retain control over the automotive industry. On the other hand, if eventually the law permits, or—for safety reasons—even requires full autonomy for driverless cars, then software companies will take the lead.

Google retains some major advantages as the undisputed industry leader in digital maps and deep-learning software. From the perspective of business strategy, Google’s lack of a toehold in the automotive industry could actually be one of its key strengths. Analyst Kevin Root writes that “Unlike OEMs, they [Google] are not encumbered by … lost revenue from bypassing the new feature trickle down approach, they are developing for the end state of fully autonomous driverless cars and appear to have a sizeable lead.”21 Add to that Google’s eagerness to create a new revenue stream that’s not reliant on selling internet ads, currently its primary source of revenue.

One thing is clear. Regardless of how the transition to driverless cars unfolds, the automotive industry will be forced to develop new core competencies. In order to remain a player in the new industry of selling driverless cars, car companies will have to master the difficult art of building artificial-intelligence software, a challenge that has eluded the world’s best roboticists for decades.

Notes