In the not-so-distant future, automotive museums will host exhibits of highly polished cars from the early twenty-first century. Like history buffs ducking through the doorway of a carefully preserved medieval hovel on a historic estate, museumgoers will squeeze themselves into the car’s front seat. People will sit behind the steering wheel, poke at the screen of the built-in GPS, and playfully pump the brake with their foot, marveling that, not so long ago, everyone used such an inconvenient and dangerous mode of transportation.
Today’s cars are brainless. The standard automotive “platform” of the modern car (four wheels, a metal body, and a gasoline-powered engine) has not changed significantly since its introduction nearly 100 years ago. Meanwhile, the rest of the world has been shaken to its foundations by increasingly intelligent software, nearly ubiquitous communication networks, and robust and accurate hardware sensors that shrink in size and price each year. Thanks to recent advances in robotics technology and artificial-intelligence software, the era of unintelligent cars is finally coming to an end. The everyday car is about to evolve into a self-guided, mobile robot.
For nearly a century, human-driven cars have shaped our lives. The introduction of horseless carriages rearranged the layout of “walking cities,” hives of small meandering alleyways and homes intermingled with shops and public squares, into “driving cities,” tidy grids of wide streets and parking lots. Cars gave people freedom, as well as access to new work and social opportunities. Cars gave businesses the ability to rapidly deliver their products to entirely new markets.
Precious personal mobility, however, has come at a high price. Over the course of nearly one hundred years, traffic accidents have stolen millions of lives. While cars give people the freedom to drive long distances to work, their use has also catalyzed the development of a new evil: traffic-choked cities and suburbs. Today, as people in cities all over the world embark on their daily, solo commute to work and freight trucks haul goods to their final destination, urban air has degenerated into an oily cloak of yellowish smog.
Roughly one billion human-driven cars roam the Earth. Our reliance on cars is costly on many levels. Yet, for most of the world’s population, car travel remains the speediest, cheapest, and most comfortable form of personal mobility available. For better or worse, cars will remain an integral part of modern life.
The best way to solve the problems caused by cars is to make them smarter. When human drivers let intelligent software take the wheel, driverless cars will offer billions of people all over the world a safer, cleaner, and even more convenient mode of transportation. In the next decade, self-driving cars will hit the streets, once again rearranging where we live and how we work and play.
Skeptical? We don’t blame you. For nearly a century now, various experts have predicted the demise of human beings at the hands of intelligent machines.
So far, these predictions have come true only in highly specific industrial jobs, or in activities confined to the virtual world. For example, mechanical robotic arms flawlessly do work once performed by human factory workers. In the virtual realm, artificial-intelligence software has surpassed our human ability to play board games, make rapid stock market trades, or plan optimal routes for complex mass transit systems.
It’s true that modern software boasts a lot of intellectual horsepower and advanced robots are capable of performing skilled work. Yet, even the most advanced artificial-intelligence software programs falter when they’re tasked with managing the movements of a robotic body that’s not bolted down, but equipped with mechanical limbs so it can move about and interact with its environment. For several reasons we will explore in later chapters, mobile robots today are about as physically nimble and perceptive as a cockroach or, on a good day, a toad.
While roboticists continue to struggle to build capable mobile robots, building a reliable driverless car is a feat of engineering that’s technologically well within reach. When it comes to writing code to manage physical movement, cars enjoy a major advantage over other forms of mobile robots: it’s easier to roll than to walk or climb.
Software that guides the movements of a multilimbed artificial creature rapidly balloons in volume and complexity because multiple limbs can assume a nearly infinite number of different types of movements and positions. In contrast, a car’s four wheels, brakes, and steering wheel behave in a fairly predictable manner. The software that guides the physical movements of a driverless car must manage a relatively small set of actions, such as turning the car’s wheels in the right direction and overseeing the car’s stopping and acceleration.
The second reason the task of driving lends itself to automation is that guiding a car is a relatively repetitive and reactive activity. Humans of all levels of intelligence are able to obtain a driver’s license. A driverless car needs only to be aware enough to respond immediately to clear physical hazards such as an upcoming pothole or a slow-moving herd of schoolchildren, to plan a route along clearly delineated roads and highways, and to obey relatively clear traffic rules.
At this point, a skeptic would (correctly) point out that there must be more to the story. If building a driverless car were as simple as programming a four-wheeled robot to follow the rules of the road, driverless cars would have become commonplace decades ago. There are two reasons why cars are only now becoming capable of guiding themselves. The first is practical: the bar is high. Cars are two-ton metal boxes that run on public streets. If something were to go wrong with the software that guides a driverless car, the results would be gruesome and tragic.
The high cost of human life explains why the first autonomous vehicles are in use today in places where human casualties would be minimal if a vehicle were to malfunction and veer unexpectedly off course. In remote Northern Australia, for example, mining companies use gigantic autonomous trucks to haul gravel. In North America, farmers use self-driving tractors to seed, plow, and spray vast, unpopulated agricultural fields. In distribution centers and factories, specialized automated vehicles move boxes from one side of the room to the other. At resorts and airports, driverless shuttles carry passengers back and forth on set tracks at speeds of fifteen miles per hour.
The second and more formidable challenge that has delayed the development of driverless cars is purely technological. Although 99 percent of the time driving is mind-numbingly dull and predictable, 1 percent of the time it suddenly becomes exciting. Biological life forms rely on so-called “simple” instincts to deal with the unexpected events that life throws at us. The instincts that enable humans to drive through rush-hour traffic actually cloak a formidable amount of intelligence.
Roboticists have given a name to the unexpected rare events that take place 1 percent of the time: corner cases. Corner cases are unusual situations that are difficult to anticipate but can have potentially catastrophic results. How effectively a robot’s artificial “instincts” handle corner cases ultimately determines that particular robot’s reliability, and hence its usefulness. If a robot’s software is unable to handle each and every one of the corner cases it encounters, the best-case scenario would be that the robot is unreliable; the worst-case scenario would be that the robot would fail miserably at its assigned mission and wreak havoc.
Driving might be mostly repetitive, but it’s fraught with an endless supply of potentially deadly corner cases. The software that guides a driverless car must be able to instinctively react to a deer that leaps onto the hood of a car, or know how to handle the aggressive panhandlers who swarm cars with spray bottles, hoping that passengers will pay them for cleaning the windshield. Despite decades of effort, automotive engineers and roboticists have failed to write software capable of handling the infinite variety of situations a driverless car will encounter on the road.
One of the basic rules of robotics is that the simpler and more predictable the environment (i.e., the lower the number of corner cases), the easier it is to build software to enable a robot to navigate that environment. Robots thrive in industry because the typical factory is a closed world, a highly structured environment in which corner cases have been anticipated and carefully removed by industrial engineers. In a closed-world environment, a robot’s work setting is carefully designed around a particular task. Robots in industrial settings know what’s coming. They run on software that guides the robot through a series of unchanging activities such as stamping metal parts, attaching bolts, or lugging boxes from one place to another.
While it’s possible to set up a tidy closed-world environment in a factory setting, in the real world, streets and highways are chaotic and unpredictable. Anyone behind the wheel of a car must not only deal with novel situations, but with another related challenge that’s difficult for software programs to address: interactions that are guided by rules of conduct that are either vague or highly situation-specific. In particular, artificial-intelligence software stumbles over two activities that are critical for safe driving: complex nonverbal communication and the ability to consistently recognize the same object in a wide range of settings.
Driving demands complex social communication with other drivers and with pedestrians. Human drivers routinely engage in a nonverbal “social ballet” when they’re behind the wheel, nodding, waving, and making eye contact to announce their intentions. Waving and smiling may be simple acts for humans, but it’s extraordinarily difficult to write software that can read people’s facial expressions and body language and respond appropriately.
Not only do mobile robots struggle to engage in complex nonverbal communications, their computer-guided intelligence falters when faced with an unexpected event. Both of these shortcomings are caused by a deficiency of perception, the ability to make sense of what one sees and to react accordingly. Ideally, computer scientists would solve this problem by writing a program to provide cars with consistent, accurate artificial awareness and situational understanding. The problem has been that until very recently, such software has not existed.
Since the founding of the field of artificial intelligence more than half a century ago, computer scientists and roboticists have unsuccessfully searched for ways to automate the mysterious art of perception. One aspect of perception involves cognition, a person or animal’s ability to get a “read” on a complex situation and to know how to respond appropriately. Another involves processing visual data. Living things have highly developed visual systems capable of recognizing an object even when viewing it from different angles, under different lighting, or in unusual settings.
Humans process visual data instinctively and nearly flawlessly. Yet, our capacity for making sense of what we see has resisted automation. For decades, researchers in the field of machine vision have tried and failed to create software that, like us, is capable of rapidly and accurately “understanding” the visual environment. The lack of such software has been a major technological barrier in the field of mobile robotics research.
For most of their history, robots have struggled to process visual information. Industrial robots deal with this shortcoming by toiling blindly in closed-world environments in dark, unlit factories. For robots whose work involves some kind of visual activity, their workday is set up in such a way that they are never asked to classify or inspect objects that are unfamiliar to them.
One of the barriers to the development of machine-vision software has been insufficient computing power. Since processing images is a data-intensive activity, the first machine-vision systems streamlined the process by using a structured approach that applied a set of rules to parse visual information. These early machine-vision systems worked by attempting to match perceived objects against a robot’s small template library of known objects, a slow, inaccurate, and inflexible process.
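To make that limitation concrete, here is a minimal, purely illustrative sketch of the template-matching idea, written in Python. It is our own toy version, not code from any real machine-vision system: it slides each stored template across an image and labels the image with whichever template fits best, and any object the template library doesn’t anticipate simply goes unrecognized.

```python
# A toy sketch of early template-matching vision (our illustration, not a
# production system): slide each stored template across the image and keep
# the best-scoring match.
import numpy as np

def match_template(image, template):
    """Return the lowest sum-of-squared-differences score over all positions."""
    ih, iw = image.shape
    th, tw = template.shape
    best = np.inf
    for y in range(ih - th + 1):          # brute-force sliding window
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            best = min(best, np.sum((patch - template) ** 2))
    return best

def classify(image, template_library):
    """Label the image with whichever known template matches best."""
    return min(template_library,
               key=lambda name: match_template(image, template_library[name]))

# Tiny "library" of known objects and a query image containing a square.
library = {"square": np.ones((4, 4)), "bar": np.ones((2, 8))}
query = np.zeros((16, 16))
query[5:9, 5:9] = 1.0
print(classify(query, library))  # -> "square"
```

Even this toy version hints at the problems described above: the search is slow, and an object that is rotated, partially hidden, or simply absent from the template library defeats it entirely.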
One of the biggest weaknesses of early machine-vision software was that it proved unreliable when presented with a novel object or situation. As a result, any robot (or vehicle) guided by such software would be frequently unable to recognize even familiar objects if they appeared in a slightly unfamiliar setting. Since the ability to correctly identify nearby objects is key to safe driving, underperforming machine vision software has held back the development of driverless cars for decades. A recent breakthrough in artificial intelligence, however, promises to change everything.
After years of languishing on the fringes of academic artificial-intelligence research, in 2012 a new type of software called deep learning attained human-level accuracy in correctly classifying random objects in thousands of digital images. While the ability to classify random objects in images accurately may sound trivial, that ability provides a foundation for artificial perception. Once an object can be correctly identified, it can be “handed off” to other types of artificial-intelligence software that can do what software traditionally does best: calculate the optimal response using either statistical reasoning or a logical, rule-based approach.
One of the reasons deep-learning software is so effective for driverless cars is that it’s ideal for use in unstructured environments such as the open road. Deep learning belongs to a category of artificial-intelligence software called machine learning, which breaks with tradition in that its behavior is not hand-coded by a human programmer. Rather than attempting to build a model of the world and address it using formal logic and rules, machine-learning software is “trained” to do its job by being fed huge amounts of training data. To develop deep-learning software for a driverless car, for example, programmers “feed” the software program a daily feast of gigabytes of raw visual data gathered by driving around with a camera on board.
The fact that deep-learning software “learns” by looking at the world gives it another major advantage: it’s not rule-bound. In the same way a human toddler learns to recognize objects according to their distinctive, identifying features, deep-learning software classifies objects according to their visual features. A software program using a traditional rule-based approach would falter in confusion if presented with an image of a cat riding a bicycle. In contrast, deep-learning software would focus on the cat’s identifying visual features—perhaps its pointy ears and tail—and would quickly (and correctly) surmise that although the cat appears in an unusual setting, it’s still a cat.
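For readers who want to see the contrast with rule-based software in code, the following is a minimal sketch of the learning-based approach, written in Python using the PyTorch library. The tiny network, the random stand-in “images,” and the labels are placeholders of our own, not the systems described above; the point is only that the feature detectors are adjusted from labeled examples rather than written out as rules.

```python
# A minimal sketch of learning visual features from labeled examples
# (assumes PyTorch is installed; the data and labels are stand-ins).
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Early layers learn low-level features (edges, textures); later
        # layers combine them into object-level features (ears, wheels).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One "training" step on placeholder images and labels: the network adjusts
# its weights so its learned features better separate the labeled classes.
images = torch.randn(8, 3, 64, 64)     # stand-in for camera frames
labels = torch.randint(0, 10, (8,))    # stand-in for human annotations
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```

Nothing in this sketch spells out what a cat or a bicycle looks like; given enough labeled examples, the network works that out for itself, which is precisely what makes the approach robust to cats in unusual places.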
Deep learning has transformed the study of artificial perception and is being applied with great success to speech recognition and other activities that require software to deal with information that presents itself in quirky and imperfect ways. In the past few years, in search of deep-learning expertise, entire divisions of automotive companies have migrated to Silicon Valley. Deep learning is why software giants like Google and Baidu, already armed with expertise in managing huge banks of data and building intelligent software, are giving the once-invincible automotive giants a run for their money.
Deep learning has been so revolutionary to the AI community that its reverberations are still unfolding as we write this book and will likely continue to unfold in the years ahead. Cars won’t be the only technology that’s transformed by deep learning. We predict that deep learning will change the developmental trajectory of mobile robotics in general. As robots gain the ability to visually understand their environment, the development of artificial life forms will follow a path similar to that already trod by biological life forms more than 500 million years ago.
The fossil record indicates that before the Cambrian Era, all biological life forms were blind. As the Cambrian Era dawned roughly 500 million years ago, nearly blind organisms whose “eyes” consisted of primitive clusters of light-sensitive cells suddenly and mysteriously evolved complex new visual systems. Once they could see, these simple organisms also evolved complex physical body shapes capable of rapid response and locomotion. New physical capabilities, in turn, demanded the development of a bigger brain to oversee the coordination of these new limbs and fins. Armed with a visual system, a responsive body, and a bigger brain, what were once blobs of cells evolved into a diverse set of complex creatures that crawled out of the primordial ooze and adapted to thrive in particular niches on land.
One intriguing hypothesis for the Cambrian Explosion—the burst of rapid evolution that took place during the Cambrian Era—is the Light Switch theory. This theory, developed by Andrew Parker, holds that the evolution of eyes initiated an evolutionary arms race among living things, and those with the best vision became the most likely to survive.1 Perhaps the Light Switch theory will apply to robots, too.
As once-blind machines gain the gift of perception, they, too, will crawl out of their primordial confinement in the structured and unlit industrial environments where they dwell today. Robust machine vision will enable robots to make use of new physical configurations of wheels, limbs, or treads that will grant them new levels of agility. To guide their complex new mechanical “limbs,” their robotic brains will also expand. We will witness a Cambrian explosion of robotic diversity of form and function as sighted robots master new skills and find productive new niches in which to work.
A school of tropical fish is a beautiful sight to behold. Their brightly colored bodies wiggle through the water in an evenly spaced tight formation. This collective of dozens of individual fish can pivot and change direction in a split second, moving with the tight precision of a single body. If an obstacle suddenly appears in the school’s path, individual fish swarm past the obstacle and swiftly reestablish themselves in their original formation. Fish never collide with one another, nor do they ever collide with unexpected objects—branches, rocks, a coral reef—that the sea throws into their path.
In an ideal future, our streets and highways will glisten with schools of tightly packed driverless cars. Like fish, swarms of driverless cars will demonstrate extraordinary anti-collision abilities, navigating intelligently and instinctively through urban streets full of pedestrians and falling gracefully into fuel-efficient formations on long, empty stretches of highway. Some cars will carry a passenger or two. Others will be empty, on their way to drop off a pizza or to pick up a child from daycare.
How do we get from now, from today’s clumsy human-guided traffic formations, to this ideal future where driverless cars of all shapes and sizes smoothly and safely swarm the streets? In Ernest Hemingway’s 1926 novel The Sun Also Rises, the character Bill asks another character, Mike, “How did you go bankrupt?” “Two ways,” Mike responds. “Gradually and then suddenly.”
The technology will develop suddenly. By now, most people reading this book are familiar with Moore’s Law, the observation that the performance of computer chips improves at an exponential rate while their price and size drop at a correspondingly rapid rate. As Moore’s Law would predict, the technologies that enable driverless cars—sensors, abundant data, and the computing power to process it all—have become reliable, robust, and affordable.
The exact configurations vary, but most driverless cars use several digital cameras, a radar sensor, and a laser-radar (lidar) device to “see” where they’re going (for an in-depth exploration of these enabling technologies, see chapter 9). Driverless cars pair a global positioning system (GPS) device with another location device called an inertial measurement unit (IMU) that compensates for GPS inaccuracies. An on-board computer takes the information streaming in from the sensors and GPS, folds that data onto a high-definition digital map that contains information on intersections and stop lights, and processes it all together into a digital model of the world outside the car called an occupancy grid.
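As a rough illustration of what an occupancy grid is, consider the sketch below. It is our own toy version in Python: the grid size, resolution, and sensor points are invented for the example and are not drawn from any production system.

```python
# A toy occupancy grid (our illustration; dimensions and sensor points are
# invented). Each cell holds the car's current belief about whether that
# patch of the surrounding world is occupied by an obstacle.
import numpy as np

class OccupancyGrid:
    def __init__(self, size_m=100.0, resolution_m=0.5):
        self.resolution = resolution_m
        n = int(size_m / resolution_m)
        self.grid = np.zeros((n, n))   # 0 = free, values near 1 = occupied

    def update(self, obstacle_points_m):
        """Mark cells hit by sensor returns (e.g., lidar points) as occupied."""
        for x, y in obstacle_points_m:
            i, j = int(x / self.resolution), int(y / self.resolution)
            if 0 <= i < self.grid.shape[0] and 0 <= j < self.grid.shape[1]:
                self.grid[i, j] = 1.0

    def is_blocked(self, x_m, y_m):
        i, j = int(x_m / self.resolution), int(y_m / self.resolution)
        return self.grid[i, j] > 0.5

grid = OccupancyGrid()
grid.update([(12.3, 40.1), (12.8, 40.4)])   # two sensor returns from one obstacle
print(grid.is_blocked(12.3, 40.1))          # -> True
```

A real occupancy grid is far richer, fusing camera, radar, lidar, GPS, and map data and updating many times per second, but the underlying idea is the same: a continuously refreshed map of which patches of nearby space are safe to drive through.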
Driverless-car technology is nearly mature. Elon Musk, CEO of car company Tesla and an advocate of fully autonomous vehicles, sums up the situation. “It’s a much easier problem than people think it is. … But it’s not a one-guy-three-months problem. It’s more like, thousands of people for two years.”2 While the technology may be nearly ready, the society that’s wrapped around that particular technology may not be.
Several social factors will delay the deployment of driverless cars. One of the lessons that software developers reluctantly learn is the people problem. When new software programs are introduced into an organization, the biggest barrier to adoption is usually not the performance of the technology; it’s the fact that the organization’s culture and workflow are built on previous software products, and changing people’s work habits stirs up resistance. As their workflow changes, some people lose turf, others are forced to rethink how they get things done, and so on. The people problem is frequently the iceberg hidden underneath the surface that can foil an organization’s successful adoption of a new technology that would otherwise save the organization time and money and improve productivity.
In the adoption of driverless cars, one aspect of the people problem might be resistance from consumers, but we predict otherwise. Although the executives of automotive companies gamely insist that people love the experience of driving, and will continue to prefer it, we believe that consumer acceptance will not be an obstacle. Research by the accounting firm KPMG indicates that many consumers would happily ride in a driverless car, assuming the technology was mature and there were no risks to their personal safety. When researchers asked people what the likelihood was on a scale of one to ten that they would ride in a self-driving car for everyday use, the focus group respondents rated their willingness at an average level of six out of ten. If a driverless car reduced their travel time by 50 percent and could get them to their destination at a guaranteed time, respondents’ willingness increased to nearly eight.3
Research by Boston Consulting Group revealed a similar enthusiasm. A survey of more than 1,500 U.S. drivers revealed that 55 percent of respondents said they would likely, or very likely, buy a partially automated car within five years, and 44 percent said they would be very likely to buy a fully self-driving car within ten years should they become commercially available.4 The report predicted that the first autonomous vehicles would be available for purchase in the year 2025, and that by 2035, roughly 10 percent of new vehicles sold will be fully autonomous, representing a global market worth $38 billion.
The younger the consumer, on average, the greater his or her enthusiasm for driverless cars. A Harris Poll survey queried four age groups, millennials (ages 18–37), generation X (38–49), baby boomers (50–68), and matures (69+), on their attitudes toward self-driving vehicles.5 More than half of the “matures” vehemently responded, “I will never consider buying or leasing a self-driving vehicle,” compared to only 20 percent of millennials. Nearly 25 percent of surveyed millennials said they would buy a driverless car once they believed the “bugs” had been worked out and the price had dropped to a reasonable level. We suspect adoption would be even higher once such cars are proven to drive more safely than a human.
Younger drivers are just not that into driving and, unlike their grandparents, would be happy to let a robot take the wheel. In a talk we watched at an autonomous-vehicle conference in 2014, an executive of the consulting firm J.D. Power described a generational shift his firm observed when studying attitudes toward cars and driving.6 People under age thirty, a demographic some call generation Y, view time spent driving as a hindrance, forced dead time away from social media and the internet. The executive summed up the situation, saying that “the young folks, the Gen Y … tend to be less engaged with the idea that driving is something we should cherish. … Their main motivation is to get to where they’re going. … They’re looking for having that time to be useful to them in their own personal way.”
The more significant people problem that will delay the widespread adoption of driverless cars will be government oversight and regulation, in particular, state and federal traffic regulations, liability laws, and insurance coverage. Thus far, the most significant force behind the development of driverless cars has come from industry. Federal oversight of autonomous driving has gotten off to a slow start. In 2016, however, the U.S. Department of Transportation began to show signs it might be warming up to the potential benefits of driverless cars by announcing plans to provide guidance to state Departments of Motor Vehicles on driverless-car regulation. At the time of this writing, four U.S. states—California, Nevada, Florida, and Michigan—offer an official autonomous-car license, and several other states are considering one.
A driverless license is a good start, but significantly more research and exploration of regulatory oversight is needed. Ideally, the highest levels of government should adopt a proactive, rather than reactive, approach. For example, legal experts need to examine and possibly revamp liability laws to clarify who, exactly, is at fault in a driverless accident. Car insurance will need to be similarly restructured. Legislators will need to decide how safe is safe enough for a car to drive without a human at all, and how that safety should be tested. Although these challenges of governance are entirely solvable, as long as they remain unaddressed, human-driven cars will continue to reap their grisly harvest in the form of lives lost, time wasted, and polluting fuel burned.
As driverless-car technology matures and the people problem rears its ugly head, the cost of delay can be directly tallied in the number of human lives lost. According to the World Health Organization, car accidents are the leading cause of death worldwide among people aged 15 to 29, and among the leading causes of death for all age groups. Most of these traffic tragedies are caused not by vehicular malfunction, but by preventable human error, or the “4 Ds”: drunk, drugged, drowsy, or distracted driving.
As long as humans are still behind the wheel, the number of deaths from car accidents is likely to grow. People in emerging economies have eagerly embraced car ownership; in developing nations such as China, India, Brazil, and Russia, as more cars crowd the roads, the number of people injured and killed will also increase. Another hazard is that distracted driving is on the rise: in the year 2013, 424,000 people in the United States alone were injured in car crashes caused by a distracted driver, an almost 10 percent increase from the year 2011.7
One peculiar irony of the automobile is that while cars have killed millions of people since their invention, our society has a blind spot—perhaps grim acceptance—of the resulting death toll. Each year, cars kill about 1.2 million people around the world.8 To put things in perspective, an annual 1.2 million fatality rate is equivalent to having ten Hiroshima-scale atomic bombs go off each and every year.
Cars are nearly as deadly as war, violence, and drugs combined. Homicide, suicide, and war are responsible for the death of an estimated 1.6 million people each year.9 An estimated 183,000 people die annually from drug-related causes.10 Yet, despite the high global death toll from preventable car accidents, there’s no federally funded “war on cars” or calls to ban people from driving. In the aftermath of yet another massive freeway pile-up that sends dozens of people to the hospital, there’s rarely lasting public outrage against car companies.
What if there were a solution to reduce the number of people who die each year in a car crash? If such a solution existed, federal, state, and city governments would band together to put it into widespread use. Reducing the death toll from car accidents would become a top governmental priority, backed by public opinion. Advocates of this solution would host fund-raisers and hand out badges boasting a highly recognizable symbol of the effort, similar to the iconic pink ribbon for breast cancer research. Federal agencies would dole out generous research grants to universities.
The reality is that such a solution will soon exist. The solution is to remove human drivers from the equation and replace them with intelligent software and sensors. If our society were to pull together to make the development of driverless cars our next cultural “Apollo moment,” we could save millions of lives. A study by the Eno Center for Transportation estimates that in the United States alone, if 90 percent of the cars on the roads were autonomous vehicles, the number of driving-related deaths would fall from 32,400 a year to 11,300.11
It would be irresponsible, however, not to mention at this point that replacing human drivers with robots would not solve everything. Humans might simply find new ways to misbehave. Some analysts point out that while autonomous vehicles offer some safety benefits, they could introduce new risks, such as hacking. Passengers could take on new risks because they feel safe, for example, not wearing their seat belts or weaving in and out of platoons of self-driving vehicles for sport.12
Even if we factor in the new forms of poor human judgment that will no doubt appear, driverless cars will still make road travel safer. Safer roads and highways won’t be their only benefit, though. Driverless cars will make more efficient use of the roads, reducing traffic jams and air pollution. As people gain access to safe and convenient personal transportation, they will also gain new opportunities in their choice of where they live, work, and play.
Losing the burden of a tedious commute would be one direct benefit of driverless cars. Another would be that more people could enjoy convenient personal mobility. According to U.S. transportation data, each day 586 older drivers are injured in car accidents.13 Unfortunately, the decision to stop driving is usually one that people resist for as long as possible. Driverless cars would give the elderly and the impaired the ability to move around the world on their own.
The cost of not aggressively embracing driverless cars can be counted in lives, time, pollution, and lost opportunities. Yet not everyone is convinced of their value. While writing this book, we discovered a few pieces of misinformation that stubbornly remain in wide circulation and are used by opponents of driverless cars when arguing against policy that would favor their adoption. We have distilled these bits of misinformation into seven myths, which we will debunk over the course of this book.
The timeline for the adoption of driverless cars is not a simple, linear one. Instead, the transition to a world of driverless vehicles will happen gradually. It’s impossible to select a specific year when cars will become driverless, for two reasons. The first is that adoption of driverless cars will take place in some environments and countries before others. The second is that car companies are currently promoting a staged approach to autonomy; if they succeed, humans might drive just part of the time, making it impossible to pin down an exact transition point.
The first autonomous vehicles will appear in special environments before they appear on mainstream roads. Mines and farms already use autonomous vehicles. Freight trucking will likely also be an early adopter.
In cities, at first, driverless-car adoption will be cautious, taking the form of low-speed shuttles that drive slowly in enclosed and structured environments such as airports or resorts. In the United Kingdom, for example, the town of Milton Keynes is testing two-seater, electric, autonomous taxi pods that will drive people on pavements and footpaths. As time passes and these driverless shuttles perform well and prove their safety, the speeds at which they operate and their driving range will gradually increase. We predict that Google’s first sales of driverless cars will be not to consumers for everyday driving, but more likely to corporations and some municipalities as a niche transportation solution. At some point there will be a quiet crossing over, and the autonomous vehicles will venture outside their enclosed territories and onto city highways.
Another aspect of the transition to driverless cars is location, that is, where the first driverless cars will come into everyday use. Some countries will adopt driverless cars sooner than others. Within countries, some states or provinces will agree to legalize driverless cars before others.
One way to define “completion” would be to insist that complete autonomy will have been achieved only when 100 percent of the cars on the road are fully autonomous, 100 percent of the time. Complete autonomy defined this way means that achievement of full autonomy could take as long as a century. Because of the several ways in which adoption of driverless cars could take place, there’s little consensus on the timeline.
Some of this wide variance on an adoption timeline stems from practical considerations. Since cars must meet strict safety and emissions requirements, new vehicle technology tends to be implemented more slowly than technologies used for other applications. In addition, cars are expensive, and therefore people hang onto them for several years. The fact that people buy and discard cars at a slower rate than they do their smart phones will result in a transition phase from human-driven to driverless that will span several decades.
In general, car companies and transportation officials take a longer view, estimating that driverless cars will be a mainstay on public roads sometime after the year 2025. According to the automotive market research firm IHS, the first sales of autonomous vehicles will begin around the year 2025.14 IHS analysts estimate that by the year 2035, roughly 10 percent of new cars sold will be autonomous, a total number of 11.8 million cars each year. After 2050, IHS predicts that almost all new vehicles sold will be autonomous.
Car companies have preferred a staged, gradual approach to autonomous driving, another factor that makes it impossible to pin down an exact date for the adoption of driverless cars. A typical headline for a press release from a car company promoting its driver-assist technology will trumpet, “Company X to launch its driverless-car product by 2020.” A closer read, however, reveals that the product Company X is discussing is actually a feature that will enable a car to guide itself through a specific task, for example, parking itself under controlled conditions, or perhaps some kind of glorified cruise control and lane keeping combination.
Tech companies are a bit more optimistic about the day when cars will be capable of fully driving themselves in all environments. Google and Tesla are firm in their conviction that the future of driving lies in fully autonomous vehicles, although the exact date and details are to be determined. In October 2014, Tesla’s Elon Musk told Bloomberg Television that “five or six years from now we will be able to achieve true autonomous driving where you could literally get in the car, go to sleep and wake up at your destination.” But he cautioned, “it will then take another two to three years for regulatory approval.”
Analyst Todd Litman predicts that without a federal mandate to speed along adoption, deployment will follow the pattern of the adoption of automatic transmissions, a process that took nearly five decades. Litman calculates that even if driverless cars are made legal by the year 2020, the gradual replacement of human-driven cars with driverless ones will take decades. He estimates that by the year 2050, driverless cars will make up 80–100 percent of new car sales. Yet, since it takes several years for the average car to reach the end of its lifespan, 40–60 percent of the cars on the roads will still be driven by humans.15
Turning over the world’s population of human-driven cars will be no small matter. In the United States there are roughly 250 million active cars on the roads, a group analysts call “the active car park.”16 Every year, 13–14 million of the cars in the active car park are retired and consigned to the scrap heap. Even if it were possible to purchase a well-tested and legal autonomous vehicle immediately, because of the 10–15-year average life span of a modern automobile it would take nearly twenty years for all of the old, human-driven cars to be taken off the road.
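The arithmetic behind “nearly twenty years” is straightforward, using the rough figures just quoted (the midpoint of 13–14 million retirements per year is our own simplifying assumption):

```python
# Rough fleet-turnover arithmetic using the figures quoted above.
active_cars = 250e6        # cars on U.S. roads ("the active car park")
retired_per_year = 13.5e6  # midpoint of the 13-14 million scrapped each year
years_to_turn_over = active_cars / retired_per_year
print(round(years_to_turn_over, 1))  # ~18.5 years, i.e., nearly two decades
```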
No matter whose predictions turn out to be correct, one thing is clear: the transition to fully self-driving cars will take decades to complete. While the details of who drives what, when, and where remain to be worked out, humans and robots will share the road for decades ahead.
In the following chapters we’ll explore driverless cars from several angles, debunking the myths that limit their development along the way. We lay out new cityscapes where parking lots are repurposed as human-friendly space and commutes are no longer painful. We delve into the robotics technologies that give modern driverless cars the ability to “see,” “react,” and “think.” We explore what will happen to the auto, media, and retail industries. We explore the long, rich history of previous efforts to liberate cars from human drivers, culminating in today’s autonomous vehicles that are the fruit of decades of academic research in artificial intelligence and machine learning.