THE SUN WAS BARELY above the horizon on the morning of 13 March 2004, but the Slash X saloon bar, in the middle of the Mojave desert, was already thronging with people.1 The bar is on the outskirts of Barstow, a small town between Los Angeles and Las Vegas, near where Uma Thurman was filmed crawling out of a coffin for Kill Bill: Vol. 2.2 It’s a place popular with cowboys and off-roaders, but on that spring day it had drawn the attention of another kind of crowd. The makeshift stadium that had been built outside in the dust was packed with crazy engineers, excited spectators and foolhardy petrolheads who all shared the same dream: to be the first people on earth to witness a driverless car win a race.
The race had been organized by the US Defense Advanced Research Projects Agency, DARPA (nicknamed the Pentagon’s ‘mad science’ division).3 The agency had been interested in unmanned vehicles for a while, and with good reason: roadside bombs and targeted attacks on military vehicles were a major cause of deaths on the battlefield. A few years earlier, the US government had set itself the goal of making one-third of its ground military vehicles autonomous by 2015.4
Up to that point, progress had been slow and expensive. DARPA had spent around half a billion dollars over two decades funding research work at universities and companies in the hope of achieving their ambition.5 But then they had an ingenious idea: why not create a competition? They would openly invite any interested people across the country to design their own driverless cars and race them against each other on a long-distance track, with a prize of $1 million for the winner.6 It would be the first event of its kind in the world, and a quick and cheap way to give DARPA a head start in pursuing their goal.
The course was laid out over 142 miles, and DARPA hadn’t made it easy. There were steep climbs, boulders, dips, gullies, rough terrain and the odd cactus to contend with. The driverless cars would have to navigate dirt tracks that were sometimes only a few feet wide. Two hours before the start, the organizers gave each team a CD of GPS coordinates.7 These represented two thousand waypoints that were sprinkled along the route like breadcrumbs – just enough to give the cars a rough sense of where to go, but not enough to help them navigate the obstacles that lay ahead.
The challenge was daunting, but 106 plucky teams applied in that first year. Fifteen competitors passed the qualifying rounds and were considered safe enough to take on the track. Among them were cars that looked like dune buggies, cars that looked like monster trucks, and cars that looked like tanks. There were rumours that one contender had mortgaged their house to build their car, while another had two surfboards stuck on their vehicle roof to make it stand out. There was even a self-balancing motorbike.8
On the morning of the race, a ramshackle line-up of cars gathered at Slash X along with a few thousand spectators. Without any drivers inside, the vehicles took turns approaching the start, each one looking more like it belonged in Mad Max or Wacky Races than the last. But looks didn’t matter. All they had to do was get around the track, without any human intervention, in less than ten hours.
Things didn’t quite go as planned. One car flipped upside down in the starting area and had to be withdrawn.9 The motorbike barely cleared the start line before it rolled on to its side and was declared out of the race. One car hit a concrete wall 50 yards in. Another got tangled in a barbed-wire fence. Yet another got caught between two tumbleweeds and – thinking they were immovable objects – became trapped reversing back and forth, back and forth, until someone eventually intervened.10 Others still went crashing into boulders and careering into ditches. Axles were snapped, tyres ripped, bodywork went flying.11 The scene around the Slash X saloon bar began to look like a robot graveyard.
The top-scoring vehicle, an entry by Carnegie Mellon University, managed an impressive 7 miles before misjudging a hill – at which point the tyres started spinning and, without a human to help, carried on spinning until they caught fire.12 By 11 a.m. it was all over. A DARPA organizer climbed into a helicopter and flew over to the finish line to inform the waiting journalists that none of the cars would be getting that far.13
The race had been oily, dusty, noisy and destructive – and had ended without a winner. All those teams of people had worked for a year on a creation that had lasted at best a few minutes. But the competition was anything but a disaster. The rivalry had led to an explosion of new ideas, and by the next Grand Challenge in 2005, the technology was barely recognizable.
Second time around, all but one of the entrants surpassed the 7 miles achieved in 2004. An astonishing five different cars managed to complete the full race distance of 132 miles without any human intervention.14
Now, little more than a decade later, it’s widely accepted that the future of transportation is driverless. In late 2017, Philip Hammond, the British Chancellor of the Exchequer, announced the government’s intention to have fully driverless cars – without a safety attendant on board – on British roads by 2021. Daimler has promised driverless cars by 2020,15 Ford by 2021,16 and other manufacturers have made their own, similar forecasts.
Talk in the press has moved on from questioning whether driverless cars will happen to addressing the challenges we’ll face when they do. ‘Should your driverless car hit a pedestrian to save your life?’ asked the New York Times in June 2016;17 and, in November 2017: ‘What happens to roadkill or traffic tickets when our vehicles are in control?’18 Meanwhile, in January 2018, the Financial Times warned: ‘Trucks headed for a driverless future: unions warn that millions of drivers’ jobs will be disrupted.’19
So what changed? How did this technology go from ramshackle incompetence to revolutionary confidence in a few short years? And can we reasonably expect the rapid progress to continue?
What’s around me?
Our dream of a perfect autonomous vehicle dates all the way back to the sci-fi era of jet packs, rocket ships, tin-foil space suits and ray guns. At the 1939 World’s Fair in New York, General Motors unveiled its vision of the future. Visitors to the exhibition strapped themselves into an audio-equipped chair mounted on a conveyor that took them on a 16-minute tour of an imagined world.20 Beneath the glass, they saw a scale model of the GM dream. Superhighways that spanned the length and breadth of the country, roads connecting farmlands and cities, lanes and intersections – and roaming over all of them, automated radio-controlled cars capable of safely travelling at speeds of up to 100 miles an hour. ‘Strange?’ the voiceover asked them. ‘Fantastic? Unbelievable? Remember, this is the world of 1960!’21
There were numerous attempts over the years to make the dream a reality. General Motors tried with the Firebird II in the 1950s.22 British researchers tried adapting a Citroën DS19 to communicate with the road in the 1960s (somewhere between Slough and Reading, you’ll still find a 9-mile stretch of electric cable, left over from their experiments).23 Then came Carnegie Mellon’s ‘Navlab’ in the 1980s and the EU’s $1 billion Eureka Prometheus Project in the 1990s.24 With every new project, the dream of the driverless car seemed, tantalizingly, only just around the corner.
On the surface, building a driverless car sounds as if it should be relatively easy. Most humans manage to master the requisite skills to drive. Plus, there are only two possible outputs: speed and direction. It’s a question of how much gas to apply and how much to turn the wheel. How hard can it be?
But, as the first DARPA Grand Challenge demonstrated, building an autonomous vehicle is actually a lot trickier than it looks. Things quickly get complicated when you’re trying to get an algorithm to control a great big hunk of metal travelling at 60 miles per hour.
Take the neural networks that are used to great effect to detect tumours in breast tissue; you’d think they should be perfectly suited to helping a driverless car ‘see’ its surroundings. By 2004, neural networks (albeit in slightly more rudimentary form than today’s state-of-the-art versions) were already whirring away within prototype driverless vehicles,25 trying to extract meaning from the cameras mounted on top of the cars. There’s certainly a great deal of valuable information to be had from a camera. A neural network can understand the colour, texture, even the physical features of the scene ahead – things like lines, curves, edges and angles. The question is: what do you do with that information once you have it?
You could tell the car: ‘Only drive on something that looks like tarmac.’ But that won’t be much good in the desert, where the roads are dusty paths. You could say: ‘Drive on the smoothest thing in the image’ – but, unfortunately, the smoothest thing is almost always the sky or a glass-fronted building. You could think in quite abstract terms about how to describe the shape of a road: ‘Look for an object with two vaguely straight borders. The lines should be wide apart at the bottom of the image and taper in towards each other at the top.’ That seems pretty sensible. Except, unfortunately, it’s also how a tree looks in a photograph. Generally, it isn’t considered wise to encourage a car to drive up a tree.
The issue is that cameras can’t give you a sense of scale or distance. It’s something film directors use to their advantage all the time – think of the opening scene in Star Wars where the Star Destroyer slowly emerges against the inky blackness of space, looming dramatically over the top of the frame. You get the sense of it being a vast, enormous beast, when in reality it was filmed using a model no more than a few feet long. It’s a trick that works well on the big screen. But in a driverless car, when two thin parallel lines could either be a road ahead on the horizon or the trunk of a nearby tree, accurately judging distance becomes a matter of life and death.
Even if you use more than one camera and cleverly combine the images to build a 3D picture of the world around you, there’s another potential problem that comes from relying too heavily on neural networks, as Dean Pomerleau, an academic from Carnegie Mellon University, discovered back in the 1990s. He was working on a car called ALVINN, Autonomous Land Vehicle In a Neural Network, which was trained to understand its surroundings from the actions of a human driver. Pomerleau and others would sit at the wheel and take ALVINN on long drives, recording everything they were doing in the process. This formed the training dataset from which their neural networks would learn: drive anywhere a human would, avoid everywhere else.26
It worked brilliantly at first. After training, ALVINN was able to comfortably navigate a simple road on its own. But then ALVINN came across a bridge and it all went wrong. Suddenly, the car swerved dangerously, and Pomerleau had to grab hold of the wheel to save it from crashing.
After weeks of going through the data from the incident, Pomerleau worked out what the issue had been: the roads that ALVINN had been trained on had all had grass running along the sides. Just like those neural networks back in the ‘Medicine’ chapter, which classified huskies on the basis of snow in the pictures, ALVINN’s neural network had used the grass as a key indicator of where to drive. As soon as the grass was gone, the machine had no idea what to do.
Unlike cameras, lasers can measure distance. Vehicles that use a system called LiDAR (Light Detection and Ranging) – a technology that came to prominence around the second DARPA Grand Challenge in 2005 – fire out a pulse of laser light, time how long it takes to bounce off an obstacle and come back, and end up with a good estimate of how far away that obstacle is. It’s not all good news: LiDAR can’t help with texture or colour, it’s hopeless at reading road signs, and it’s not great over long distances. Radar, on the other hand – the same idea but with radio waves – does a good job in all sorts of weather conditions and can detect obstacles far away, even seeing through some materials, but it is completely hopeless at giving any sort of detail of the shape or structure of the obstacle.
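To make that time-of-flight idea concrete, here is a minimal sketch in Python – purely illustrative, not any manufacturer’s code – of how a round-trip timing becomes a distance estimate (the pulse travels out and back, so the path is halved):

```python
# Illustrative only: estimate range from a LiDAR pulse's round-trip time.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_range(round_trip_seconds: float) -> float:
    """Distance to the obstacle: the light covers the gap twice, out and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after 200 nanoseconds puts the obstacle about 30 metres away.
print(round(lidar_range(200e-9), 2))  # 29.98
```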
On its own, none of these data sources – the camera, the LiDAR, the radar – can do enough to understand what’s going on around a vehicle. The trick to successfully building a driverless car is combining them. Which would be a relatively easy task if they all agreed about what they were actually seeing, but is a great deal more difficult when they don’t.
Consider the tumbleweed that stumped one of the cars in the first DARPA Grand Challenge and imagine your driverless car finds itself in the same position. The LiDAR is telling you there is an obstacle ahead. The camera agrees. The radar, which can pass through the flimsy tumbleweed, is telling you there’s nothing to worry about. Which sensor should your algorithm trust?
What if the camera pulls rank? Imagine a big white truck crosses your path on a cloudy day. This time LiDAR and radar agree that the brakes need to be applied, but against the dull white sky, the camera can see nothing that represents a danger.
If that weren’t hard enough, there’s another problem. You don’t just need to worry about your sensors misinterpreting their surroundings, you need to take into account that they might mis-measure them too.
You may have noticed that blue circle on Google Maps that surrounds your location – it’s there to indicate the potential error in the GPS reading. Sometimes the blue circle will be small and accurately mark your position; at other times it will cover a much larger area and be centred on entirely the wrong place. Most of the time, it doesn’t much matter. We know where we are and can dismiss incorrect information. But a driverless car doesn’t have that independent sense of where it really is. When it’s driving down a single lane of a highway, less than 4 metres wide, it can’t rely on GPS alone for an accurate enough estimate of its position.
GPS isn’t the only reading that’s prone to uncertainty. Every measurement taken by the car will have some margin of error: radar readings, the pitch, the roll, the rotations of the wheels, the inertia of the vehicle. Nothing is ever 100 per cent reliable. Plus, different conditions make things worse: rain affects LiDAR;27 glaring sunlight can affect the cameras;28 and long, bumpy drives wreak havoc with accelerometers.29
In the end, you’re left with a big mess of signals. Questions that seemed simple – Where are you? What’s around you? What should you do? – become staggeringly difficult to answer. It’s almost impossible to know what to believe.
Almost impossible. But not quite.
Because, thankfully, there is a route through all of this chaos – a way to make sensible guesses in a messy world. It all comes down to a phenomenally powerful mathematical formula, known as Bayes’ theorem.
The great Church of the Reverend Bayes
It’s no exaggeration to say that Bayes’ theorem is one of the most influential ideas in history. Among scientists, machine-learning experts and statisticians, it commands an almost cultish enthusiasm. Yet at its heart the idea is extraordinarily simple. So simple, in fact, that you might initially think it’s just stating the obvious.
Let me try and illustrate the idea with a particularly trivial example.
Imagine you’re sitting having dinner in a restaurant. At some point during the meal, your companion leans over and whispers that they’ve spotted Lady Gaga eating at the table opposite.
Before having a look for yourself, you’ll no doubt have some sense of how much you believe your friend’s theory. You’ll take into account all of your prior knowledge: perhaps the quality of the establishment, the distance you are from Gaga’s home in Malibu, your friend’s eyesight. That sort of thing. If pushed, it’s a belief that you could put a number on. A probability of sorts.
As you turn to look at the woman, you’ll automatically use each piece of evidence in front of you to update your belief in your friend’s hypothesis. Perhaps the platinum-blonde hair is consistent with what you would expect from Gaga, so your belief goes up. But the fact that she’s sitting on her own with no bodyguards isn’t, so your belief goes down. The point is, each new observation adds to your overall assessment.
This is all Bayes’ theorem does: offers a systematic way to update your belief in a hypothesis on the basis of the evidence.30 It accepts that you can’t ever be completely certain about the theory you’re considering, but allows you to make a best guess from the information available. So, once you realize the woman at the table opposite is wearing a dress made of meat – a fashion choice that you’re unlikely to chance upon in the non-Gaga population – that might be enough to tip your belief over the threshold and lead you to conclude that it is indeed Lady Gaga in the restaurant.
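For readers who like to see the machinery, the theorem itself fits on a single line – this is just the standard textbook statement, not anything specific to driverless cars:

\[
P(\text{hypothesis} \mid \text{evidence}) = \frac{P(\text{evidence} \mid \text{hypothesis}) \times P(\text{hypothesis})}{P(\text{evidence})}
\]

In the restaurant, the prior P(hypothesis) is how strongly you believed it was Lady Gaga before you turned around; the likelihood P(evidence | hypothesis) is how probable the meat dress would be if it really were her; and the left-hand side is your updated belief once the evidence is in.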
But Bayes’ theorem isn’t just an equation for the way humans already make decisions. It’s much more important than that. To quote Sharon Bertsch McGrayne, author of The Theory That Would Not Die: ‘Bayes runs counter to the deeply held conviction that modern science requires objectivity and precision.’31 By providing a mechanism to measure your belief in something, Bayes allows you to draw sensible conclusions from sketchy observations, from messy, incomplete and approximate data – even from ignorance.
Bayes isn’t there just to confirm our existing intuitions. It turns out that being forced to quantify your beliefs in something often leads to counter-intuitive conclusions. It’s Bayes’ theorem that explains why more men than women are falsely identified as future murderers, in the example on page 67 in the ‘Justice’ chapter. And it’s Bayes’ theorem that explains why – even if you have been diagnosed with breast cancer – the level of error in the tests means you probably don’t have it (see the ‘Medicine’ chapter, page 94). Across all branches of science, Bayes is a powerful tool for distilling and understanding what we really know.
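To see how the cancer result can be so counter-intuitive, take some purely illustrative numbers (not the book’s own figures): suppose 1 in 100 women screened actually has the cancer, the test catches 90 per cent of real cases, and it wrongly flags 9 per cent of healthy women. Bayes’ theorem then gives

\[
P(\text{cancer} \mid \text{positive test}) = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.09 \times 0.99} \approx 0.09
\]

Even after a positive result, the chance of really having the disease is only about 9 per cent, because the healthy majority throws up far more false alarms than the sick minority produces true positives.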
But where the Bayesian way of thinking really comes into its own is when you’re trying to consider more than one hypothesis simultaneously – for example, in attempting to diagnose what’s wrong with a patient on the basis of their symptoms,* or finding the position of a driverless car on the basis of sensor readings. In theory, any disease, any point on the map, could represent the underlying truth. All you need to do is weigh up the evidence to decide which is most likely to be right.
And on that point, finding the location of a driverless car turns out to be rather similar to a problem that puzzled Thomas Bayes, the British Presbyterian minister and talented mathematician after whom the theorem is named. Back in the mid-1700s, he wrote an essay which included details of a game he’d devised to explain the problem. It went something like this:32
Imagine you’re sitting with your back to a square table. Without you seeing, I throw a red ball on to the table. Your job is to guess where it landed. It’s not going to be easy: with no information to go on, there’s no real way of knowing where on the table it could be.
So, to help your guess, I throw a second ball of a different colour on to the same table. Your job is still to determine the location of the first ball, the red one, but this time I’ll tell you where the second ball ends up on the table relative to the first: whether it’s in front, behind, to the left or right of the red ball. And you get to update your guess.
Then we repeat. I throw a third, a fourth, a fifth ball on to the table, and every time I’ll tell you where each one lands relative to the very first red one – the one whose position you’re trying to guess.
The more balls I throw and the more information I give you, the clearer the picture of the red ball’s position should become in your mind. You’ll never be absolutely sure of exactly where it sits, but you can keep updating your belief about its position until you end up with an answer you’re confident in.
In some sense, the true position of the driverless car is analogous to that of the red ball. Instead of a person sitting with their back to the table, there’s an algorithm trying to gauge exactly where the car is at that moment in time, and instead of the other balls thrown on to the table there are the data sources: the GPS, the inertia measurements and so on. None of them tells the algorithm where the car is, but each adds a little bit more information the algorithm can use to update its belief. It’s a trick known as probabilistic inference – using the data (plus Bayes) to infer the true position of the object. Packaged up correctly, it’s just another kind of machine-learning algorithm.
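As a flavour of how this looks in practice, here’s a toy sketch in Python – my own illustration, not anyone’s production code – that plays the red-ball game on a one-dimensional road divided into metre-wide cells, sharpening a vague prior with a handful of noisy position readings:

```python
# A toy Bayes filter: infer a car's position on a 100-metre stretch of road
# from noisy readings, in exactly the spirit of the red-ball game.
import numpy as np

cells = np.arange(100)              # candidate positions, 0..99 metres
belief = np.full(100, 1 / 100)      # prior: the car could be anywhere

def update(belief, reading, sigma=5.0):
    """Bayes' rule: multiply the prior by the likelihood of the reading, then renormalise."""
    likelihood = np.exp(-0.5 * ((cells - reading) / sigma) ** 2)
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Three noisy 'GPS' readings of a car that is actually at the 42-metre mark.
for reading in [45, 40, 41]:
    belief = update(belief, reading)

print(cells[np.argmax(belief)])     # 42 – the most probable position
```

Each reading on its own is a few metres off, but together they pin the car down. A real vehicle does essentially this, continuously, folding in GPS fixes, wheel rotations, inertia and laser returns rather than three made-up numbers.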
By the turn of the millennium, engineers had had enough practice with cruise missiles, rocket ships and aircraft to know how to tackle the position problem. Getting a driverless car to answer the question ‘Where am I?’ still wasn’t trivial, but with a bit of Bayesian thinking it was at least achievable.
Between the robot graveyard of the 2004 Grand Challenge and the awe-inspiring technological triumph of the 2005 event – when five different vehicles managed to race more than 100 miles without any human input – many of the biggest leaps forward were thanks to Bayes. It was algorithms based on Bayesian ideas that helped solve the other questions the car needed to answer: ‘What’s around me?’ and ‘What should I do?’†
So, should your driverless car hit a pedestrian to save your life?
Let’s pause for a moment to consider the second of those questions. Because, on this very topic, in early autumn 2016, tucked away in a quiet corner of an otherwise bustling exhibition hall at the Paris Auto Show, a Mercedes-Benz spokesperson made a rather exceptional statement. Christoph von Hugo, the manager of driver assistance systems and active safety for the company, was asked in an interview what a driverless Mercedes might do in a crash.
‘If you know you can save at least one person, at least save that one,’ he replied.33
Sensible logic, you would think. Hardly headline news.
Except, Hugo wasn’t being asked about any old crash. He was being tested on his response to a well-worn thought experiment dating back to the 1960s, involving a very particular kind of collision. The interviewer was asking him about a curious conundrum that forces a choice between two evils. It’s known as the trolley problem, after the runaway tram that was the subject of the original formulation. In the case of driverless cars, it goes something like this.
Imagine, some years into the future, you’re a passenger in an autonomous vehicle, happily driving along a city street. Ahead of you a traffic light turns red, but a mechanical failure in your car means you’re unable to stop. A collision is inevitable, but your car has a choice: should it swerve off the road into a concrete wall, causing certain death to anyone inside the vehicle? Or should it carry on going, saving the lives of anyone inside, but killing the pedestrians now crossing the road? What should the car be programmed to do? How do you decide who should die?
No doubt you have your own opinion. Perhaps you think the car should simply try to save as many lives as possible. Or perhaps you think that ‘thou shalt not kill’ should over-ride any calculations, leaving the one sitting in the machine to bear the consequences.‡
Hugo was clear about the Mercedes position. ‘Save the one in the car.’ He went on: ‘If all you know for sure is that one death can be prevented, then that’s your first priority.’
In the days following the interview, the internet was awash with articles berating Mercedes’ stance. ‘Their cars will act much like the stereotypical entitled European luxury car driver,’34 wrote the author of one piece. Indeed, in a survey published in Science that very summer,35 76 per cent of respondents felt it would be more moral for driverless vehicles to save as many lives as possible, thus killing the people within the car. Mercedes had come down on the wrong side of popular opinion.
Or had they? Because when the same study asked participants if they would actually buy a car which would murder them if the circumstances arose, they suddenly seemed reluctant to sacrifice themselves for the greater good.
This is a conundrum that divides opinion – and not just in what people think the answer should be. As a thought experiment, it remains a firm favourite of technology reporters and other journalists, but all the driverless car experts I interviewed rolled their eyes as soon as the trolley problem was mentioned. Personally, I still have a soft spot for it. Its simplicity forces us to recognize something important about driverless cars, to challenge how we feel about an algorithm making a value judgement on our own, and others’, lives. At the heart of this new technology – as with almost all algorithms – are questions about power, expectation, control, and delegation of responsibility. And about whether we can expect our technology to fit in with us, rather than the other way around. But I’m also sympathetic to the aloof reaction it receives in the driverless car community. They, more than anyone, know how far away we are from having to worry about the trolley problem as a reality.
Breaking the rules of the road
Bayes’ theorem and the power of probability have driven much of the innovation in autonomous vehicles ever since the DARPA challenge. I asked Paul Newman, professor of robotics at the University of Oxford and founder of Oxbotica, a company that builds driverless cars and tests them on the streets of Britain, how his latest autonomous vehicles worked, and he explained as follows: ‘It’s many, many millions of lines of code, but I could frame the entire thing as probabilistic inference. All of it.’36
But while Bayesian inference goes some way towards explaining how driverless cars are possible, it also explains why full autonomy, free from any input by a human driver, is a very, very difficult nut to crack.
Imagine, Paul Newman suggests, ‘you’ve got two vehicles approaching each other at speed’ – say, travelling in different directions down a gently curved A-road. A human driver will be perfectly comfortable in that scenario, knowing that the other car will stick to its own lane and pass safely a couple of metres to the side. ‘But for the longest time,’ Newman explains, ‘it does look like you’re going to hit each other.’ How do you teach a driverless car not to panic in that situation? You don’t want the vehicle to drive off the side of the road, trying to avoid a collision that was never going to happen. But, equally, you don’t want it to be complacent if you really do find yourself on the verge of a head-on crash. Remember, too, these cars are only ever making educated guesses about what to do. How do you get it to guess right every single time? That, says Newman, ‘is a hard, hard problem’.
It’s a problem that puzzled the experts for a long time, but it does have a solution. The trick is to build in a model for how other – sane – drivers will behave. Unfortunately, the same can’t be said of other nuanced driving scenarios.
Newman explains: ‘What’s hard is all the problems with driving that have nothing to do with driving.’ For instance, teaching an algorithm to understand that hearing the tunes of an ice-cream van, or passing a group of kids playing with a ball on the pavement, might mean you need to be extra cautious. Or to recognize the confusing hopping of a kangaroo – something that, at the time of writing, Volvo has admitted to struggling with.37 Probably not much of a problem in rural Surrey, but something the cars need to master if they’re to be roadworthy in Australia.
Even harder, how do you teach a car that it should sometimes break the rules of the road? What if you’re sitting at a red light and someone runs in front of your car and frantically beckons you to edge forwards? Or if an ambulance with its lights on is trying to get past on a narrow street and you need to mount the pavement to let it through? Or if an oil tanker has jack-knifed across a country lane and you need to get out of there by any means possible?
‘None of these are in the Highway Code,’ Newman rightly points out. And yet a truly autonomous car needs to know how to deal with all of them if it’s to exist without ever having any human intervention. Even in emergencies.
That’s not to say these are insurmountable problems. ‘I don’t believe there’s any level of intelligence that we won’t be able to get a machine to,’ Newman told me. ‘The only question is when.’
Unfortunately, the answer to that question is: probably not any time soon. That driverless dream we’re all waiting for might be quite a lot further away than we think.
Because there’s another layer of difficulty to contend with when trying to build that sci-fi fantasy of a go-anywhere, do-anything, steering-wheel-free driverless car, and it’s one that goes well beyond the technical challenge. A fully autonomous car will also have to deal with the tricky problem of people.
Jack Stilgoe, a sociologist from University College London and an expert in the social impact of technology, explains: ‘People are mischievous. They’re active agents, not just passive parts of the scenery.’38
Imagine, for a moment, a world where truly, perfectly autonomous vehicles exist. The number one rule in their on-board algorithms will be to avoid collisions wherever possible. And that changes the dynamics of the road. If you stand in front of a driverless car – it has to stop. If you pull out in front of one at a junction – it has to behave submissively.
In the words of one participant in a 2016 focus group at the London School of Economics: ‘You’re going to mug them right off. They’re going to stop and you’re just going to nip round.’ Translation: these cars can be bullied.
Stilgoe agrees: ‘People who’ve been relatively powerless on roads up ’til now, like cyclists, may start cycling very slowly in front of self-driving cars, knowing that there is never going to be any aggression.’
Getting around this problem might mean bringing in stricter rules to deal with people who abuse their position as cyclists or pedestrians. It’s been done before, of course: think of jay-walking. Or it could mean forcing everything else off the roads – as happened with the introduction of the motor car – which is why, today, you won’t find bicycles, horses, carts, carriages or pedestrians on a motorway.
If we want fully autonomous cars, we’ll almost certainly have to do something similar again and limit the number of aggressive drivers, ice-cream vans, kids playing in the road, roadwork signs, difficult pedestrians, emergency vehicles, cyclists, mobility scooters and everything else that makes the problem of autonomy so difficult. That’s fine, but it’s a little different from the way the idea is currently being sold to us.
‘The rhetoric of autonomy and transport is all about not changing the world,’ Stilgoe tells me. ‘It’s about keeping the world as it is but making and allowing a robot to just be as good as and then better than a human at navigating it. And I think that’s stupid.’
But hang on, some of you may be thinking. Hasn’t this problem already been cracked? Hasn’t Waymo, Google’s autonomous car project, driven millions of miles already? Aren’t Waymo’s fully autonomous cars (or at least, close to fully autonomous cars) currently driving around on the roads of Phoenix, Arizona?
Well, yes. That’s true. But not every mile of road is created equally. Most miles are so easy to drive, you can do it while daydreaming. Others are far more challenging. At the time of writing, Waymo cars aren’t allowed to go just anywhere: they’re ‘geo-fenced’ into a small, pre-defined area. So too are the driverless cars Daimler and Ford propose to have on the roads by 2020 and 2021 respectively. They’re ride-hailing cars confined to a pre-decided go-zone. And that does make the problem of autonomy quite a lot simpler.
Paul Newman thinks this is the future of driverless cars we can expect: ‘They’ll come out working in an area that’s very well known, where their owners are extremely confident that they’ll work. So it could be part of a city, not in the middle of a place with unusual roads or where cows could wander into the path. Maybe they’ll work at certain times of day and in certain weather situations. They’re going to be operated as a transport service.’
That’s not quite the same thing as full autonomy. Here’s Jack Stilgoe’s take on the necessary compromise: ‘Things that look like autonomous systems are actually systems in which the world is constrained to make them look autonomous.’
The vision we’ve come to believe in is like a trick of the light. A mirage that promises a luxurious private chauffeur for all but, close up, is actually just a local minibus.
If you still need persuading, I’ll leave the final word on the matter to one of America’s biggest automotive magazines – Car and Driver:
No car company actually expects the futuristic, crash-free utopia of streets packed with driverless vehicles to transpire anytime soon, nor for decades. But they do want to be taken seriously by Wall Street as well as stir up the imaginations of a public increasingly disinterested in driving. And in the meantime, they hope to sell lots of vehicles with the latest sophisticated driver-assistance technology.39
So how about that driver-assistance technology? After all, driverless cars are not an all-or-nothing proposition.
Driverless technology is categorized using six different levels: from level 0 – no automation whatsoever – up to level 5 – the fully autonomous fantasy. In between, they range from cruise control (level 2) to geo-fenced autonomous vehicles (level 4) and are colloquially referred to as level 1: feet off; level 2: hands off; level 3: eyes off; level 4: brain off.
So, maybe level 5 isn’t on our immediate horizon, and level 4 won’t be quite what it’s cracked up to be, but there’s a whole lot of automation to be had on the way up. What’s wrong with just slowly working up the levels in our private cars? Build cars with steering wheels and brake pedals and drivers in driver’s seats, and just allow a human to step in and take over in an emergency? Surely that’ll do until the technology improves?
Unfortunately, things aren’t quite that simple. Because there’s one last twist in the tale. A whole host of other problems. An inevitable obstacle for anything short of total human-free driving.
The company baby
Among the pilots at Air France, Pierre-Cédric Bonin was known as a ‘company baby’.40 He had joined the airline at the tender age of 26, with only a few hundred hours of flying time under his belt, and had grown up in the airline’s fleet of Airbuses. By the time he stepped aboard the fated flight of AF447, aged 32, he had managed to clock up a respectable 2,936 hours in the air, although that still made him by far the least experienced of the three pilots on board.41
None the less, it was Bonin who sat at the controls of Air France flight 447 on 31 May 2009, as it took off from the tarmac of Rio de Janeiro–Galeão International Airport and headed home to Paris.42
This was an Airbus A330, one of the most sophisticated commercial aircraft ever built. Its autopilot system was so advanced that it was practically capable of completing an entire flight unaided, apart from take-off and landing. And even when the pilot was in control, it had a variety of built-in safety features to minimize the risk of human error.
But there’s a hidden danger in building an automated system that can safely handle virtually every issue its designers can anticipate. If a pilot is only expected to take over in exceptional circumstances, they’ll no longer maintain the skills they need to operate the system themselves. So they’ll have very little experience to draw on to meet the challenge of an unanticipated emergency.
And that’s what happened with Air France flight 447. Although Bonin had accumulated thousands of hours in an Airbus cockpit, his actual experience of flying an A330 by hand was minimal. His role as a pilot had mostly been to monitor the automatic system. It meant that when the autopilot disengaged during that evening’s flight, Bonin didn’t know how to fly the plane safely.43
The trouble started when ice crystals began to form inside the air-speed sensors built into the fuselage. Unable to take a sensible reading, the autopilot sounded an alarm in the cabin and passed responsibility to the human crew. This in itself was not cause for concern. But when the plane hit a small bump of turbulence, the inexperienced Bonin over-reacted. As the aircraft began to roll gently to the right, Bonin grabbed the side-stick and pulled it to the left. Crucially, at the same time, he pulled back on the stick, sending the aircraft into a dramatically steep climb.44
As the air thinned around the plane, Bonin kept pulling back tightly on the stick until the nose of the aircraft was so high that the air could no longer flow smoothly over the wings. The wings effectively became wind-breaks, and the aircraft dramatically lost lift, falling, nose-up, out of the sky.
Alarms sounded in the cockpit. The captain burst back in from the rest cabin. AF447 was descending towards the ocean at 10,000 feet per minute.
By now, the ice crystals had melted, there was no mechanical malfunction, and the ocean was far enough below them that they could still recover in time. Bonin and his co-pilot could have easily rescued everyone on board in just 10–15 seconds, simply by pushing the stick forward, dropping the aircraft’s nose and allowing the air to rush cleanly over the wings again.45
But in his panic, Bonin kept the side-stick pulled back. No one realized he was the one causing the issue. Precious seconds ticked by. The captain suggested levelling the wings. They briefly discussed whether they were ascending or descending. Then, within 8,000 feet of sea level, the co-pilot took the controls.46
‘Climb . . . climb . . . climb . . . climb . . .’ the co-pilot is heard shouting.
‘But I’ve had the stick back the whole time!’ Bonin replied.
The penny dropped for the captain. He finally realized they had been free-falling in an aerodynamic stall for more than three minutes and ordered them to drop the nose. Too late. Tragically, by now, they were too close to the surface. Bonin screamed: ‘Damn it! We’re going to crash. This can’t be happening!’47 Moments later the aircraft plunged into the Atlantic, killing all 228 souls on board.
Ironies of automation
Twenty-six years before the Air France crash, in 1983, the psychologist Lisanne Bainbridge wrote a seminal essay on the hidden dangers of relying too heavily on automated systems.48 Build a machine to improve human performance, she explained, and it will lead – ironically – to a reduction in human ability.
By now, we’ve all borne witness to this in some small way. It’s why people can’t remember phone numbers any more, why many of us struggle to read our own handwriting and why lots of us can’t navigate anywhere without GPS. With technology to do it all for us, there’s little opportunity to practise our skills.
There is some concern that the same might happen with self-driving cars – where the stakes are a lot higher than with handwriting. Until we get to full autonomy, the car will still sometimes unexpectedly hand back control to the driver. Will we be able to remember instinctively what to do? And will teenage drivers of the future ever have the chance to master the requisite skills in the first place?
But even if all drivers manage to stay competent§ (allowing for generous interpretation of the word ‘stay’), there’s another issue we’ll still have to contend with. Because what the human driver is asked to do before autopilot cuts out is also important. There are only two possibilities. And – as Bainbridge points out – neither is particularly appealing.
A level 2, hands-off car will expect the driver to pay careful attention to the road at all times.49 It’s not good enough to be trusted on its own and will need your careful supervision. Wired once described this level as ‘like getting a toddler to help you with the dishes’.50
At the time of writing, Tesla’s Autopilot is one example of this approach.51 It’s currently like a fancy cruise control – it’ll steer and brake and accelerate on the highway, but expects the driver to be alert and attentive and ready to step in at all times. To make sure you’re paying attention, an alarm will sound if you remove your hands from the wheel for too long.
But, as Bainbridge wrote in her essay, that’s not an approach that’s going to end well. It’s just unrealistic to expect humans to be vigilant: ‘It’s impossible for even a highly motivated human to maintain effective visual attention towards a source of information, on which very little happens, for more than about half an hour.’52
There’s some evidence that people have struggled to heed Tesla’s insistence that they keep their attention on the road. Joshua Brown, who died at the wheel of his Tesla in 2016, had been using Autopilot mode for 37½ minutes when his car hit a truck that was crossing his lane. The investigation by the National Highway Traffic Safety Administration concluded that Brown had not been looking at the road at the time of the crash.53 The accident was headline news around the world, but this hasn’t stopped some foolhardy YouTubers from posting videos enthusiastically showing how to trick your car into thinking you’re paying attention. Supposedly, taping a can of Red Bull54 or wedging an orange55 into the steering wheel will stop the car setting off those pesky alarms reminding you of your responsibilities.
Other programmes are finding the same issues. Although Uber’s driverless cars need human intervention every 13 miles,56 getting drivers to pay attention remains a struggle. On 18 March 2018, an Uber self-driving vehicle fatally struck a pedestrian. Video footage from inside the car showed that the ‘human monitor’ sitting behind the wheel was looking away from the road in the moments before the collision.57
This is a serious problem, but there is an alternative option. The car companies could accept that humans will be humans, and acknowledge that our minds will wander. After all, being able to read a book while driving is part of the appeal of self-driving cars. This is the key difference between level 2: ‘hands off’ and level 3: ‘eyes off’.
The latter certainly presents more of a technical challenge than level 2, but some manufacturers have already started to build their cars to accommodate our inattention. Audi’s traffic-jam pilot is one such example.58 It can completely take over when you’re in slow-moving highway traffic, leaving you to sit back and enjoy the ride. Just be prepared to step in if something goes wrong.¶
There’s a reason why Audi has limited its system to slow-moving traffic on limited-access roads. The risks of catastrophe are lower in highway congestion. And that’s important. Because as soon as a human stops monitoring the road, you’re left with the worst possible combination of circumstances when an emergency happens.
A driver who’s not paying attention will have very little time to assess their surroundings and decide what to do. Imagine sitting in a self-driving car, hearing an alarm and looking up from your book to see a truck ahead shedding its load into your path. In an instant, you’ll have to process all the information around you: the motorbike in the left lane, the van braking hard ahead, the car in the blind spot on your right. You’d be most unfamiliar with the road at precisely the moment you need to know it best; add in the lack of practice, and you’ll be as poorly equipped as you could be to deal with the situations demanding the highest level of skill.
It’s a fact that has also been borne out in experiments with driverless car simulations. One study, which let people read a book or play on their phones while the car drove itself, found that it took up to 40 seconds after an alarm sounded for them to regain proper control of the vehicle.59 That’s exactly what happened with Air France flight 447. Captain Dubois, who should have been easily capable of saving the plane, took around one minute too long to realize what was happening and come up with the simple solution that would have solved the problem.60
Ironically, the better self-driving technology gets, the worse these problems become. A sloppy autopilot that sets off an alarm every 15 minutes will keep a driver continually engaged and in regular practice. It’s the smooth and sophisticated automatic systems that are almost always reliable that you’ve got to watch out for.
This is why Gill Pratt, who heads up the Toyota Research Institute, has said:
The worst case is a car that will need driver intervention once every 200,000 miles . . . An ordinary person who has a [new] car every 100,000 miles would never see it [the automation hand over control]. But every once in a while, maybe once for every two cars that I own, there would be that one time where it suddenly goes ‘beep beep beep, now it’s your turn!’ And the person, typically having not seen this for years and years, would . . . not be prepared when that happened.61
Great expectations
Despite all this, there’s good reason to push ahead into the self-driving future. The good still outweighs the bad. Driving remains one of the biggest causes of avoidable deaths in the world. If the technology is remotely capable of reducing the number of fatalities on the roads overall, you could argue that it would be unethical not to roll it out.
And there’s no shortage of other advantages: even simple self-driving aids can reduce fuel consumption62 and ease traffic congestion.63 Plus – let’s be honest – the idea of taking your hands off the steering wheel while doing 70 miles an hour, even if it’s only for a moment, is just . . . cool.
But think back to Bainbridge’s warnings: they hint at a problem with how current self-driving technology is being framed.
Take Tesla, one of the first car manufacturers to bring an autopilot to the market. There’s little doubt that their system has had a net positive impact, making driving safer for those who use it – you don’t need to look far to find online videos of the ‘Forward Collision Warning’ feature recognizing the risk of an accident before the driver does, sounding an alarm and saving the car from a crash.64
But there’s a slight mismatch between what the cars can do – with what’s essentially a fancy forward-facing parking sensor and clever cruise control – and the language used to describe them. For instance, in October 2016 an announcement on the Tesla site65 stated that ‘all Tesla cars being produced now have full self-driving hardware’.# According to an article in The Verge, Elon Musk, product architect of Tesla, added: ‘The full autonomy update will be standard on all Tesla vehicles from here on out.’66 But that phrase ‘full autonomy’ is arguably at odds with the warning users must accept before using the current autopilot: ‘You need to maintain control and responsibility of your vehicle.’67
Expectations are important. You may disagree, but I think that people shoving oranges in their steering wheels – or worse, as I found in the darker corners of the Web, creating and selling devices that ‘[allow] early adopters to [drive, while] reducing or turning off the autopilot check-in warning’** – are the inevitable result of Tesla’s overpromises.
Of course, Tesla responds to reports of such devices by repeating the need for the driver to control the car, and they aren’t the only company in the car industry to make bold promises. Every company on earth appeals to our fantasies to sell their products. But for me, there’s a difference between buying a perfume because I think it will make me more attractive, and buying a car because I think its full autonomy will keep me safe.
Marketing strategies aside, I can’t help but wonder if we’re thinking about driverless cars in the wrong way altogether.
By now, we know that humans are really good at understanding subtleties, at analysing context, applying experience and distinguishing patterns. We’re really bad at paying attention, at precision, at consistency and at being fully aware of our surroundings. We have, in short, precisely the opposite set of skills to algorithms.
So, why not follow the lead of the tumour-finding software in the medical world and let the skills of the machine complement the skills of the human, and advance the abilities of both? Until we get to full autonomy, why not flip the equation on its head and aim for a self-driving system that supports the driver rather than the other way around? A safety net, like ABS or traction control, that can patiently monitor the road and stay alert for a danger the driver has missed. Not so much a chauffeur as a guardian.
That is the idea behind work being done by the Toyota Research Institute. They’re building two modes into their cars. There’s the ‘chauffeur’ mode, which – like Audi’s traffic-jam pilot – could take over in heavy congestion; and there’s the ‘guardian’ mode, which runs in the background while a human drives, and acts as a safety net,68 reducing the risk of an accident if anything crops up that the driver hasn’t seen.
Volvo has adopted a similar approach. Its ‘Autonomous Emergency Braking’ system, which automatically slows the car down if it gets too close to a vehicle in front, is widely credited for the impressive safety record of the Volvo XC90. Since the car first went on sale in the UK in 2002, over 50,000 vehicles have been purchased, and not a single driver or passenger within any of them has been killed in a crash.69
Like much of the driverless technology that is so keenly discussed, we’ll have to wait and see how this turns out. But one thing is for sure – as time goes on, autonomous driving will have a few lessons to teach us that apply well beyond the world of motoring. Not just about the messiness of handing over control, but about being realistic in our expectations of what algorithms can do.
If this is going to work, we’ll have to adjust our way of thinking. We’re going to need to throw away the idea that cars should work perfectly every time, and accept that, while mechanical failure might be a rare event, algorithmic failure almost certainly won’t be any time soon.
So, knowing that errors are inevitable, knowing that if we proceed we have no choice but to embrace uncertainty, the conundrums within the world of driverless cars will force us to decide how good something needs to be before we’re willing to let it loose on our streets. That’s an important question, and it applies elsewhere. How good is good enough? Once you’ve built a flawed algorithm that can calculate something, should you let it?
* Watson, the IBM machine discussed in the ‘Medicine’ chapter, makes extensive use of so-called Bayesian inference. See https://www.ibm.com/developerworks/library/os-ind-watson/.
† The eventual winner of the 2005 race, a team from Stanford University, was described rather neatly by the Stanford mathematician Persi Diaconis: ‘Every bolt of that car was Bayesian.’
‡ A number of different versions of the scenario have appeared across the press, from the New York Times to the Mail on Sunday: What if the pedestrian was a 90-year-old granny? What if it was a small child? What if the car contained a Nobel Prize winner? All have the same dilemma at the core.
§ There are things you can do to tackle the issues that arise from limited practice. For instance, since the Air France crash, there is now an emphasis on training new pilots to fly the plane when autopilot fails, and on prompting all pilots to regularly switch autopilot off to maintain their skills.
¶ A step up from partial automation, level 3 vehicles like the Audi with the traffic-jam pilot can take control in certain scenarios, if the conditions are right. The driver still needs to be prepared to intervene when the car encounters a scenario it doesn’t understand, but no longer needs to continuously monitor the road and car. This level is a bit more like persuading a teenager to do the washing up.
# At the time of writing, in February 2018, the ‘full self-driving hardware’ is an optional extra that can be paid for at purchase, although the car is not currently running the software to complete full self-driving trips. The Tesla website says: ‘It is not possible to know exactly when each element of the functionality described above will be available.’ See https://www.tesla.com/en_NZ/modelx/design/referral/cyrus275?redirect=no.
** This is a real product called ‘The Autopilot Buddy’ which you can buy for the bargain price of $179. It’s worth noting that the small print on the website reads: ‘At no time should ‘Autopilot Buddy™’ be used on public streets.’ https://www.autopilotbuddy.com/.