Conclusion

RAHINAH IBRAHIM WAS an architect with four children, a husband who lived overseas, a job volunteering at a local hospital and a PhD at Stanford to complete. As if her life wasn’t busy enough, she had also just undergone an emergency hysterectomy and – although she was pretty much back on her feet by now – was still struggling to stand unaided for any length of time without medication. None the less, when the 38th annual International Conference on System Sciences rolled around in January 2005 she booked her flights to Hawaii and organized herself to present her latest paper to her academic peers.1

When Ibrahim arrived at San Francisco airport with her daughter, first thing on the morning of 2 January 2005, she approached the counter, handed over her documents and asked the staff if they could help her source some wheelchair assistance. They did not oblige. Her name flashed up on the computer screen as belonging to the federal no-fly list – a database set up after 9/11 to prevent suspected terrorists from travelling.

Ibrahim’s teenage daughter, left alone and distraught by the desk, called a family friend saying they’d marched her mother away in handcuffs. Ibrahim, meanwhile, was put into the back of a police car and taken to the station. They searched beneath her hijab, refused her medication and locked her in a cell. Two hours later a Homeland Security agent arrived with release papers and told her she had been taken off the list. Ibrahim made it to her conference in Hawaii and then flew on to her native Malaysia to visit family.

Ibrahim had been put on the no-fly list when an FBI agent ticked the wrong box on a form. It might be that the mistake was down to a mix-up between Jemaah Islamiyah, a terrorist organization notorious for the Bali bombings of 2002, and Jemaah Islam, a professional Malaysian organization for people who study abroad. Ibrahim was a member of the latter, but had never had any connection with the former. It was a simple mistake, but one with dramatic consequences. As soon as the error had made its way into the automated system, it had taken on an aura of authority that made it all but immune to appeal. The encounter at San Francisco wasn’t the end of the story.

On the return leg of her journey two months later, while flying home to the United States from Malaysia, Ibrahim was again stopped at the airport. This time, the resolution did not come so quickly. Her visa had been revoked on the grounds of suspected connections to terrorism. Although she was the mother of an American citizen, had her home in San Francisco and held a role at one of the country’s most prestigious universities, Ibrahim was not allowed to return to the United States. In the end, it would take almost a decade of fighting to win the case to clear her name. Almost a decade during which she was forbidden to set foot on American soil. And all because of one human error, and a machine treated as an unquestionable authority.

Human plus machine

There’s no doubting the profound positive impact that automation has had on all of our lives. The algorithms we’ve built to date boast a bewilderingly impressive list of accomplishments. They can help us diagnose breast cancer, catch serial killers and avoid plane crashes; give each of us free and easy access to the full wealth of human knowledge at our fingertips; and connect people across the globe instantly in a way that our ancestors could only have dreamed of. But in our urge to automate, in our hurry to solve many of the world’s issues, we seem to have swapped one problem for another. The algorithms – useful and impressive as they are – have left us with a tangle of complications to unpick.

Everywhere you look – in the judicial system, in healthcare, in policing, even in online shopping – there are problems with privacy, bias, error, accountability and transparency that aren’t going to go away easily. Just by virtue of some algorithms existing, we face issues of fairness that cut to the core of who we are as humans, what we want our society to look like, and how far we are willing to accept the encroaching authority of dispassionate technology.

But maybe that’s precisely the point. Perhaps thinking of algorithms as some kind of authority is exactly where we’re going wrong.

For one thing, our reluctance to question the power of an algorithm has opened the door to people who wish to exploit us. In researching this book, I have come across all manner of snake-oil salesmen willing to trade on myths and profit from our gullibility. Despite the weight of scientific evidence to the contrary, there are people selling algorithms to police forces and governments that claim to ‘predict’ whether someone is a terrorist or a paedophile based on the characteristics of their face alone. Others insist their algorithm can suggest changes to a single line in a screenplay that will make a movie more profitable at the box office.fn1 Others boldly state – without even a hint of sarcasm – that their algorithm is capable of finding your one true love.fn2

But even the algorithms that live up to their claims often misuse their authority. This book is packed full of stories of the harm that algorithms can do. The ‘budget tool’ used to arbitrarily cut financial assistance to disabled residents of Idaho. The recidivism algorithms that, thanks to historical data, are more likely to suggest a higher risk score for black defendants. The kidney injury detection system that forces millions of people to give up their most personal and private data without their consent or knowledge. The supermarket algorithm that robs a teenage girl of the chance to tell her father that she’s fallen pregnant. The Strategic Subject List that was intended to help victims of gun crime, but was used by police as a hit list. Examples of unfairness are everywhere.

And yet, pointing out the flaws in the algorithms risks implying that there is a perfect alternative we’re aiming for. I’ve thought long and hard and I’ve struggled to find a single example of a perfectly fair algorithm. Even the ones that look good on the surface – like autopilot in planes or neural networks that diagnose cancer – have problems deep down. As you’ll have read in the ‘Cars’ chapter, autopilot can put those who trained under automation at a serious disadvantage behind the wheel or the joystick. There are even concerns that the apparently miraculous tumour-finding algorithms we looked at in the ‘Medicine’ chapter don’t work as well on all ethnic groups. But examples of perfectly fair, just systems aren’t exactly abundant when algorithms aren’t involved either. Wherever you look, in whatever sphere you examine, if you delve deep enough into any system at all, you’ll find some kind of bias.

So, imagine for a moment: what if we accepted that perfection doesn’t exist? Algorithms will make mistakes. Algorithms will be unfair. That should in no way distract us from the fight to make them more accurate and less biased wherever we can – but perhaps acknowledging that algorithms aren’t perfect, any more than humans are, might just have the effect of diminishing any assumption of their authority.

Imagine that, rather than exclusively focusing our attention on designing our algorithms to adhere to some impossible standard of perfect fairness, we instead designed them to facilitate redress when they inevitably erred; that we put as much time and effort into ensuring that automatic systems were as easy to challenge as they are to implement. Perhaps the answer is to build algorithms to be contestable from the ground up. Imagine that we designed them to support humans in their decisions, rather than instruct them. To be transparent about why they came to a particular decision, rather than just inform us of the result.

In my view, the best algorithms are the ones that take the human into account at every stage. The ones that recognize our habit of over-trusting the output of a machine, while embracing their own flaws and wearing their uncertainty proudly front and centre.

This was one of the best features of the IBM Watson Jeopardy-winning machine. While the format of the quiz show meant it had to commit to a single answer, the algorithm also presented a series of alternatives it had considered in the process, along with a score indicating how confident it was in each being correct. Perhaps if likelihood of recidivism scores included something similar, judges might find it easier to question the information the algorithm was offering. And perhaps if facial recognition algorithms presented a number of possible matches, rather than just homing in on a single face, misidentification might be less of an issue.
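The design principle at work here – surfacing a ranked shortlist of alternatives with confidence scores, rather than committing to a single verdict – can be sketched in a few lines. The answers and raw scores below are invented for illustration; the point is the shape of the output.

```python
def rank_candidates(scores, top_k=3):
    """Return the top-k candidate answers with normalized confidence
    scores, instead of committing silently to a single best guess."""
    total = sum(scores.values())
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(answer, score / total) for answer, score in ranked[:top_k]]

# Hypothetical raw evidence scores for one quiz clue:
scores = {"Chicago": 8.0, "Toronto": 1.5, "Detroit": 0.5}
for answer, confidence in rank_candidates(scores):
    print(f"{answer}: {confidence:.0%}")
```

A recidivism score or a facial-recognition match presented in this form – three candidates at 80, 15 and 5 per cent, say – invites a human to weigh the evidence, where a single unqualified answer invites deference.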

The same feature is what makes the neural networks that screen breast cancer slides so effective. The algorithm doesn’t dictate which patients have tumours. It narrows down the vast array of cells to a handful of suspicious areas for the pathologist to check. The algorithm never gets tired and the pathologist rarely misdiagnoses. The algorithm and the human work together in partnership, exploiting each other’s strengths and embracing each other’s flaws.
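That division of labour – the algorithm triages, the human decides – amounts to a simple pattern: flag anything above a suspicion threshold for expert review, and never issue a diagnosis outright. A minimal sketch, with made-up region names, scores and threshold:

```python
def triage(regions, threshold=0.3):
    """Flag regions whose model suspicion score meets a threshold.
    The algorithm narrows the search; the pathologist makes the call."""
    return [name for name, score in regions if score >= threshold]

# Hypothetical suspicion scores for regions of one slide:
slide = [("region_a", 0.02), ("region_b", 0.85), ("region_c", 0.40)]
print(triage(slide))  # only these reach the pathologist's desk
```

The threshold encodes the partnership: set it low and the tireless machine over-flags but misses little, trusting the human to discard the false alarms.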

There are other examples, too – including in the world of chess, where this book began. Since losing to Deep Blue, Garry Kasparov hasn’t turned his back on computers. Quite the opposite. Instead, he has become a great advocate of the idea of ‘Centaur Chess’, where a human player and an algorithm collaborate, competing against other human–machine pairs. The algorithm assesses the possible consequences of each move, reducing the chance of a blunder, while the human remains in charge of the game.

Here’s how Kasparov puts it: ‘When playing with the assistance of computers, we could concentrate on strategic planning instead of spending so much time on calculations. Human creativity was even more paramount under these conditions.’2 The result is chess played at a higher level than has ever been seen before. Perfect tactical play and beautiful, meaningful strategies. The very best of both worlds.

This is the future I’m hoping for. One where the arrogant, dictatorial algorithms that fill many of these pages are a thing of the past. One where we stop seeing machines as objective masters and start treating them as we would any other source of power. By questioning their decisions; scrutinizing their motives; acknowledging our emotions; demanding to know who stands to benefit; holding them accountable for their mistakes; and refusing to become complacent. I think this is the key to a future where the net effect of algorithms is a positive force for society. And it’s only right that it’s a job that rests squarely on our shoulders. Because one thing is for sure. In the age of the algorithm, humans have never been more important.