Epilogue

We started writing this book in 2019. Many people, including some governments, were becoming aware of the challenges posed by AI technologies, and the movement to regulate them so as to minimize their risks was gaining momentum. The prospects of being subject to algorithmic decisions that are hard to challenge or even understand, of losing your job to an AI, or of being driven by driverless cars were, and still are, matters of great interest and concern.

But we started writing this book when none of us had heard the term “COVID-19.”

As we put the finishing touches on this book, we look out on a world still in the midst of a pandemic. The numbers infected are approaching one and a half million, with 75,000 deaths. Entire countries are locked down, with an estimated one-third of the world’s population under some kind of restriction, many largely confined to their homes. Emergency laws are being rushed into effect and desperate measures being taken to plug gaps in health care provision and forestall economic collapse.

It’s a fool’s errand to try to predict the nature and scale of the changes that COVID-19 will leave behind. It seems safe to say, though, that for many people, societies, and governments, the epidemic of 2020 will lead to a significant re-evaluation of priorities.

These changed priorities will be reflected in regulatory responses. In late March 2020, the Financial Times quoted an anonymous source “with direct knowledge of the European Commission’s thinking” as saying that “the EU is not backtracking yet on its position but it is thinking more actively about the unintended consequences of what they have proposed in the white paper on AI.”1

We can only speculate about what unintended consequences they have in mind, but it’s easy to imagine that the balance between safety and privacy may be struck differently in the context of the present crisis. Whether AIs can do jobs just as well as humans may be seen as less important than whether they can offer some sort of help when there are no humans available. And rigorous testing and scrutiny of algorithmic tools may seem like a luxury when they promise the chance of early detection or contact tracing of infected people, triaging scarce resources, perhaps even identifying treatments.

It’s understandable—inevitable, really—that in the immediate grip of this crisis, attention is focused on whatever can slow the disease’s spread and provide essential treatment for those in need. But as we emerge from months of lockdown into potentially far longer periods of restriction, surveillance, and rationing, hard questions about privacy, safety, equity and dignity will have to be asked. As governments and police assume new powers, and technology is rapidly pressed into service to track and monitor our movements and contacts, questions about accountability and democracy can’t be ignored. As we noted in the prologue, the technologies currently being used to contact-trace infected people—and, perhaps more controversially, to police quarantine compliance—won’t disappear once the crisis has passed. It is naïve to expect governments, police forces, and private companies to hand those powers back.

The future of work, too, might look very different. The COVID-19 crisis has taught us how vital many precarious workers—delivery drivers, cleaners, shelf-stackers—are to the functioning of our societies. Yet as companies scramble for survival in what seems very likely to be a prolonged recession, it is those very workers whose livelihoods could be endangered if technological replacement seems more economically viable. As we write, Spain has announced its intention to introduce a permanent universal basic income, a measure that only months ago was widely viewed as experimental and likely unaffordable.2 All around us, the scarcely imaginable is fast becoming the seriously entertained.

Decisions about how we respond to these challenges will need to be informed by technical, ethical, legal, economic and other expertise. The populist refrain that “we’ve heard enough from experts” is surely going to fade, at least for a while. But these sorts of decisions can’t be made just by experts. Technocracy is not democracy. Neither is an oligarchy of wealthy tech entrepreneurs.

If we are to leverage the benefits of AI technologies while side-stepping the pitfalls, it’s going to take vigilance. Not just vigilance from governments and regulators, though we firmly believe that this will be necessary. And not just vigilance from activists and academics, though we hope that we’ve made a modest contribution in this book. But vigilance from all sectors of society: from those who are going to be on the sharp end of algorithmic decisions and those whose jobs are going to be changed or replaced altogether.

It’s a tall order for people who are already struggling with financial insecurity, discrimination, state oppression, or exploitative employer practices to take the time to learn about something like artificial intelligence. The field is moving at a discombobulating pace. But for those citizens who do have the resources and inclination, we wish you well. Your role as citizen scrutinizers will be essential to keeping the technology and its applications fair. We hope this book will support your efforts.

Notes

  1. Javier Espinoza, “Coronavirus Prompts Delays and Overhaul of EU Digital Strategy,” Financial Times, March 22, 2020.

  2. Pascale Davies, “Spain Plans Universal Basic Income to Fix Coronavirus Economic Crisis,” Forbes, April 6, 2020, https://www.forbes.com/sites/pascaledavies/2020/04/06/spain-aims-to-roll-out-universal-basic-income-to-fix-coronavirus-economic-crisis/#68d9f7474b35.