7

Oppression and Resistance

In 2019, I travelled to China to deliver a speech at the UNESCO International Conference on Artificial Intelligence and Education. It gathered business leaders and ministers from around the world under the UN umbrella to determine how AI can improve education, whether in smart classrooms or through education individually tailored to the needs, personality and abilities of each student. Such personalized education can use AI to analyse how a student responds to different pedagogical approaches and so identify the most effective methodology for them.

The delegates were enthusiastic and willing to discuss limits and boundaries. Many countries – including China, Namibia, Russia and Argentina – agreed that AI should augment, not replace, teachers and that AI should also be ethical and respect humanity. Interestingly, the Chinese minister for education highlighted how new technologies have enabled the country to lift many out of poverty and illiteracy.

However, by the end of the conference, once the principles had been approved, with humanity, ethics and the social good at the very heart of them, it seemed that no newspaper had reported on it. Instead, the next day, the China Daily dedicated three pages to a large event in Shenzhen at which world business leaders discussed the practical applications of AI… with no ethics in sight.

Reading those pages as I left Beijing, I felt, once again, that the arena of AI ethics was an important place to be – but that merely occupying that space was not enough. Ethics is critical. There is nothing neutral about AI, its development, its use and the selection of data that its algorithms are fed. Exchanging pleasantries about ethics is useless if what we really want is for those global business leaders in Shenzhen to transform their companies’ use of AI.

If we do not establish real regulation of AI now, we will soon come to regret it. The big tech companies were harvesting our data while privacy law was still in its infancy, and we are already seeing the consequences and paying the price for our inaction.

The connection between AI and corporate power is already solid, and we see it in the link between academia and corporations involved in AI.97 The 2018 AI Index statistics show that the number of corporate-affiliated AI papers has grown significantly in recent years.98 The start-up scene is also vibrant, with most new companies supported by venture-capital firms, which themselves then hope to be hoovered up by big tech.

AI ethics is not immune to the close relationship with corporate power either. We have seen a proliferation of principles, guidance, processes and frameworks established by big corporations to reassure consumers that they take an ethical approach to innovation (whether or not they actually do).99 There is a lot of discussion surrounding the ethics of AI, and the terminology that goes along with this is as enticing as it is pompous: trustworthy AI, responsible AI, ethical AI, AI for good.

The idea prevails that AI needs to be ethical and respond to normative tenets that we, society, determine as being beneficial to us and for the progress of the world we inhabit.

Thousands of people, myself included, have dedicated their professional careers to advocating for more ethical AI practices, but how many more conferences do we need to restate these principles, to once again recognize fairness and argue that algorithms are biased and so need to be fixed? I wrote this book expressly to go beyond that. Ethical tourism feels superficially good, but what purpose does it really serve?

Now, when I attend round tables and events to discuss the intended and unintended consequences of AI, I often hear fears around facial recognition, surveillance and control – and the finger gets pointed at China, and often only at China. We know China is developing the Orwellian ‘Social Credit System’, which should be in place by the end of 2020.

China’s Social Credit System rates citizens on their trustworthiness and adherence to rules, including speed limits. There are many systems, not one, and they are operated by companies such as Sesame Credit, run by Ant Financial, the finance arm of Alibaba. To calculate the scores, these systems require a high degree of surveillance, gathering citizen data through social media, smartphones and any connected devices. All this information is then churned through algorithms that work out individual scores. The city of Rongcheng, for example, gives all residents 1,000 points; the authorities deduct points for bad behaviour, such as running red lights, and add points for good behaviour, such as charity donations.
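To make the arithmetic of such a scheme concrete, here is a deliberately simplified sketch in Python. Only the 1,000-point starting balance comes from the Rongcheng reports; the behaviour categories and point values below are invented purely for illustration.

```python
# Toy, hypothetical points ledger. The 1,000-point starting balance mirrors
# the Rongcheng reports; the events and point values are invented for illustration.
STARTING_SCORE = 1000

RULES = {
    "ran_red_light": -5,       # hypothetical deduction for bad behaviour
    "charity_donation": 10,    # hypothetical reward for good behaviour
    "unpaid_fine": -20,        # hypothetical deduction
}

def apply_events(events, rules=RULES, start=STARTING_SCORE):
    """Return the resident's score after applying a sequence of observed events."""
    score = start
    for event in events:
        score += rules.get(event, 0)  # unrecognized events leave the score unchanged
    return score

print(apply_events(["ran_red_light", "charity_donation"]))  # 1005
```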

Through the use of facial recognition, people do not need to be reported or referred for bad or good behaviour; this happens automatically. Allegedly, a lot of acts of kindness are already being seen in the streets – acts that citizens hope will be caught on camera, thus increasing their scores. What citizens get in return for compliant behaviour is higher-achieving schools for their kids, priority for housing assignments and similar privileges. The system is therefore based on three things: social control (or surveillance), reward and punishment. This is not new in China: of the ten cities in the world with the highest number of CCTV cameras, nine are Chinese. (The tenth is London.)

China has created a deeply penetrative surveillance state, and we hear reports about how DNA-, voice- and facial-recognition technology is used to track and target the mostly Muslim Uighurs, a minority in China but the biggest group in Xinjiang, China’s largest and westernmost region. It is reported that at least one million Uighurs are detained in what Beijing calls re-education camps, although some human-rights groups have defined them as concentration camps.100

However, pointing the finger at the Chinese is naive and narrow-minded, because they did not create all of this in isolation. The Americans – some knowingly, some unwittingly – have helped advance this system. As The Intercept magazine reported, the

OpenPower Foundation – a non-profit led by Google and IBM executives with the aim of trying to ‘drive innovation’ – has set up a collaboration between IBM, Chinese company Semptian, and US chip-manufacturer Xilinx. Together, they have worked to advance a breed of microprocessors that enable computers to analyse vast amounts of data more efficiently.101

This is the context of the ethics debate: despite the nationalist mood of the time, big tech is mobile, and global.

Consider this: in a public discussion with Arkady Volozh, the CEO of Yandex, about AI, Russian President Vladimir Putin asserted that ‘whoever becomes the leader in this sphere will become the ruler of the world’.102 Elon Musk later shared Putin’s words on Twitter, adding, ‘Competition for AI superiority at the national level most likely cause of WW3.’103

So how will ethics fit into this global race? Will it be used only as a soporific, to lull consumers into a passive state when it comes to emergent technology? And the big question must never be overlooked: what is AI actually for in the first place? Without considering this question, we are not challenging the brutal power dynamics that underpin AI.

Look at the impact that home technology has had on the lives of women. Many argue that dishwashers and hoovers were instrumental in liberating women from time-consuming household chores. But have domestic machinery and now tech really helped women escape the home realm? Isn’t the opposite true, that the narrative of tech as empowerment has been turned against women, locking us into accepting a disproportionate burden of housework?104 Are smart fridges that tell us when we have run out of butter really reducing the strain on women? And what about devices that clean automatically and super-efficiently? These are questions we must ask ourselves, because the statistics show that women are still allotted more house-cleaning responsibilities than men. Haven’t the expectations placed on women simply become higher? Hasn’t house tech merely become another device to keep women in the house, in a modernized version of their traditional caregiving roles?

This is all because the answer to structural inequality is a political and not a technological one, and technology, in itself, does not prevent us from reiterating pre-existing power hierarchies. This is not to say that tech cannot help – quite the opposite. The real challenge, however, is understanding what tech is for and who it is there to serve.

In July 2019, when the British Conservative MP and Secretary of State for Health and Social Care Matt Hancock announced the deployment of Alexa ‘to allow elderly people, blind people and other patients who cannot easily search for health advice on the internet to access the information through the voice-powered assistant’,105 the news was widely welcomed. The argument was that Alexa can be trained on the information provided on the NHS website (medical information that helps people assess whether or not to seek urgent help), so that patients can instantly ask the software for information and avoid going directly to the overstretched, state-funded doctor. The undeniable truth is that the British National Health Service is chronically underfunded, and the dismantling of public services has been a feature of our society – and many others – in recent years.

Technology can help in reducing inefficiency and paperwork, as well as improving diagnosis. But the NHS is also a realm of technochauvinism. I went on the BBC in July 2019, and, while I praised the idea of technological innovation in a healthcare system that definitely needs improving, I did question whether the ethical impact of this had been thoroughly considered.

As just one concern among many, what about the thousands of women who suffer domestic abuse daily, or the two women who die every week at the hands of angry men, partners and husbands? A doctor’s practice may be one of the few escape routes or ‘excuses’ for an entrapped woman to leave the house. She might go for something unrelated to her abuse and, in the relatively safe space of a doctor’s clinic, confide her domestic suffering. Or someone could ask the right question: a single question, asked by someone trained to spot the signs of abuse, may well offer a lifeline for many women. Let’s not forget that male violence is as serious a cause of death and incapacity for women worldwide as cancer, and a greater one than malaria, road accidents and war.106

Considerations around ethics, to be really transformative, must start by challenging whose ethics we are actually talking about. This book insists that we need to talk about power. The ethics we are discussing must shift the parameters of society as it is now. Our current position with AI reminds me of the feminist marches of the 1960s and 1970s, at which women demanded the right to abortion with the defiant call to arms, ‘Whose body? My body!’ Now, our drumbeat is, ‘Whose AI? My AI!’

The consensus now, at least in the West, seems to be that ethics needs to be taught on computer engineering degrees so that coders appreciate the complexity of the issue.

Let’s think of a real case involving so-called bias. A Microsoft customer was testing a financial-services algorithm that did risk-scoring for loans. They spotted that the model was favouring men. The problem lay in the training data: it was historical, taken from all previously approved loans. As most of the people who had applied for those loans had been men, the algorithm drew the conclusion that men posed a lower risk.107
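A minimal sketch of how that happens, using invented numbers rather than anything from the Microsoft case: when approved male applicants dominate the historical record, a naive score built on that record will favour men even if women repay just as reliably.

```python
from collections import Counter

# Invented historical record of approved loans: mostly male applicants,
# because mostly men applied, not because men repay more reliably.
history = (
    [("male", "repaid")] * 80
    + [("male", "defaulted")] * 10
    + [("female", "repaid")] * 9
    + [("female", "defaulted")] * 1
)

def repayment_rate(gender):
    """Share of this group's historical loans that were repaid."""
    outcomes = [outcome for g, outcome in history if g == gender]
    return sum(o == "repaid" for o in outcomes) / len(outcomes)

# The two groups repay at almost identical rates...
print(round(repayment_rate("male"), 2), round(repayment_rate("female"), 2))  # 0.89 0.9

# ...but a naive score based on how often "people like this applicant" appear
# among past good loans ranks men far higher, simply because men dominate the data.
good_loans = Counter(g for g, outcome in history if outcome == "repaid")
total = sum(good_loans.values())
print({g: round(n / total, 2) for g, n in good_loans.items()})  # {'male': 0.9, 'female': 0.1}
```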

Conscientious developers can make the active choice to mitigate the risk of gender discrimination by removing all clues about gender from every CV. However, it is more complex than this, as other forms of discrimination can, and do, still arise – in relation to postcodes, for example. Postcodes are not just places where people live; they can signal socio-economic background, ethnicity, even sexual orientation (think of a particularly gay-friendly area, for instance). This is to say that tackling the unintended consequences of algorithms is an enormous task, because algorithms reflect the structural problems of our societies.
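A sketch of the proxy problem, again with invented data: the gender column is dropped, yet a simple rule learned from what remains is functionally identical to discriminating on gender, because in this toy example postcode and gender are perfectly correlated.

```python
# Invented data: the 'gender' column will be removed, but postcode acts as a proxy.
applicants = [
    {"gender": "male",   "postcode": "AB1", "approved": True},
    {"gender": "male",   "postcode": "AB1", "approved": True},
    {"gender": "female", "postcode": "CD2", "approved": False},
    {"gender": "female", "postcode": "CD2", "approved": False},
]

# "Blind" the data by dropping gender entirely.
blinded = [{k: v for k, v in row.items() if k != "gender"} for row in applicants]

# A rule learned from the blinded data still reproduces the gendered outcome,
# because postcode and gender are perfectly correlated in this toy example.
by_postcode = {}
for row in blinded:
    by_postcode.setdefault(row["postcode"], []).append(row["approved"])

learned_rule = {pc: sum(vals) / len(vals) > 0.5 for pc, vals in by_postcode.items()}
print(learned_rule)  # {'AB1': True, 'CD2': False} -- gender recovered by proxy
```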

Teaching technical solutions to these problems will never, on its own, solve them entirely.

The fact that there is a whole movement working on standards, benchmarks and all sorts of auditable criteria to incorporate into product development, and to check against, should not come as a surprise either. Confirming compliance will be beneficial, but it also shifts responsibility: useful as it is, compliance does not solve the substantial structural issues underpinning the AI industry.

In theory, we could have a product that meets all the standards and is totally unbiased but is still deployed for the wrong reasons. Or we could have new products that appear progressive but are based on bogus science, products whose deployment could lead to dangerous, unintended consequences.

As I write, it appears that Facebook is funding research on brain-computer interfaces that can pick up thoughts directly from our neurons and translate them into words. The researchers involved claim that they have an algorithm that is already able to do this.108 In the novel 1984, George Orwell wrote that ‘Nothing was your own except the few cubic centimetres inside your skull.’ Well, Facebook and Neuralink seem to suggest that even this sequestered space may become an exposed part of the public domain. Other companies are doing the same, making us think that the last frontier of privacy – our own brain – may one day disappear.

Apart from the fact that many may question the legitimacy of the science behind these claims, what – and whose – interest would these brain-computer interfaces serve? And where will these technologies take us without tight controls? Some argue that paralysed patients could benefit from implants capable of reading directly from their brains, allowing them to perform activities they otherwise couldn’t. However, the risks of these technologies are immense, and frameworks need to be established before moving forward.

Another example of such dubious science is the AI lie detector funded by the European Union. The product, called iBorderCtrl, is allegedly in use at checkpoints in Hungary, Greece and Latvia.109 Scientists and academics have long criticized these ‘deception identifier tools’ for being pseudoscientific and likely to lead to unfair outcomes for individuals.110

How necessary and ethical is a lie detector in any situation? We could theoretically construct a completely unbiased lie detector, but the question remains: what would make its use ethical? That is not just a judgement for ethics classrooms; it is a political argument. No surprise, given that AI is, as we have established, about power, and so we must deal with it politically.

For women, being political means interrogating the essential nature of the inequalities that AI risks embedding even further. MIT, in an attempt to teach ethics to engineering students, recently proposed a curriculum comprising Aristotle, Machiavelli, Bacon, Hobbes, Locke, the Founding Fathers and the Bible.111 Can you spot the problem?