SIXTEEN

Algorithmic Injustice

‘We cannot know why the world suffers. But we can know how the world decides that suffering shall come to some persons and not to others.’

Guido Calabresi and Philip Bobbitt, Tragic Choices (1978)

In the digital lifeworld, social engineering and software engineering will become increasingly hard to distinguish. This is for two reasons. First, as time goes on, algorithms will increasingly be used (along with the market and the state) to determine the distribution of important social goods, including work, loans, housing, and insurance. Second, algorithms will increasingly be used (along with laws and norms) to identify, rank, sort, and order human beings. Distribution and recognition are the substance of social justice, and in the future they’ll increasingly be entrusted to code. That’s been the subject of the last two chapters. To put it mildly, it’s an important development in the political life of humankind. It means that code can be used to reduce injustice or it can reproduce old iniquities and generate new ones. Justice in the digital lifeworld will depend, to a large degree, on the algorithms that are used and how they are applied. I speak of the application of algorithms because often it’s the alchemy of algorithms and data together that yields unjust results, not algorithms alone. This is explained further below.

In the next few pages I offer a broad framework for thinking about algorithmic injustice: where an application of an algorithm yields consequences that are unjust. We start with two main types of algorithmic injustice: data-based injustice and rule-based injustice. Then we examine what I call the neutrality fallacy, the flawed idea that what we need are algorithms that are neutral or impartial. At the end of the chapter we see that, lurking behind all the technology, most algorithmic injustice can actually be traced back to the actions and decisions of people, from software engineers to users of Google search.

The Rough and Ready Test

Before getting down to specifics, there’s an easy way of telling whether a particular application of an algorithm is just or unjust: Does it deliver results that are consistent or inconsistent with a relevant principle of justice? Take the example of an algorithm used to calculate health insurance premiums. To be consistent with sufficientarian principles, for instance, such an algorithm would have to focus resources on the most deprived parts of the community, ensuring that they had an acceptable minimum standard of coverage. If instead it demanded higher premiums from those with conditions prevalent among the poor, then obviously its effect would be to make it harder for the poor to secure cover. From a sufficientarian perspective, therefore, that application would be unjust. Simple! This rough and ready test takes a consequentialist approach, meaning it doesn’t try to assess whether an application of code is inherently virtuous or right. Nor does it require close technical analysis of the algorithm itself. It simply asks whether an application of an algorithm generates results that can be reconciled with a given principle of justice. This is just one way of assessing algorithmic injustice. One of the tasks for political theorists will be to find more.
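For the technically minded, the test can even be run mechanically, at least in sketch form. The Python snippet below is a minimal illustration, not any real insurer’s system: the premiums, incomes, affordability threshold, and function names are all invented. It checks an algorithm’s outputs against a sufficientarian floor without peering inside the algorithm at all.

```python
# A minimal sketch of the rough and ready test: check an algorithm's *outputs*
# against a chosen principle of justice, without inspecting the algorithm itself.
# The premiums, incomes, and affordability threshold are hypothetical.

applicants = [
    {"income": 14_000, "premium": 2_600},   # low-income applicant
    {"income": 16_500, "premium": 2_900},
    {"income": 72_000, "premium": 1_100},   # high-income applicant
    {"income": 95_000, "premium": 1_300},
]

AFFORDABLE_SHARE = 0.10  # sufficientarian floor: cover should cost no more than 10% of income

def violates_sufficientarian_floor(person):
    """True if the quoted premium prices this person out of minimum coverage."""
    return person["premium"] > AFFORDABLE_SHARE * person["income"]

priced_out = [p for p in applicants if violates_sufficientarian_floor(p)]
print(f"{len(priced_out)} of {len(applicants)} applicants cannot afford minimum cover")
# If those priced out are concentrated among the least well-off, the application
# fails the test from a sufficientarian perspective, whatever its internals look like.
```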

‘Algorithmic Discrimination’

Different types of algorithmic injustice are sometimes lumped together under the name ‘algorithmic discrimination’. I avoid this term, along with the term algorithmic bias, because it can lead to confusion. Discrimination is a subtle concept with at least three acceptable meanings. The first is neutral, referring to the process of drawing distinctions between one thing and another. (If I say you are a highly discriminating art critic, I am praising your acuity and not calling you a bigot.) The second relates to the drawing of apparently unjust distinctions between groups—like a father who refuses to allow his child to associate with children of other ethnic backgrounds. The third sense is legalistic, describing rules or acts contrary to a particular law that prohibits the less favourable treatment of specified groups. In my work as a barrister, I often act in cases involving allegations of discrimination.

It’s easy to see from these distinctions that not all discrimination is unjust, and not all injustice is unlawful. In English law, for instance, employers are prohibited from discriminating on the basis of age, disability, gender reassignment, pregnancy and maternity, race, religion and belief, sex, and sexual orientation—but they can discriminate on the basis of class (so long as that doesn’t fall foul of some other law). Class-based discrimination is not illegal but it’s arguably still unjust. The distinction is important because journalists and lawyers too often reduce issues of algorithmic injustice to a question of legalistic discrimination—is this application of code permissible under US or European law? Important though this sort of question is, the approach of political theory is broader. We need to ask not only what is legal, but what is just; not only what the law is but also what it should be. So let’s take a step back and look at the bigger picture.

There are two main categories of algorithmic injustice: data-based injustice and rule-based injustice.

Data-Based Injustice

Injustice can occur when an algorithm is applied to data that is poorly selected, incomplete, outdated, or subject to selection bias.1 Bad data is a particular problem for machine learning algorithms, which can only ‘learn’ from the data to which they are applied. Algorithms trained to identify human faces, for instance, will struggle or fail to recognize the faces of non-white people if they are trained using majority-white faces.2 Voice-recognition algorithms will not ‘hear’ women’s voices if they are trained from datasets with too many male voices.3 Even an algorithm trained to judge human beauty on the basis of allegedly ‘neutral’ characteristics like facial symmetry, wrinkles, and youthfulness will develop a taste for Caucasian features if it is trained using mostly white faces. In a recent competition, 600,000 entrants from around the world sent selfies to be judged by machine learning algorithms. Of the forty-four faces judged to be the most attractive, all but six were white. Just one had visibly dark skin.4 The image-hosting website Flickr autotagged photographs of black people as ‘animal’ and ‘ape’ and pictures of concentration camps as ‘sport’ and ‘jungle gym’.5 Google’s Photos algorithm tagged two black people as ‘Gorillas’.6 No matter how smart an algorithm is, if it is fed a partial or misleading view of the world it will treat unjustly those who have been hidden from its view or presented in an unfair light. This is data-based injustice.
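To see how little ‘intent’ is needed for this to happen, consider a toy sketch. It is not any real vision system: the two groups, the single ‘contrast’ feature, and every number are invented. But it shows how a detector trained on data that is 95 per cent group A learns a threshold that misses most of group B.

```python
# A toy illustration (not a real vision system) of data-based injustice.
# A 'detector' learns a single threshold on an image-contrast feature from a
# training set that is 95% group A. Group B faces show lower contrast, so the
# learned threshold misses many of them. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

def sample(group, n_faces, n_background):
    """Synthetic contrast values: face patches vs. background patches."""
    face_contrast = {"A": 2.0, "B": 0.8}[group]   # group B faces photograph with less contrast
    faces = rng.normal(face_contrast, 0.25, n_faces)
    background = rng.normal(0.0, 0.25, n_background)
    X = np.concatenate([faces, background])
    y = np.concatenate([np.ones(n_faces), np.zeros(n_background)])
    return X, y

# Training data: overwhelmingly group A.
XA, yA = sample("A", 950, 950)
XB, yB = sample("B", 50, 50)
X_train = np.concatenate([XA, XB])
y_train = np.concatenate([yA, yB])

# 'Learning': pick the threshold midway between the mean face and mean background contrast.
threshold = (X_train[y_train == 1].mean() + X_train[y_train == 0].mean()) / 2
print(f"learned threshold: {threshold:.2f}")

# Evaluate on balanced test sets for each group.
for group in ("A", "B"):
    X_test, y_test = sample(group, 500, 500)
    predictions = (X_test > threshold).astype(float)
    miss_rate = ((predictions == 0) & (y_test == 1)).sum() / (y_test == 1).sum()
    print(f"group {group}: share of faces the detector fails to find = {miss_rate:.1%}")
```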

Rule-Based Injustice

Even when the data is not poorly selected, incomplete, outdated, or subject to selection bias, injustice can result when an algorithm applies unjust rules. There are two types: those that are overtly unjust, and those that are implicitly unjust.

Overtly Unjust Rules

An overtly unjust rule is one that is used to decide questions of distribution and recognition according to criteria that (in that context) are unjust on their very face. A robot waiter programmed to refuse to serve Muslim people because they are Muslim; a security system programmed to target black people because they are black; a resumé-processing system programmed to reject female candidates because they are female—these are all applications of overtly unjust criteria. What makes them overtly unjust is that there is no principled connection between the personal characteristic being singled out (religion; race; sex) and the resulting denial of distribution or recognition (a plate of food; access to a building; a job).

Overtly unjust rules are most obvious when they relate to characteristics like race and sex that have typically been the basis of oppression in the past, and which are irrelevant to the context in which the rule is being applied. But there are many other criteria that might also result in injustice. Take ugliness: if I own a nightclub, would it be unjust to install an automated entry system (call it robobouncer) that scans people’s faces and only grants entry to those of sufficient beauty? Real-life bouncers do this all the time. Would it be unjust for a recruitment algorithm to reject candidates on the basis of their credit scores, notwithstanding their qualification for the job? In turn, would it be unjust for a credit-scoring algorithm to give certain individuals a higher score if they are friends with affluent people on Facebook?7 These examples might not amount to discrimination under the current law, but they are arguably still unjust, in that they determine access to an important social good by reference to criteria other than the directly relevant attributes of the individual.

Rules can be overtly unjust in various ways. There’s the problem of arbitrariness, where there’s no relationship between the criterion being applied and the thing being sought. Or they fall foul of the group membership fallacy: the fact that I am a member of a group that tends to have a particular characteristic does not necessarily mean that I share that characteristic (a point sometimes lost on probabilistic machine learning approaches). There’s the entrenchment problem: it may well be true that students from higher-income families are more likely to get better grades at university, but using family income as an admission criterion would obviously entrench the educational inequality that already exists.8 There’s the correlation/causation fallacy: the data may tell you that people who play golf tend to do better in business, but that does not mean that business success is caused by playing golf (and to hire on that basis might contradict a principle of justice which says that hiring should be done on merit). These are just a few examples—but given what we know of human ignorance and prejudice, we can be sure they aren’t the only ones.

Implicitly Unjust Rules

An implicitly unjust rule is one that does not single out any specific person or group for maltreatment but which has the indirect effect of treating certain groups less favourably than others. A rule of recruitment that required candidates to be six feet in height with a prominent Adam’s apple would obviously treat women less favourably than men, despite the fact that it makes no mention of sex.

Implicitly unjust rules are sometimes used as fig-leaves for overt sexism or racism, but often injustice is an unwanted side-effect. Imagine, for instance, a recruitment algorithm for software engineers that gives preference to candidates who began coding before the age of eighteen. This rule appears justifiable if you believe, as many do, that early experience is a good indicator of proficiency later in life. It does not directly single out any social group for less favourable treatment and is therefore not overtly unjust. But in practice it would likely damage the prospects for female candidates who, because of cultural and generational factors, may not have been exposed to computer science at a young age. And it would limit the chances of older candidates who did not grow up with personal computers at home. Thus a rule that seems reasonable can indirectly place certain groups at a disadvantage.9
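One way to catch this sort of rule is to audit its outcomes group by group. Here is a minimal sketch, with hypothetical candidates and the hypothetical ‘coded before eighteen’ screen; the four-fifths comparison at the end is just one common rule of thumb, not a definitive legal test.

```python
# A minimal sketch of auditing a facially neutral screening rule for disparate
# impact. The candidate records and the 'coded before eighteen' rule are
# hypothetical; the four-fifths comparison is a rule of thumb, not a legal test.

candidates = [
    # (group, age at which the candidate started coding)
    ("men",   14), ("men", 15), ("men", 16), ("men", 17), ("men", 22),
    ("women", 16), ("women", 19), ("women", 21), ("women", 23), ("women", 24),
]

def passes_screen(age_started):
    """The facially neutral rule: prefer those who began coding before eighteen."""
    return age_started < 18

def selection_rate(group):
    pool = [age for g, age in candidates if g == group]
    return sum(passes_screen(age) for age in pool) / len(pool)

rates = {g: selection_rate(g) for g in ("men", "women")}
print(rates)

FOUR_FIFTHS = 0.8   # common rule of thumb for flagging disparate impact
ratio = min(rates.values()) / max(rates.values())
print(f"selection-rate ratio: {ratio:.2f}")
if ratio < FOUR_FIFTHS:
    print("the facially neutral rule hits one group much harder than the other")
```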

Unjust rules can be subtle in form. Let’s return for a moment to face-identification algorithms, discussed earlier in the context of data-based injustice. These will be common in the digital lifeworld as powerful systems identify and interact with us on a daily basis. Think of the horror of trying to interact with a digital system that failed to recognize your face because of its colour, or because of features like scarring or disfigurement. Yet even the simple business of locating a human face, let alone identifying it as belonging to a particular person, can be fraught with peril. One method is to use algorithms that locate ‘a pattern of dark and light “blobs”, indicating the eyes, cheekbones, and nose’. But since this approach depends on the contrast between the colour of a face and the whites of its eyes, it could present difficulties for certain racial groups depending on the lighting. An ‘edge detection’ approach instead tries to draw distinctions between a face and its surrounding background. Again, this might still pose problems, depending on the colour of the person’s face and the colour of the backdrop. Yet another approach is to detect faces using a pre-programmed palette of possible skin colours—a technique that might be less conducive to offensive outcomes but which would require considerable sensitivity on the part of the programmer in defining what colours count as ‘skin’ colours.10 You might ask what all the fuss is about, but if a human consistently failed to recognize certain groups because of their race, it would be seen as an obvious failure of recognition and an affront to the dignity of the person on the receiving end. Whether it is worse to be treated offensively by a machine or by a human is not immediately obvious. Both are matters of justice.
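To make the last of these approaches concrete, here is a deliberately crude sketch of a ‘skin palette’ detector. The RGB bounds are invented, which is precisely the point: whoever writes those bounds decides, in effect, which complexions the system can see.

```python
# A simplified sketch of the 'skin palette' approach: classify a pixel as skin
# if it falls inside a pre-programmed colour range. The RGB bounds below are
# invented for illustration only.
import numpy as np

# Hypothetical palette: lower and upper RGB bounds chosen by the programmer.
SKIN_LOWER = np.array([120,  70,  60])
SKIN_UPPER = np.array([255, 190, 170])

def skin_mask(image):
    """image: H x W x 3 uint8 array. Returns a boolean mask of 'skin' pixels."""
    return np.all((image >= SKIN_LOWER) & (image <= SKIN_UPPER), axis=-1)

# Two one-pixel 'images' standing in for lighter and darker skin tones.
lighter = np.array([[[220, 170, 150]]], dtype=np.uint8)
darker  = np.array([[[ 90,  60,  50]]], dtype=np.uint8)

print("lighter tone detected as skin:", bool(skin_mask(lighter)[0, 0]))   # True
print("darker tone detected as skin: ", bool(skin_mask(darker)[0, 0]))    # False: outside the palette
```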

Take another tricky example. The Princeton Review offers online SAT tutoring courses and its software charges different prices to students depending on their zip code. The aim seems to be that the rich should pay more for private tuition. In more deprived areas the course might cost $6,000, while in wealthier ones it could be up to $8,400. On its face, this rule might be justifiable according to recognized principles of justice: it gives priority to the less well-off and appears to encourage equality of opportunity. But it also has the side-effect of charging more to Asian-American students, who (in statistical terms) tend to be concentrated in wealthier areas and are therefore almost twice as likely as other groups to be charged a higher price.11 What makes this a tricky example is that it demands a trade-off between two conflicting principles of justice: is it more important that we give educational advantages to the poor, or that we avoid treating certain ethnic groups less favourably than others? Your answer may reasonably differ from mine.

The example in the previous paragraph serves as a helpful reminder that not all rules drawing distinctions between groups are necessarily unjust, even if they treat some less favourably than others. Sometimes ‘discrimination’ can be justified according to principles of justice. A few years ago, trying to decide what to do with my life, I considered the idea of joining the army. Like many delusional young men I was drawn to the idea of joining an élite British regiment, the Special Air Service (SAS). Pen and notepad in hand, I eagerly sat down to study the SAS recruitment website. There I learned that ‘Many try to get into the Special Air Service regiment. Most of them fail. Out of an average intake of 125 candidates, the gruelling selection process will weed out all but 10.’ ‘Excellent,’ I told myself, ‘Sounds like a challenge.’ Reading on, I learned that the selection process involves three stages. The first is ‘endurance’, a three-week fitness and survival test ending with a 40-mile schlep, carrying a 55lb (25 kg) backpack, through the notoriously hostile terrain of the Brecon Beacons and Black Mountains of South Wales. Those who survive can then look forward to the ‘jungle training’ stage in Belize, which ‘weeds out those who can’t handle the discipline’ in order to find ‘men who can work under relentless pressure, in horrendous environments for weeks on end’. Sounds great. After their sojourn in Belize, cadets can relax in the knowledge that the third stage of the process, ‘Escape, Evasion, and Tactical Questioning’, only involves being brutally interrogated and made to stand in ‘stress positions’ for hours at a time with white noise ‘blasted’ at them. ‘Female interrogators’, the website patiently explains, may even ‘laugh at the size of their subject’s manhood’.

I became a lawyer instead.

The SAS recruitment process is plainly discriminatory in the neutral sense of the word. It distinguishes, first and foremost, between genuine warriors and corpulent couch-dwellers who enjoy the thought of wearing a cool uniform without any of the effort. More directly, in seeking ‘men’ with the requisite qualities it entirely excludes women from the opportunity to serve in the unit. The first distinction is plainly a legitimate form of ‘discrimination’. The second is debatable: some might say if women can pass the stages, why shouldn’t they join the SAS? Women now play combat roles in various armies around the world. My point, besides the fact that most lawyers make lousy soldiers, is that identifying ‘discrimination’ is only the start of the conversation. That’s why, to labour the point, the term isn’t always helpful. The real question for an implicitly or overtly unjust rule is always whether its results can be justified according to principles of justice.

The Neutrality Fallacy

One of the most frustrating things about algorithms is that even when they apply rules that are studiously neutral between groups they can still result in injustice. How? Because neutral rules reproduce and entrench injustices that already exist in the world.

If you search for an African-American sounding name on Google you are more likely to see an advertisement for instantcheckmate.com, a website offering criminal background checks, than if you enter a non-African-American sounding name.12 Why is this? It could be because Google or instantcheckmate.com have applied an overtly unjust rule that says African-American sounding names should trigger advertisements for criminal background checks. Unsurprisingly, both Google and instantcheckmate.com strongly deny this. What instead seems to be happening—although we can’t know for sure—is that Google decides which advertisements should be displayed by applying a neutral rule: if people who enter search term X tend to click on advertisement Y, then advertisement Y should be displayed more prominently to those who enter search term X. The resulting injustice is not caused by an overtly unjust rule or poor quality data: instead we get a racist result because people’s previous searches and clicks have themselves exhibited racist patterns.
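In stylized form, and with entirely fabricated click counts and example names, the rule might look something like the sketch below. Nothing in the code mentions race; it simply replays whatever patterns the historical clicks contain, and then reinforces them.

```python
# A stylized sketch of the kind of 'neutral' rule described above (not Google's
# actual system): show whichever advertisement users who typed a given query
# have clicked most often in the past. The names and click counts are invented.
from collections import defaultdict

# clicks[query][ad] = number of historical clicks (hypothetical figures)
clicks = defaultdict(lambda: defaultdict(int))
clicks["latanya"]["criminal background check"] = 180
clicks["latanya"]["professional profile"] = 120
clicks["kristen"]["criminal background check"] = 40
clicks["kristen"]["professional profile"] = 260

def choose_ad(query):
    """The neutral rule: pick the ad with the most clicks for this query so far."""
    history = clicks[query]
    return max(history, key=history.get)

for name in ("latanya", "kristen"):
    ad = choose_ad(name)
    print(f"search '{name}' -> show: {ad}")
    clicks[name][ad] += 1   # showing it more prominently attracts more clicks,
                            # which entrenches the pattern (a feedback loop)
```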

Something similar happens if you use Google’s autocomplete system, which offers full questions in response to the first few words typed in. If you type ‘Why do gay guys . . . ’, Google offers the completed question, ‘Why do gay guys have weird voices?’ One study shows that ‘relatively high proportions’ of the autocompleted questions about black people, gay people, and males are ‘negative’ in nature:13

For black people, these questions involved constructions of them as lazy, criminal, cheating, under-achieving and suffering from various conditions such as dry skin or fibroids. Gay people were negatively constructed as contracting AIDS, going to hell, not deserving equal rights, having high voices or talking like girls.

These are clear cases of algorithmic injustice. A system that propagates negative stereotypes about certain groups cannot be said to be treating them with equal status and esteem. And it can have distributive consequences too. For instance, more advertisements for high-income jobs are shown to men than to women. This necessarily means an expansion in economic opportunity for men and a contraction in such opportunity for women.14

What appears to be happening in these cases is that ‘neutral’ algorithms, applied to statistically representative data, reproduce injustices that already exist in the world. Google’s algorithm autocompletes the question ‘Why do women . . .’ to ‘Why do women talk so much?’ because so many users have asked it in the past. It raises a mirror to our own prejudices.

Take a different set of examples that are likely to grow in importance as time goes on: the ‘reputation systems’ that help to determine people’s access to social goods like housing or jobs on the basis of how other people have rated them. Airbnb and Uber, leading lights of the ‘sharing economy’, rely on reputation systems of this kind. There are also ways of rating professors, hotels, tenants, restaurants, books, TV shows, songs, and just about anything else capable of quantification. The point of reputation systems is that they allow us to assess strangers on the basis of what other people have said about them. As Tom Slee puts it, ‘reputation is the social distillation of other people’s opinion.’15 You would tend to trust an Airbnb host with a five-star rating more than an alternative with only two stars.

Reputation systems are relatively young and are likely to become more common in the digital lifeworld. Services like reputation.com already exist to help you get better scores.16 I’ve suggested the possibility that our access to goods and services in the digital lifeworld might eventually be determined by what others think of us. Recall the Chinese example: more than three dozen local governments are compiling digital records of social and financial behaviour in order to rate their citizens, so that ‘the trustworthy’ can ‘roam everywhere under heaven’ while making it hard for the ‘discredited’ to ‘take a single step’.17

The algorithms that aggregate and summarize people’s ratings are generally said to be neutral. People do the rating; the algorithm just tots up the ratings into an overall score. The problem is that even if the algorithms are neutral, there’s ample evidence to suggest that the humans doing the actual scoring are not.18 One study shows that on Airbnb, applications from guests with distinctively African-American names are 16 per cent less likely to be accepted as compared with identical guests with distinctively white names. This is true among landlords big and small, from individual property owners to larger enterprises with property portfolios.19 As Tom Slee observes:20

reputation . . . makes it difficult for John to gain a good reputation—no matter how trustworthy he is—if he is a black man trying to find work in a white community with a history of racism, or difficult for Jane the Plumber’s skills to be taken seriously if the community has traditional norms about women’s roles.

So it is that neutral algorithms can reproduce and institutionalize injustices that already exist in the world.
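The arithmetic could hardly be simpler. Here is a minimal sketch, with fabricated ratings for two hosts of identical quality, of how a plain average passes prejudice straight through:

```python
# A minimal sketch of how a perfectly neutral aggregation, a plain average,
# passes human prejudice straight into a reputation score. The ratings are
# fabricated: two hosts of identical quality, but some raters knock a star
# off one of them out of prejudice.

def reputation(ratings):
    """The 'neutral' algorithm: just average whatever the raters submitted."""
    return sum(ratings) / len(ratings)

ratings_host_a = [5, 5, 4, 5, 5]      # rated on the merits
ratings_host_b = [4, 4, 3, 5, 4]      # same merits, prejudiced raters

print(f"host A score: {reputation(ratings_host_a):.1f}")   # 4.8
print(f"host B score: {reputation(ratings_host_b):.1f}")   # 4.0 -- the algorithm is
                                                            # neutral; its inputs are not
```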

As time goes on, digital systems learning from humans will pick up on even the most subtle of injustices. Recently a neural network unleashed on a database of 3 million English words learned to answer simple analogy problems. Asked Paris is to France as Tokyo is to [?], the system correctly responded Japan. But asked man is to computer programmer as woman is to [?] the system’s response was homemaker. Asked Father is to doctor as mother is to [?], the system replied nurse. He is to architect was met with she is to interior designer. This study revealed something shocking but, on reflection, not all that surprising: that the way humans use language reflects unjust gender stereotypes. So long as digital systems ‘learn’ from flawed, messy, imperfect humans, we can expect neutral algorithms to result in more injustice.21
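For those curious about the mechanics, analogy questions of this kind are typically answered by vector arithmetic over word embeddings: take the vector for ‘programmer’, subtract ‘man’, add ‘woman’, and return the nearest remaining word. The five-dimensional vectors below are invented purely for illustration; the study described above used embeddings trained on a corpus of some three million words and phrases.

```python
# A toy reconstruction of embedding-based analogies: answer 'a is to b as c is
# to ?' by finding the word nearest to vec(b) - vec(a) + vec(c). These tiny
# vectors are invented; real systems learn them from large text corpora.
import numpy as np

embeddings = {
    "man":        np.array([ 1.0, 0.0, 0.2, 0.1, 0.0]),
    "woman":      np.array([-1.0, 0.0, 0.2, 0.1, 0.0]),
    "programmer": np.array([ 0.9, 0.8, 0.1, 0.0, 0.3]),
    "homemaker":  np.array([-0.9, 0.8, 0.1, 0.0, 0.3]),
    "doctor":     np.array([ 0.7, 0.1, 0.9, 0.2, 0.1]),
    "nurse":      np.array([-0.7, 0.1, 0.9, 0.2, 0.1]),
}

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by nearest cosine neighbour."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    candidates = {w: v for w, v in embeddings.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "programmer", "woman"))   # 'homemaker': the vectors encode a
                                               # gendered direction learned from how
                                               # humans actually use language
```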

These examples are troubling because they challenge an instinct shared by many—particularly, I have noticed, in tech circles—which is that a rule is just if it treats everyone the same. I call this the neutrality fallacy. To be fair, it has a long history. It originates with the Enlightenment ideal of universality—the idea that differences between people should be treated as irrelevant in the public sphere of politics.22 This ideal evolved into the contemporary belief that rules should be impartial as between people and groups. Iris Marion Young, writing at the end of the twentieth century, described impartiality as ‘the hallmark of moral reason’ at that time.23 Those who unthinkingly adopt the neutrality fallacy tend to assume that code offers an exciting prospect for justice precisely because it can be used to apply rules that are impersonal, objective, and dispassionate. Code, they say, is free from the passions, prejudices, and ideological commitments that lurk inside every flawed human heart. Digital systems might finally provide the ‘view from nowhere’ that philosophers have sought for so long.24

The flaw in this thinking is that neutrality is not always the same as justice. Yes, in some contexts it’s important to be neutral as between groups, as when a judge decides between two conflicting accounts of the same event. But the examples above show that treating disadvantaged groups the same as everyone else can in fact reproduce, entrench, and even generate injustice. The Nobel Peace Prize winner Desmond Tutu once remarked, ‘If an elephant has its foot on the tail of a mouse and you say that you are neutral, the mouse will not appreciate your neutrality.’ His point was that a neutral rule can easily be an implicitly unjust rule. To add insult to injury, the neutrality fallacy gives these instances of injustice the veneer of objectivity, making them seem natural and inevitable when they are not.

The lesson for technologists is that justice sometimes demands that different groups be treated differently. This idea underpins affirmative action and the subsidizing of minority arts. And it should underpin all our efforts to avoid algorithmic injustice. An application of code should be judged by whether the results it generates are consistent with a relevant principle of justice, not by whether the algorithm in question is neutral as between persons. ‘Neutrality,’ taught the Nobel laureate Elie Wiesel, ‘helps the oppressor, never the victim.’

A Well-Coded Society

Algorithmic injustice already seems to be everywhere. But just think about the sheer amount of code there will be in the digital lifeworld, the awesome responsibilities with which it will be entrusted, and the extensive role it will play in social and economic life. We’re used to online application forms, supermarket self-checkout systems, biometric passport gates in airports, smartphone fingerprint scanners, and early AI personal assistants like Siri and Alexa. But the future will bring an unending number of daily interactions with systems that are radically more advanced. Many will have a physical, virtual, or holographic presence. Some will have human- or animal-like qualities designed to build empathy and rapport.25 This will make it all the more hurtful when they disrespect, ignore, or insult us.

As the reach and remit of code grows, so too does the risk of algorithmic injustice.

If we’re going to grant algorithms more control over matters of distribution and recognition, we need to be vigilant. But it can sometimes be hard to know why a particular application of code has resulted in injustice. In the past, discriminatory intent was hidden in people’s hearts. In the future it may be hidden deep within machine learning algorithms of mind-boggling size and complexity. Or it might be locked up in a ‘black box’ of code protected by confidentiality laws.26

Another difficulty is that potential injustice seems to lurk everywhere—in bad data, in implicitly unjust rules, even in neutral rules—waiting to catch us out. Regrettably so. But the responsibility falls to people to create a world in which code is an engine of opportunity and not injustice. It’s too easy to treat algorithms, particularly machine learning algorithms, as disembodied forces with their own moral agency. They’re not. The fact that machines can increasingly ‘learn’ does not absolve us of the responsibility to ‘teach’ them the difference between justice and injustice. Until AI systems become independent of human control, and perhaps even then, it’ll be for humans to monitor and prevent algorithmic injustice. This work cannot be left to lawyers and political theorists. Responsibility will increasingly lie with those who gather the data, build the systems, and apply the rules.

Like it or not, software engineers will increasingly be the social engineers of the digital lifeworld. It’s an immense responsibility. Unjust applications of code will sometimes creep into digital systems because engineers are unaware of their own personal biases. (This isn’t necessarily their fault. The arc of a computer science degree is long, but it doesn’t necessarily bend toward justice.) Sometimes it’ll be the result of wider problems in the culture and values of firms. At the very least, when it’s learning machines coming up with the rules and models, their outputs must be carefully examined to see if they are overtly or implicitly unjust in that context. Failure to do so will result in algorithmic injustice. In chapter nineteen we look at some other possible measures to avoid such injustice, including the regulation of tech firms and auditing of algorithms. But why not consciously engineer systems with justice in mind—whether equal treatment, equality of opportunity, or whatever other principle might be appropriate to that particular application? Code could offer exciting new prospects for justice, not merely another threat to worry about.

We need a generation of ‘philosophical engineers’ of the kind imagined by Tim Berners-Lee, and this cohort must be more diverse than today’s. It can’t be right that algorithms of justice are entrusted to an engineering community that is overwhelmingly made up of men.27 African-Americans receive about 10 per cent of computer science degrees and account for about 14 per cent of the overall workforce, yet they hold less than 3 per cent of computing occupations in Silicon Valley.28 At the very least, a workforce more representative of the public might mean greater awareness of the social implications of any given application of code.

For now, it’s time to leave algorithmic injustice and turn to another potential challenge for social justice in the digital lifeworld: technological unemployment.