Why do we protect the rights of others? Moral arguments concentrate on it being somehow “wrong” to harm the entity concerned.1 Pragmatic arguments proceed on the basis that protecting others also benefits those who must refrain from doing harm. Moral grounds are an end in themselves; pragmatic ones are a means to an end. The two justifications can apply independently of each other, but they are not mutually exclusive. For instance, it is morally unacceptable to wound another human being without just cause; in addition, it is sensible not to cause wanton harm to a human, lest they (or their family or friends) seek revenge against the wrongdoer.
This chapter concentrates predominantly on the moral reasons why some AI systems might one day be deemed worthy of protection. Moral rights are addressed before legal ones because, as will be suggested below, recognition of the former tends to predate (and indeed to precipitate) the latter. Chapter 5 will discuss additional pragmatic reasons for protecting AI rights, as well as endowing it with responsibilities. Together, these justifications might form the basis for granting AI legal personality, though this is far from the only way of protecting AI’s moral rights.
Granting rights to robots might sound ridiculous. However, protecting AI in this way could be in accordance with widely held moral precepts. Chapter 4 aims to answer three questions: What do we mean by rights? Why do we grant rights to other entities? And could AI and robots qualify for rights by virtue of the same principles? In so doing, we will seek to challenge common preconceptions about why certain entities, and not others, deserve rights.
1 What Are Rights?
1.1 Hohfeld’s Incidents
The word “right” is used in many different contexts: workers’ rights, animal rights, human rights, a right to life, to water, to free speech, to equal treatment, to privacy, to property and so on. But without clarifying what is meant by a “right”, we risk talking at cross-purposes.
This book adopts the approach of legal theorist Wesley Hohfeld, who separated rights into four categories, or “incidents”: privileges, powers, claims and immunities.2 In addition to distinguishing between the different types of rights, Hohfeld’s other key insight was to pair each category of right in a reciprocal relationship with a correlative position held by another person. The four categories listed above correspond respectively to the following correlatives: no-claim, liability, duty and disability. Thus, if Person A has a claim to something, Person B must have a duty to provide Person A with that thing.
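Hohfeld’s scheme can be pictured as a simple lookup structure. The sketch below is purely illustrative (the Python representation is ours, not Hohfeld’s or this book’s): it encodes the four incidents and their correlatives, and shows that asserting an incident for one person immediately implies a position borne by another.

```python
# A minimal, illustrative encoding of Hohfeld's incidents.
# The pairings follow the scheme described above; the code itself is
# merely a sketch of how the correlative structure fits together.
CORRELATIVES = {
    "privilege": "no-claim",   # A may do X; B has no claim that A refrain
    "power": "liability",      # A can change B's legal position; B is liable to that change
    "claim": "duty",           # A is owed X; B owes A a duty to provide X
    "immunity": "disability",  # A's position cannot be changed; B lacks the power to change it
}

def correlative(incident: str, holder: str, counterparty: str) -> str:
    """Spell out the reciprocal relationship implied by a given incident."""
    return (f"If {holder} holds a {incident}, "
            f"then {counterparty} bears the correlative {CORRELATIVES[incident]}.")

print(correlative("claim", "Person A", "Person B"))
# -> If Person A holds a claim, then Person B bears the correlative duty.
```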
There are three advantages to Hohfeld’s categorisation. First, it is exhaustive, covering the various “rights” mentioned in common parlance as well as in legal treatises. Secondly, it acknowledges the differences between various types of rights.3 Thirdly, Hohfeld’s model explains how the different categories of rights interact with each other, and with those of other people. Hohfeld’s framework demonstrates that rights are social constructs. The correlatives to each right show that rights do not exist in a vacuum. Rather, they are held against other people or entities. For instance, it would not make sense for a person marooned alone on a desert island to claim that she has a right to life, because there is no one else against whom she can claim that right. To hold rights, therefore, is to coexist with others capable of upholding or infringing upon those rights.
This social feature of rights underscores the importance of considering how humans (as well as other entities already afforded rights, like corporations and animals) are to live alongside AI as it becomes increasingly prevalent. As science journalist and author John Markoff has written, we will need to ask ourselves whether robots are to become “our masters, slaves, or partners”.4
Following Markoff’s lead, this chapter asks whether AI can or should be treated as a “moral patient”, namely the subject of certain protections from the actions of “moral agents”. As explained in Chapter 2 at Section 2.1, agency involves a party being capable of understanding and acting on certain rules and principles. In moral terms, below a certain level of mental sophistication, a human’s actions are not deemed to be blameworthy. Nonetheless, a young child who lacks moral agency is still entitled to be protected as a moral patient. Moral agency and moral patiency can coincide, but it is not necessary for this to be the case.5
1.2 Rights as Fictions
The social nature of rights is connected to another feature: they are communal inventions which do not have any independent, objective existence beyond our collective imagination. Like companies, countries and laws themselves, rights are collective fictions, or as Harari calls them, “myths”.6 Their form can be shaped to any given context. Certainly, some rights are treated as more valuable than others, and belief in them may be more widely shared, but there is no set quota of rights which prevents new ones from being created and old ones from falling into abeyance.
Jenna Reinbold has said of the drafting of the Universal Declaration of Human Rights: “…the first Commission on Human Rights undertook its work in a way that smacks of the time-honored logic of mythmaking - a logic wherein language is set to the task of unequivocally presenting a vision of the world as well as a set of mandates appropriate to the maintenance of that world”.7
None of this means rights are unimportant. To the contrary, they make life meaningful and allow societies to function effectively. Describing rights as fictions or constructs is by no means pejorative; when used in this context, it does not entail duplicity or error.8 It simply means that they are malleable and can be shaped according to new circumstances.9
Moral rights are not the same as legal rights. There are some moral relationships—for instance, the duty to tell the truth and the correlative claim right not to be lied to—which are not always protected by law.10 If Alfred asks Marianne whether he looks fat in his expensive new trousers, Marianne will not be held legally liable if she fails to tell the truth. Generally speaking, the law reflects and supports society’s moral values but the two are not coterminous.
The present discussion is concerned primarily with the rights that actually are recognised by humans, as well as those which have been recognised in the past. This is a sociological exercise and is therefore capable of objective verification. The argument made here is that if we recognise certain moral and legal rights, then as a matter of logical consistency we ought to recognise others in analogous circumstances.
One of the reasons why the idea of rights for robots provokes an instinctive negative reaction for some people11 may be an unspoken assumption that rights are a fixed quantity, like unchanging commandments written on tablets of stone. If it is accepted that rights are fictions—albeit valuable ones for the functioning of society—this objection falls away and the path is cleared for robot rights to be recognised.
2 Animals: Man’s Best Friends?
Humanity’s changing attitude towards animals provides a good analogy for how we might come to see AI. The comparison with rights for animals illustrates two things: first, rights for animals are culturally relative, and second, animal rights have varied considerably over time.
2.1 Cultural Relativity of Attitudes to Animals
Rules to protect animals are not a new idea. The story of Noah saving two of each species from the great flood (which appears also in the Quran and pre-dates the Judeo-Christian Bible by several millennia)12 might be read as a cautionary tale on the need to preserve biodiversity.13 The Book of Proverbs says “A righteous man regardeth the life of his beast”.14 However, more generally in the Judeo-Christian tradition, humanity appears to enjoy a position of dominance over all other creations. In the Old Testament, at Genesis 1:26, God says: “Let us make man in our image, after our likeness: and let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth”.15
Other cultures and religions seem to accord animals a greater importance than the above. For instance, animism ascribes souls to various entities, whether sentient (including animals and insects), living (including plants, lichens and corals) or even non-living (including mountains, rivers and lakes).16 Hindu teachings provide that an atman or soul can be reincarnated in many different forms, not just as humans but also as various animals.17 Indeed, several Hindu gods have the features of animals.18 Further, the cow is treated as a holy creature. In India, 18 states ban the slaughter of cows19 and vigilante groups even seek to protect cows through extrajudicial violence.20 As noted in Chapter 2, in the Japanese Shintō religion, many different creatures and objects have kami, which translates as “spirit”, “soul” or “energy”.21
As one commentator has observed:

… fear of robots is not felt in the Far East. After the Second World War, Japan saw the birth of Astro Boy, a manga series featuring a robotic creature, which instilled society with a very positive image of robots. Furthermore, according to the Japanese Shintoist vision of robots, they, like everything else, have a soul. Unlike in the West, robots are not seen as dangerous creations and naturally belong among humans.22
2.2 Animal Rights Through History
There is growing acceptance of the proposition that it is wrong to cause unnecessary suffering to animals.23 This has not always been the case. Across the world, animal rights laws were minimal at best 200 years ago. Animals were regarded as property to be treated as their owners saw fit—not entities which could have rights themselves.24 In England, in 1793, a man called John Cornish was found not guilty of any crime when he pulled out a horse’s tongue. The court ruled that Cornish could be prosecuted only if there was evidence of malice towards the horse’s owner.25
Descartes wrote that animals were merely “beast-machines”, and “automata”26 with no souls, no minds and no ability to reason.27 It followed that we should be no more concerned with animals’ squeals of pain than we are with the creaks and crashes of machinery. In moral terms, harming them was no different from tearing a piece of paper or chopping a block of wood. The modern philosopher Norman Kemp Smith described Descartes’ views as a “monstrous” thesis that “animals are without feeling or awareness of any kind”.28
However, from the seventeenth century onwards, animal rights gradually came to be protected.29 In 1641, the General Court of Massachusetts passed the “Body of Liberties”, an early charter of fundamental rights, which included a section on animals providing: “No man shall exercise any tiranny or crueltie towards any bruite creature which are usuallie kept for man’s use”.30 When in 1821 a UK politician, Colonel Richard Martin, first proposed a statute to protect horses, he was met with derision and even laughter in Parliament.31 Things changed quickly though; the following year Parliament enacted the Ill Treatment of Horses and Cattle Act 1822 at Colonel Martin’s behest. In 1824, the Society for the Prevention of Cruelty to Animals was founded in London, the first such organisation of its kind.32 In 1840, it received a Royal Charter from Queen Victoria.
Throughout the nineteenth and twentieth centuries, increasing protection was granted to animals in various countries around the world.33 The American Society for the Prevention of Cruelty to Animals was established in 1866; in the UK, major pieces of animal rights legislation enacted after 1822 included the Cruelty to Animals Act 1876 and the Protection of Animals Act 1911; India passed the Prevention of Cruelty to Animals Act in 1960.34
Attempts to extend animal rights still further have not always succeeded. In a 2004 case brought in the name of the world’s whales, dolphins and porpoises, the US Court of Appeals for the Ninth Circuit held:

‘[I]f Congress and the President intended to take the extraordinary step of authorizing animals as well as people and legal entities to sue, they could, and should, have said so plainly.’ In the absence of any such statement in [the relevant statutes], we conclude that the Cetaceans do not have statutory standing to sue.36
Extensions of animal rights law can be controversial. As Hohfeld’s structure shows, granting rights to one group, in this case animals, entails a restriction on another group—usually humans. This means that there is often resistance from those who stand to lose out. When in 2004 the UK Government introduced legislation to ban foxhunting, much of the rural population objected, seeing this move as an attack by urbanites on their way of life. This in turn sparked a constitutional crisis in which the elected part of the legislature, the House of Commons, invoked a rarely used mechanism37 to overrule the non-elected part, the House of Lords.38 Approximately 200,000 people demonstrated in London against the proposed foxhunting ban.39
From this brief historical survey, it can be seen that human attitudes towards animal rights have varied greatly over time and continue to develop.
3 How the Human Got Her Rights
The entitlements we now describe as fundamental human rights were not always thought of as beyond dispute. Universal human rights, and indeed the very concept of human rights, are relatively recent inventions.
Slavery is one of the most extreme infringements on human rights and therefore provides a useful case study of changing attitudes. The analogy of slavery is also instructive because one can readily draw comparisons with our treatment of robots. Indeed, it is no coincidence that in Rossum’s Universal Robots, the play which brought the term robots into popular use, Karel Capek used “roboti”, derived from the Czech word for forced labour or servitude, to refer to the intelligent mechanical servants who eventually rose up against their human masters.40
As little as 150 years ago, human slavery was legal in large parts of the world. Similarly to animals, slaves were treated primarily as property. At the beginning of the nineteenth century, slavery was permitted under international law. The UK abolished the slave trade throughout its colonies in 1807 and, in 1814, induced France to do likewise. In 1815, the “Powers” of Europe collectively condemned slavery at the Congress of Vienna.41
The transition towards abolishing slavery was not all one way. In the infamous Dred Scott v. Sandford case of 1857, the US Supreme Court ruled that slavery was legal, holding that when the US Constitution was drafted, “neither the class of persons who had been imported as slaves, nor their descendants, whether they had become free or not, were then acknowledged as a part of the people”.42
Nowadays, few would disagree that slavery is morally wrong.43 The prohibition of slavery is one of the central tenets of modern international law, having the status of jus cogens: a norm which binds all nations whether or not they have expressly agreed to it, and from which no derogation is permitted.44 The Universal Declaration of Human Rights 1948 provides that “no one shall be held in slavery or servitude; slavery and the slave trade shall be prohibited in all their forms”.45
Even aside from slavery , for centuries it was seen as perfectly legitimate in many cultures to stratify the value of fellow humans by features including their gender, religion, race, nationality or even social class. In consequence, vast numbers of people were seen as being expendable throughout the twentieth century. This led to acts of great malice, such as the Holocaust, the Rwandan genocide and other such deliberate ethnic massacres. Belief that some humans were superior to others also facilitated much death and suffering through indifference, where certain groups were sacrificed supposedly to serve greater ends.
In George Orwell’s Animal Farm, the horse Boxer lives by the maxim: “I will work harder”, until he is eventually worn out by the hard labour, and sent to be killed and turned into glue at the knacker’s yard.46 Today, we treat machines the same way. When they are broken, out of date or obsolete, we discard them and sell their pieces for scrap.
Proponents of slavery advanced pseudo-scientific arguments that certain ethnicities were biologically different from—and therefore inferior to—others.47 Such theories of racial superiority and inferiority have now been debunked, but modern evolutionary biology suggests that there are in fact small but significant genetic differences between ethnicities.48 These discoveries have not caused the world to doubt that we should give people of all ethnicities the same human rights. They show that we do not necessarily protect human rights because we have no differences, but rather in spite of our differences.
Although at a genetic level humans have not changed significantly in the last several thousand years, our attitudes towards rights for humans have shifted significantly during that period (a trend similar in some ways to the treatment of animals described above). By contrast, AI has only come into existence fairly recently—with significant advances in the last 10 years. In consequence, there is likely to be far greater scope for changes in societal attitudes as AI acquires new capabilities and traits. Despite what our gut instincts might suggest at present, developments in animal and human rights indicate that societal opinions could shift in favour of granting AI rights. The next question is whether, and if so when, we should grant AI rights. The following sections investigate and identify the features humanity deems worthy of protection.
4 Why Robot Rights?
There are three general reasons for protecting the rights of others, which could be applied to at least some types of AI and robots: first, the ability to suffer; secondly, compassion; thirdly, their value to humans. There is also a fourth, specific reason for protecting AI rights: situations where humans and AI are combined.
4.1 The Argument from Pain: “Suffer Little Robots”
4.1.1 Consciousness and Qualia
The day may come, when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny… The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor… The question is not, Can they reason? nor, Can they talk? but, Can they suffer?49
The argument that an entity should have rights because it can suffer seems to assume that the entity in question is aware, or conscious of itself suffering. Otherwise, there is no they or it which can be said to suffer.
Consciousness therefore becomes a prerequisite for protection based on the ability to suffer. For the avoidance of doubt, this section is not seeking to suggest that all or even some AI is conscious and as such deserving of protection. The point is simply that if an AI system were to acquire this quality, then it should qualify for some moral rights.
An agreed definition of consciousness continues to elude philosophers, neurologists and computer scientists. One popular definition, which this book adopts, is to say that consciousness describes “the way things seem to us”, an experience referred to more formally as qualia.50 It is suggested that consciousness as qualia can be broken down into three stages as follows:
For an entity to be conscious, it must be capable of (i) sensing stimuli, (ii) perceiving sensations and (iii) having a sense of self, namely a conception of its own existence in space and time.
Sensations are the raw data of the external world that are observed or felt by an entity. The first stage is achieved by even rudimentary technology which does not constitute AI. Any sensor—whether of light, heat, humidity, electromagnetic signals or any other stimuli—meets this low threshold. Clearly, today’s AI systems and robots are able to take in such raw data.
Even though cameras can capture more data about a scene than the human eye, roboticists are at a loss as to how to stitch all that information together to build a cohesive picture of the world.52
In robots, as in animals, a primary function of emotion is to make rapid assessments of external or internal situations and to ready the robot to respond to them with action or information processing… These processes will be monitored by interoceptors (internal sensors) that measure these and other physical properties (positions, angles, forces, stresses, flow rates, energy levels, power drains, temperatures, physical damage, etc.) and send signals to higher cognitive processes for supervision and control.53
When an AI system uses rules or principles to draw conclusions from data, it can be said to “perceive” that data by virtue of whatever heuristic has been applied. One such process, which radically simplifies huge amounts of information by sorting it into groups known as “clusters”, appears to function in a similar way to how human and (probably) animal minds make sense of the world.
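Clustering of this kind can be sketched in a few lines of code. The following toy k-means implementation is our own illustration (not drawn from the source): given unlabelled two-dimensional points, it discovers two groups by repeatedly assigning each point to its nearest centre and moving each centre to the middle of its group.

```python
import random

def kmeans(points, k=2, iters=20):
    """Toy k-means: assign each point to the nearest centre, then move
    each centre to the mean of the points assigned to it, and repeat."""
    centres = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centres[i][0]) ** 2
                                      + (p[1] - centres[i][1]) ** 2)
            clusters[nearest].append(p)
        centres = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Two loose groups of points: the algorithm recovers the grouping
# without ever being told that it exists.
data = [(1, 1), (1.5, 2), (2, 1.2), (8, 8), (8.5, 9), (9, 8.2)]
print(kmeans(data))
```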
Another example of the perception stage of consciousness in AI is the use of artificial “neural nets”, in which initial data is taken in at one level, stimulating an “input” layer of thinking cells or neurons. Those neurons in turn stimulate a further layer, which is capable of more abstract thought processes. This continues through successive layers, allowing the AI system to develop complex conclusions before it reaches its output. It is no coincidence that the most basic neural nets developed in the 1950s were referred to by developers as “perceptrons”, which were able to develop internal “representations” of concepts.54
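The perceptron’s learning rule is simple enough to reproduce in a short sketch. The example below is our own reconstruction of the classic single-layer perceptron (not code from the source): it learns the logical OR function by nudging its weights whenever a prediction is wrong.

```python
# A single-layer perceptron learning logical OR.
training_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias, lr = [0.0, 0.0], 0.0, 0.1   # weights, bias, learning rate

def predict(x):
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

for _ in range(20):                         # a few passes over the data
    for x, target in training_data:
        error = target - predict(x)         # 0 if correct, +/-1 if wrong
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([predict(x) for x, _ in training_data])   # -> [0, 1, 1, 1]
```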
The third and final stage is for there to be an entity which knows that it is experiencing the sensations. This is the “I” in “I am feeling…”. In German, this is referred to as “Ich-Gefühl”, translating literally to “I-feeling”.55 For AI, as well as perhaps some living organisms, the “I” could be a “we”: consciousness can be formed by collective rather than individual experience. A bee may not have a particularly strong conception of a singular self which suffers, but it certainly appears to know of a greater self—the colony or the nest—of which it is part, and which can collectively suffer or thrive. One example of collective consciousness portrayed in popular culture is the Borg, an alien race in Star Trek: a vast collection of drones linked to a hive mind known as “the Collective”.56 Some experimental AI systems which are based on swarm intelligence displayed collectively by a number of individual “bots” may eventually develop group consciousness.57 Without this I or we, there is nothing which can be said to suffer.

Psychologists Daniel Kahneman and Jason Riis suggest that the human mind comprises an “experiencing self” and an “evaluating self”. The experiencing self is that which lives life as a series of moments. The evaluative self then tries to make sense of those moments, by a variety of different shortcuts or heuristics.58 Their account of a distinct “self” existing through time and remembering past experiences is an example of the third element of consciousness.
Generally speaking, the third stage of consciousness remains elusive in AI. That said, some experiments and theories relating to an off-switch or kill button for AI may provide some evidence as to how AI could acquire a sense of “self”.59 Whereas most AI simply executes a particular task, these experiments consider how AI might be incentivised to allow a human to turn it off (a process known as safe interruptibility).60 The reason why this is significant to consciousness is that the AI must have some conception of its own existence in order to resist, or willingly allow, the termination of that existence.
In a 2016 paper, researchers from the University of California at Berkeley led by Stuart Russell reported on an experiment which they called “The Off-Switch Game”.61 The starting point of this game is that AI can possess instrumental goals beyond those with which it was originally programmed, which can include self-preservation.62
Our key insight is that for R to want to preserve its off switch, it needs to be uncertain about the utility associated with the outcome, and to treat H’s actions as important observations about that utility. (R also has no incentive to switch itself off in this setting.) We conclude that giving machines an appropriate level of uncertainty about their objectives leads to safer designs, and we argue that this setting is a useful generalization of the classical AI paradigm of rational agents.63
Russell et al. demonstrate with formal mathematical proofs that so long as an AI entity is unsure whether or not it is doing what a human wants, it will always allow itself to be turned off. The AI can function, but at every decision point it must ask whether or not it is doing the right thing, and if not, whether it should be “killed” as a sanction for its failure. In other words, the AI modelled must ask itself: “To be or not to be?”.64 Though not the focus of Russell et al., the experiment arguably indicates one route for AI to display the third element of consciousness identified above.65
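The intuition behind the proofs can be illustrated numerically. The sketch below is a deliberate simplification of our own (not the authors’ formal model): a robot uncertain whether its action helps or harms does better, by its own expected utility, by deferring to a human who may switch it off.

```python
import random

# Three options open to the robot R in a simplified Off-Switch Game:
#   act   - take the action immediately: worth E[U]
#   off   - switch itself off: worth 0
#   defer - let the human H decide; a rational H permits the action
#           only when U > 0, so deferring is worth E[max(U, 0)]
random.seed(0)
utilities = [random.choice([1.0, -1.0]) for _ in range(100_000)]  # R's uncertainty about U

act = sum(utilities) / len(utilities)                           # ~ 0.0
off = 0.0
defer = sum(max(u, 0.0) for u in utilities) / len(utilities)    # ~ +0.5

print(f"act: {act:+.2f}  off: {off:+.2f}  defer: {defer:+.2f}")
# Deferring dominates: uncertainty about its objective gives the robot
# a positive incentive to leave its off switch in human hands.
```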
There is more than one route to the third element of consciousness . Another avenue might be along the lines of that suggested by Hod Lipson and colleagues in a paper published in the journal Science in 2006. Lipson and colleagues showed how a four-legged robot which had no prior knowledge of its own appearance or capabilities was able to learn to move through continuous self-modelling.66
Many will object that neither of the two experiments described above truly indicates consciousness in a metaphysical sense, or even under this book’s own definition. However, so long as it is acknowledged that (a) consciousness is an objective property which is capable of being defined and observed; and (b) consciousness is not restricted to humans, then the possibility remains open for AI to be developed which can be conscious.67
If and when an AI becomes conscious, the final question is whether the conscious AI can suffer. AI technology today appears capable of achieving this result—even if it is not yet conscious. Reinforcement learning involves programs analysing data, making decisions and then being informed by a feedback mechanism whether these decisions were more or less correct. The feedback mechanism quantifies results by assigning them a score according to how desirable the decision was. Each time this process takes place, the computer learns more about its task and its environment, gradually honing and perfecting its abilities. Most human children discover at some point that if they touch a sharp object, it can hurt. If pain is just a signal which encourages an entity to avoid something undesirable, then it is not difficult to acknowledge that robots can experience it. In 2016, German researchers published a paper indicating that they had created a robot which could “feel” physical pain when its skin was pricked with a pin.68
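A toy reinforcement-learning sketch makes the point concrete. The code below is our own illustration (it is unrelated to the German experiment just cited): a negative reward acts as a “pain” signal, and the agent learns to steer away from the state that produces it, much as a child learns to avoid a sharp object.

```python
import random

# States 0..4 on a line. State 0 is a hazard ("pain", reward -10);
# state 4 is a goal (reward +1). A tabular Q-learning agent starting
# at state 2 learns to head for the goal and away from the hazard.
ACTIONS = (-1, +1)
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

def reward(s):
    return {0: -10.0, 4: 1.0}.get(s, 0.0)

for _ in range(2000):
    s = 2
    while s not in (0, 4):
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                       # explore
        else:
            a = max(ACTIONS, key=lambda a_: q[(s, a_)])      # exploit
        nxt = s + a
        future = 0.0 if nxt in (0, 4) else max(q[(nxt, a_)] for a_ in ACTIONS)
        q[(s, a)] += alpha * (reward(nxt) + gamma * future - q[(s, a)])
        s = nxt

# The learned policy moves right, away from the "painful" state:
print({s: max(ACTIONS, key=lambda a_: q[(s, a_)]) for s in (1, 2, 3)})
```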
4.1.2 Degrees of Consciousness
Consciousness is not a binary quality; it can exist in degrees.69 It can vary on at least three levels. First, within a living organism, there is a spectrum of consciousness from a minimally conscious state such as deep sleep to being fully awake. For instance, in October 2013, Oxford University researchers led by Irene Tracey were able to pinpoint different degrees of consciousness during anaesthesia of a human subject.70 Secondly, within any given species (particularly mammals, which continue to develop significantly after birth), a newborn infant appears to have less consciousness than a fully developed adult.71 Thirdly, consciousness may differ between species.72
If there are degrees of consciousness , there is no logical reason why humans in a normal waking state should occupy the pinnacle of such conscious experience. Indeed, we know that certain animals possess the ability to sense phenomena outside the bounds of human senses or even comprehension. It is generally accepted that humans are limited to five senses through which the world can be experienced: taste, sight, touch, smell and sound.73 Bats experience the world through sonar. Other animals can sense and act on the basis of electromagnetic waves. Humans can observe such forces via other media, such as visual representations on a computer screen, but we cannot know or even accurately imagine what it is to experience them directly.74 Owing to these additional senses, some animals are arguably more conscious than us, or at the very least conscious in different ways.75
Unlike humans, AI is not constrained by the finite physical space that a biological brain can occupy or the number of neurons it can contain. Just as a relatively simple computer can now undertake many more calculations in a given period of time than even the greatest human mathematician, we cannot exclude the possibility that AI may one day acquire a greater sense of consciousness than any human, and perhaps become capable of experiencing suffering of a far greater magnitude.
… consciousness may be limited to carbon substrates only. Carbon molecules form stronger, more stable chemical bonds than silicon, which allows carbon to form an extraordinary number of compounds, and unlike silicon, carbon has the capacity to more easily form double bonds… If the chemical differences between carbon and silicon impact life itself, we should not rule out the possibility that these chemical differences also impact whether silicon gives rise to consciousness, even if they do not hinder silicon’s ability to process information in a superior manner.76
4.1.3 The Role of Scepticism
We have only a limited understanding of the human mind, let alone animal or even artificial ones. We can ask other people how they feel, and we can observe brain scans, but none of these things equate to knowing exactly what the other person experiences.77 David Chalmers has termed these difficulties “the hard problem of consciousness”.78
The same issue applies even more so with animals. In an influential paper on consciousness, the philosopher Thomas Nagel asked “What is it like to be a bat?”, concluding that “reductionist” objective accounts of a subjective experience are not possible.79 A dog may appear sad, and a chimpanzee can recoil as if in pain, but we cannot actually ask them to describe the experience, and even if we could, we would have no way of knowing what it is like to be a dog, chimpanzee or indeed a bat. Nonetheless, we continue to act as if both humans and animals are conscious, and can suffer.
This sceptical perspective on consciousness is important because it shows that even when we assume we are granting rights to others based on their capability for suffering, we cannot truly be sure of what they are feeling. It seems, therefore, that we protect the rights of others based not on what they are actually feeling, but on what we believe they are feeling. The following section expands on why we act in this way, and whether similar motivations might apply to robots and AI.
4.2 The Argument from Compassion
4.2.1 Evolutionary Programming and Intuition
If a man has his dog shot, because it can no longer earn a living for him, he is by no means in breach of any duty to the dog, since the latter is incapable of judgment, but he thereby damages the kindly and humane qualities in himself, which he ought to exercise in virtue of his duties to mankind… a person who already displays such cruelty to animals is also no less hardened towards men.80
Kant even extended this theory to “duties to inanimate objects”, explaining that “[t]hese allude, indirectly, to our duties to men. The human impulse to destroy things that can still be used is very immoral… Thus all duties relating to animals, other beings and things have an indirect reference to our duties towards mankind”.81
Failing to protect the rights of other humans undermines the moral fibre of a community. It denies the basic emotional reaction we have to the perceived suffering of another. The same emotional reactions also govern our feelings towards animals, although to a lesser extent. If we treat animals with contempt, then we might start to do so with humans also. There is a link between the two because we perceive animals as having needs and sensations—even if they do not have the same sort of complex thought processes as we do. Essentially, animals exhibit features which resemble those of humans, and we are biologically programmed to feel empathy toward anything with those features. This phenomenon is also likely to explain why we feel greater empathy towards mammals, which are anatomically closer to humans than reptiles, amphibians, insects or fish. For instance, baby mammals often have large heads and large eyes, just like human babies.82
The emotional response of empathy is a successful evolutionary technique which allows us to collaborate with others in our species, because we can imagine what it is like to be them. Such collaboration, which (unlike in other species) can extend beyond family, tribe or colony, is one of the factors which led to the success of the human race.83 Although there is no obvious evolutionary advantage to feeling empathy for an injured animal as opposed to an injured human, it appears that the same neural pathways are activated when we see animals in pain.84
Sometimes our empathy and compassion for animals even exceed that for other humans. In 2013, scientists at Georgia Regents University and Cape Fear Community College conducted a study in which 40% of participants said that they would save their pet rather than a human foreigner from being hit by a bus.85 Around the world, there was an outcry when Harambe, a male gorilla, was shot by zookeepers after he snatched a three-year-old boy who had wandered into his enclosure.86
4.2.2 Sex, Robots and Rights
The moral debates concerning robots designed for sexual activities may cast some light on the social significance of AI. If we see certain acts with robots as being unacceptable, we must ask ourselves why this is so.
Capek’s play Rossum’s Universal Robots questioned whether advanced robot slaves should have some form of civil rights, or whether they are merely machines that can be harmed or destroyed at will with no moral consequences.87
In Love and Sex with Robots: The Evolution of Human-robot Relationships, David Levy surmises that “[t]he robots in the middle of this century will not be exactly like us, but close”, and that “when robots reach a level of sophistication, at which they are able to engender and sustain feelings of romantic love in their humans… the social and psychological benefits will be enormous”.88 He justifies this statement on the basis that “[a]lmost everyone wants someone to love, but many people have no one. If this natural human desire can be satisfied for everyone who is capable of loving, surely the world will be a much happier place”.
Others might object that something intangible is lost when such feelings are lavished on artificial entities. Joanna Bryson acknowledges the psychological tendency in humans to develop feelings for things which look as if they are conscious, but her proposed solution is to argue that we should therefore avoid creating robots which display conscious tendencies: “If robots ever need rights we’ll have designed them unjustly”.89
Even more difficult are questions as to whether humans should be permitted to carry out activities on robots which are prohibited on humans: Is it wrong to allow a human to play out a rape fantasy using a robot as the victim? Would anything change if the robot was aware that it was being used for activities which, if carried out on a human, would be considered morally depraved?90
Would you rather live in, say, a Westworld universe filled with humans who feel free to rape and maim the park’s mechanical inhabitants, or on the deck of Star Trek: The Next Generation, where advanced robots are treated as equals? The humans of one world seem a lot more welcoming than the other, don’t they?91
4.2.3 Speciesism
…scientists have agreed that there is no ‘magical’ essential difference between humans and other animals, biologically-speaking. Why then do we make an almost total distinction morally? If all organisms are on one physical continuum, then we should also be on the same moral continuum.92
It is not necessary to go as far as Ryder in urging that animals, or AI, should have the same rights as humans, but his extreme view contains an important insight: the human species is not as unique as we might think in certain respects. It is of course true that AI and robots are physically different from humans, but it should be recalled that racists and proponents of eugenics sought to support their arguments with scientific “evidence” of the differences between races. The validity of this science may certainly be doubted, but the more important question seems to be not whether there are physical differences between entities, but whether those differences are treated by society as important.
4.2.4 Robots and the Role of Physicality
In this chapter, readers may have noted that the term “robots” (the physical embodiment of AI) is used more frequently than elsewhere in the book. Whereas the other issues tackled in this book apply equally to embodied or unembodied AI, the granting of rights to an entity is dependent not just on that entity’s own consciousness or otherwise, but also on humanity’s attitudes to the entity. For the reasons given above, these attitudes can be shaped by the entity’s physical form and appearance.
A significant amount of the public discussion on AI ethics to date has focussed on robots because, unlike a disembodied computer program, they are easy to picture.93 Whereas the emphasis on robots, as opposed to AI more generally, is undue in most legal contexts,94 when it comes to rights the position is slightly different: physical embodiment shapes our moral intuitions. This psychological tendency has been recognised in various studies on human reactions to robots.95 It is this quality which Ryan Calo touched upon when he wrote that robots (as opposed to unembodied AI) are deserving of different legal treatment because of their “social valence”. Calo said that robots “… feel different to us, more like living agents”.96
Real-life examples illustrate this psychological tendency. A remote-controlled machine used for defusing improvised explosive devices in Afghanistan was nicknamed “Sergeant Talon” by the soldiers in its company. They even unofficially “awarded” him three Purple Hearts, the decoration given to soldiers in the US military who are injured whilst serving.97 Sergeant Talon was not a “robot” in the sense used in this book given that it did not utilise any AI and it was entirely under the control of a human operator.98 However, the physical and psychological value of Sergeant Talon was clearly recognised by those working with it.
Similarly, an automated minesweeper developed at the US Los Alamos National Laboratory resembled a giant millipede and was designed to destroy mines by stepping on them, blowing off a leg or two in the process. As the machine lost one leg after another crawling across the battlefield, a colonel overseeing its work demanded that the exercise be stopped because the test was “inhumane”.99
It is important to note that AI is not necessary in order for physical machines to provoke human emotional responses. The two examples in the preceding paragraphs demonstrate that such reactions can arise in relation to remote-controlled mechanical entities which have no independent intelligence at all. However, it is suggested that AI-enabled entities will be all the more suited to eliciting such reactions, owing to their ability to learn and improve their behaviour with a view to increasing human empathy.
The more that robots come to resemble living beings, the more it seems we will react to them as if they had feelings. In one experiment conducted by lawyer and AI ethicist Kate Darling, researchers asked people to play with toy mechanical dinosaurs called “Pleos”. After an hour of play, the instructor then asked participants to hurt their Pleo with weapons they had been given. All participants refused. Even when the instructors told them they could save their own robots by “killing” someone else’s, they still refused. Finally, the researchers told the participants that unless one person “killed” their Pleo, all the robots would be destroyed. Even so, only one of the participants was willing to do so.100 Darling has used the results of this experiment to support granting rights for AI on Kantian grounds, both out of sentiment and “to promote socially desirable behaviour”.101
4.2.5 Escaping the Uncanny Valley
There is a phenomenon in robotics known as the “uncanny valley”, first identified by roboticist Masahiro Mori.102 The uncanny valley describes the slow rise, sharp drop and then relatively fast rise in feelings of familiarity as robots become more like humans. It illustrates the tendency for human observers to feel uneasy when they encounter a robot which looks and acts almost, but not quite, like a human. This unease could be a product of various subtle imperfections: jerky movements, unnerving facial expressions, a flat and monotonous voice incapable of fully capturing the human range of emotions, and so on. The point is that when we see something which looks a lot like us but is definitely not human, we feel something strange is going on. We know we are being tricked.
It is possible to create robots designed specifically to avoid this phenomenon. Partly because of a fear of falling into the uncanny valley , most robots are not designed to have exact human features (though sexbots are an exception). In the late 1990s, researchers led by Cynthia Breazeal at the MIT Artificial Intelligence Laboratory built a robot called Kismet, which was designed to recognise and simulate human emotions, by manipulating its mechanical eyes, mouth and ears.103 The appearance of Kismet was very far from human. Instead, the builders of Kismet chose certain features of humans which our brains recognise as signalling emotions, but put them in an exaggerated and deliberately mechanical form.104 Kismet’s large eyes in relation to its other features resemble elements we naturally associate with babies and young animals. This too encourages us to feel empathy towards the robot.105
It might be objected that we could actively design robots so as not to evoke compassion. As noted above, this is Joanna Bryson’s solution to avoiding robot rights. However, it seems surprisingly easy for a robot to provoke caring feelings in humans. In the Star Wars films, one of the most enduringly popular characters is R2-D2. The robot looks like little more than a painted metal trashcan on wheels, with a domed colander instead of a lid. But somehow, through a combination of its beeps, trills and deft movements, R2-D2 is imbued in the mind of the audience with a distinct personality, and certainly one which can elicit sympathy.106
4.3 The Argument from Value to Humanity
4.3.1 Reciprocity of Respect
We may find it difficult to formulate a human right of tormenting beasts in terms which would not equally imply an angelic right of tormenting men.107
Lewis’ theory bears some resemblance to the dystopic visions of superintelligent AI subjugating humanity suggested by some modern commentators. As noted in Chapter 1, such predictions often verge on hyperbole and are by no means an immediate concern.108 However, they do add a further argument in favour of protecting AI rights. It is fallacious to assume that there is a logical connection between humanity treating AI “well” and AI doing the same for humanity, should it ultimately obtain the whip hand. Nevertheless, if AI is rational and seeks to preserve itself and its own interests, then adopting an attitude of mutual coexistence towards it will likely engender a similar attitude from AI towards humans—at least for so long as humanity’s actions are capable of having a bearing on AI. Indeed, the assumption that humans will not wish to wantonly destroy robots forms part of the modelling employed by Russell et al., which enables robots to be kept under control and yet still obedient to human commands.109
4.3.2 Inherent Value
The law protects a range of entities and objects not because they have a particular definable use, but rather for a panoply of cultural, aesthetic and historical reasons. We refer here to these justifications collectively as “inherent” value.
We might extend such protection to Methuselah, a bristlecone pine somewhere in California’s White Mountains which is said to be over 5000 years old.110 The same type of moral reasoning might apply to the protection of a Van Gogh painting or an ancient Babylonian temple. Whether such an “inherently valuable” entity is man-made or natural does not appear to make a difference to the value which it is ascribed. The world’s first cloned mammal, Dolly the sheep, was not treated with any less respect than any other sheep. In fact, owing to her unique status as the world’s first man-made sheep, she was treated far better than natural ones.111
The reason why we protect these objects goes beyond the fact that they might be someone’s property. Indeed, for many of the most valuable things in the world the reason why we feel they should be protected is that they are everyone’s property; they are meaningful to all humanity.
In 2002, Germany amended its constitutional Basic Law to include the following provision: “Mindful also of its responsibility toward future generations, the state shall protect the natural foundations of life and animals”.112 Notably, the German Constitution stipulates that this protection is for the benefit of “future generations”, which presumably refers to humans. Thus, the recorded motivation for protecting animal and natural life is not necessarily the value of that life in and of itself, but rather its importance to humanity.
There is a tendency to see computer programs as expendable—when one is updated then previous versions can be deleted or overwritten. However, there are sound pragmatic reasons for maintaining previous copies. For instance, it may be necessary for legal forensic purposes to preserve a version of an AI system at the time a relevant event took place so as to be able to make enquiries as to its functioning and thought process. Similarly, if an update or patch leads to unforeseen problems, it may well be necessary to “roll back” a program to its previous version in order to rectify the issue. Both of these motivations underline the importance to humanity of preserving types of AI in some way.
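In engineering terms, such preservation can be as simple as keeping immutable, timestamped snapshots of each version. The sketch below is a hypothetical pattern of our own (the file names and archive layout are invented for illustration): each preserved copy carries a content hash, so that investigators can later verify that an archived version is exactly the one that was running, or roll it back into service.

```python
import hashlib
import shutil
import time
from pathlib import Path

ARCHIVE = Path("model_archive")   # hypothetical archive directory

def preserve_version(model_file: str) -> Path:
    """Copy a model file into a timestamped, content-hashed archive entry."""
    src = Path(model_file)
    digest = hashlib.sha256(src.read_bytes()).hexdigest()[:12]
    stamp = time.strftime("%Y%m%dT%H%M%S")
    dest = ARCHIVE / f"{src.stem}_{stamp}_{digest}{src.suffix}"
    ARCHIVE.mkdir(exist_ok=True)
    shutil.copy2(src, dest)       # the archived copy is never modified
    return dest

def rollback(archived: Path, live_file: str) -> None:
    """Restore a previously preserved version over the live model."""
    shutil.copy2(archived, live_file)
```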
We may already be recognising the inherent value of some robots. Kismet is no longer an operational model used in experiments, but it is preserved in the MIT Museum. In London, the Science Museum held an exhibition in early 2017 dedicated to robots, which featured various iconic designs. Although both of these examples concern physical robots, we may also wish to preserve the source code of seminal AI systems such as AlphaGo Zero for future generations to study and learn from.
4.4 The Argument from Post-humanism: Hybrids, Cyborgs and Electronic Brains
Machines and human minds are not always separate. The idea of a human augmented by AI often appears in popular culture—examples include the Cybermen in Doctor Who, or the Borg in Star Trek. In 2017, Elon Musk suggested that humans must merge with AI or become irrelevant in the AI age.113 Shortly afterwards, he launched a new company, Neuralink, which aims to achieve this goal by “developing ultra high bandwidth brain-machine interfaces to connect humans and computers”.114
Various research projects and commercial enterprises are exploring how to use the human brain in the development of AI. Scientists have demonstrated that tiny syringe-injectable electronics can be inserted into biological matter and then activated.115 These electronics could have all sorts of applications for humans, from improving memory to enhancing processing power. In a compelling 2018 article entitled “How to become a centaur”, Nicky Case argued that human and AI could combine to become greater than the sum of their parts: “Symbiosis shows us you can have fruitful collaborations even if you have different skills, or different goals, or are even different species. Symbiosis shows us that the world often isn’t zero-sum — it doesn’t have to be humans versus AI, or humans versus centaurs, or humans versus other humans. Symbiosis is two individuals succeeding together not despite, but because of, their differences”.116
The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their places, in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same.117
This paradox, which questions the nature of continuous identity through shifting physical components, can be applied to combinations of humanity and AI. We would not deny someone their human rights if they were 1% augmented by AI. What about if 20%, 50% or 80% of their mental functioning was the result of computer processing powers? On one view, the answer would be the same—a human should not lose rights just because they have added to their mental functioning. However, consistent with his view that no artificial process can produce “strong” AI which resembles human intelligence, the philosopher John Searle argues that replacement would gradually remove conscious experience.118
Replacement or augmentation of human physical functions with artificial ones does not render someone less deserving of rights.119 Someone who loses an arm and has it replaced with a mechanical version is not considered less human. The same argument might be made in the future, for instance if someone suffers a brain injury causing persistent amnesia and undergoes surgery to fit a processor replacing this mental function.
The case of Phineas Gage provides a historical example of continued moral identity despite neurological change. Gage was a railway worker who suffered catastrophic brain injuries when an explosion sent an iron rod through his skull. He somehow survived but his personality was reported to be permanently altered as a result.120 There was no suggestion though that Gage was any less of a rights-bearing citizen or human as a result. If Gage’s rights were maintained following this accidental brain trauma and subsequent neurological alteration, it seems illogical for them to be reduced should such alteration take place voluntarily or even in response to an injury.
Indeed, the growing ubiquity of wearable technology blurs yet further the line between what is human and what is not. At the time of writing, humans can of course remove AI goggles, smartwatches and other personal AI devices from their bodies. There remains a certain taboo around voluntary surgery to implant technology in healthy people.121 But this may not always be the case. It is accepted in most cultures, and even mandated in some, that humans may undergo voluntary surgery for aesthetic or religious reasons. Tattoos, piercings, circumcision and even more extreme forms of surgery are morally acceptable or even required in some cultures. The same may become true of integrated technology in the coming decades. The precise boundaries between what is “human” and what is “artificial” for the purposes of ascribing rights are a matter beyond the scope of this chapter. The main point is that the distinction between human and technology may become increasingly fluid.122
A further biology-based route to AI, through whole brain emulation, does not aim to augment or update human brains, but rather to create an entirely new brain capable of intelligent thoughts, feelings and consciousness, using a combination of technology and bioengineering.123 As noted above, owing to her status as the first cloned mammal, throughout her life Dolly the sheep was monitored and cared for by teams of scientists and veterinarians, receiving state-of-the-art care.124 In the same way that we treated this quasi-artificial sheep with equal or greater respect to a natural sheep, would we not do the same for an artificial human brain? This raises the question of whether it is ethically acceptable to clone a human brain in the first place. In many countries, cloning humans is heavily regulated or banned. Some have even questioned whether it was morally appropriate to clone Dolly.125
A related possibility—which at present is still in the realms of science fiction—is that a human’s personality or consciousness might somehow be uploaded and stored on a computer or network. Some scientists are already working on this idea.126 Others, such as the mathematical physicist and author Roger Penrose, contend that human thinking can never be emulated by a machine.127 If a human mind can one day be uploaded to a computer, we will be faced with a dilemma as to whether, and if so what, rights it should hold. Even if the artificial mind is imperfect or rudimentary, as will likely be the case in the first iterations of any such technology, this is not necessarily a reason for denying it basic rights.
5 Conclusions on Rights for AI
Suggesting that robots deserve rights might be met with disgust or disdain. But we should remember that proponents of animal and indeed universal human rights faced exactly the same reaction at first.
Moral rights are not the same as legal rights, though protection in law often follows shortly after society has recognised a moral case for protecting something. The next chapter addresses the case for giving robots legal personality, providing additional suggestions based on pragmatism which might apply alternatively to or in addition to the ethical considerations outlined here.
If a society does decide robots should have rights, this raises further difficult questions as to what rights ought to be protected. If the justification is to reduce suffering, or the appearance of suffering, then it would seem to follow that one of the rights to be protected might involve minimising robot “suffering” except where necessary and proportionate to achieving more important aims.
In addition to minimising suffering, other rights which we might one day protect for AI may not necessarily resemble those that we protect for humans, or even animals. For example, human-centric rights bound up in social relationships, such as dignity or privacy, might not be appropriate for AI. In a similar vein, animals feel no shame if a human watches them mating. Instead, an AI entity’s rights might include those more unique to its nature, such as rights to a reliable energy supply or to greater processing power. Of course, there may well be good reasons why such potential rights for AI will be overridden—just as some human and animal rights are subordinated to more important principles—but that does not mean that AI’s rights cannot exist in the first place.
These are questions which can and should be addressed via the types of consultative process outlined in Chapters 6 and 7 on building institutions for regulation.
As AI and robots become more advanced and more integrated into our societies, we will be forced to re-evaluate our notions of moral rights. If robots display the same capabilities as other protected creatures, the inquiry may switch from asking “why should we give robots rights?” to asking “why should we continue to deny them?”