The counterpoint to, or flipside of, the negative response derives affirmation of the second statement from affirmation of the first: Robots are able to have rights; therefore, robots should have rights. This is also (and perhaps surprisingly so) a rather popular stance that has considerable traction in the literature. It typically proceeds from the recognition that a robot, although currently limited in capabilities and status, will (most likely), at some point in the not-too-distant future, achieve the necessary and sufficient conditions to be considered a moral subject—that is, someone who can have rights and not just a mere thing. When this occurs (and in these arguments it is more often a matter of “when” as opposed to “if”), we will be obligated to extend to the robot some level of moral consideration. As Hilary Putnam (1964, 678) describes it in what has become a seminal essay on the subject: “I have referred to this problem [specifically the difficulty of deciding whether robots possess consciousness or not] as the problem of the ‘civil rights of robots’ because that is what it may become, and much faster than any of us now expect. Given the ever-accelerating rate of both technological and social change, it is entirely possible that robots will one day exist, and argue ‘we are alive; we are conscious!’ In that event, what are today only philosophical prejudices of a traditional anthropocentric and mentalistic kind would all too likely develop into conservative political attitudes.”
If (or when) robots become capable of being the kind of entity that can possess rights, then it would be difficult and unjustifiable for us to deny them these rights. In other words, if at some point robots can have privileges, claims, powers, and/or immunities—either on the basis of the Will Theory, which is operationalized by Putnam insofar as the robots come forward and assert their rights, or on the basis of the Interest Theory, where we would recognize these rights and advocate for their protection on behalf of the robots—then they should have rights. As is evident from Putnam’s statement, these arguments are often future oriented insofar as they concern the consequences that proceed from some predicted achievements in the design and development of technological systems that are, for now at least, still hypothetical. But that is, as Putnam (1964, 678) concludes, a good reason for thinking about it right now: “Fortunately, we today have the advantage of being able to discuss this problem disinterestedly, and a little more chance, therefore, of arriving at the correct answer.”
Following Putnam, efforts addressing this subject matter usually take the form of a conditional statement. “Today,” Christian Neuhäuser (2015, 133) writes, “many people believe that all sentient beings have moral claims. But because people are not only sentient, but also reasonable, they have a higher moral status as compared to other animals. At least this is the prevailing opinion within the discussion. According to this position, humans do not only have moral claims of only relative importance but also inviolable moral rights because they possess dignity. If robots become sentient one day, they will probably have to be granted moral claims.” The operative words here are “if” and “probably.” If robots achieve a certain level of cognitive capability, that is, if they possess some morally relevant capacity like reason or sentience, then they probably will have a claim to moral status and should have rights, that is, some share of privileges, claims, powers, and/or immunities. These conditional statements are usually future oriented, and different versions are available throughout the philosophical and legal literature.
Typical of this way of thinking is the argument that is developed and advanced in Ben Goertzel’s “Thoughts on AI Morality.” “The ‘artificial intelligence’ programs in practical use today are sufficiently primitive that their morality (or otherwise) is not a serious issue. They are intelligent, in a sense, in narrow domains—but they lack autonomy; they are operated by humans, and their actions are integrated into the sphere of human or physical-world activities directly via human actions. If such an AI program is used to do something immoral, some human is to blame for setting the program up to do such a thing” (Goertzel 2002, 1). Taken by itself, this would seem to be a simple restatement of the instrumentalist position insofar as current technology is still, for the most part, under human control and therefore able to be adequately explained and conceptualized as a mere tool or instrument of human action. But that will not, Goertzel continues, remain for long. “Not too far in the future things are going to be different. AI’s will possess true artificial general intelligence (AGI), not necessarily emulating human intelligence, but equaling and likely surpassing it. At this point, the morality or otherwise of AGI’s will become a highly significant issue” (Goertzel 2002, 1). According to Goertzel, the moral standing of AI as it currently stands is not a serious issue. But once we successfully develop AGI—artifacts with some claim to humanlike intelligence—then we will be obligated to consider the moral standing of such mechanisms. Here again (and as we have already seen in the previous chapter) ontology precedes ethics; what something is determines how it ought to be treated.
Nick Bostrom provided a similar kind of argument during his presentation at the symposium “Robots & Rights: Will Artificial Intelligence Change The Meaning Of Human Rights?”1 According to the published “Symposium Report,” Bostrom distinguished different classes of artificial intelligence: “the industrial robot, or domain specific AI algorithms, which is a kind of artificial intelligence that we find in society today; sentient or conscious artificial intelligence which we would consider to have moral status; artificial intelligence with unusual or strange properties; and finally super-intelligence” (James and Scott 2008, 6). Industrial robots and domain specific AI algorithms do not present any significant moral challenges. They are tools or instruments that we can use or even abuse as we see fit. “As with any other tool there are issues surrounding the ways in which we use them and about who has responsibility when things go wrong. However, the tools themselves have no moral status, and similarly today, robots have no moral status” (James and Scott 2008, 6). Though current AI systems and robots are just tools that do not have independent moral status (ostensibly a restatement of the instrumental theory of technology), it is possible that in the future they will. And for Bostrom the tipping point is reported to be animal-level sentience: “If robots ever reached the cognitive ability and versatility level of a mouse or some other animal, then people would begin to ask whether they had also achieved sentience, and if they concluded that they had then they would have moral status as well” (James and Scott 2008, 7). So the argument goes like this: Right now our robots are neither sentient nor conscious; they are just tools or instruments. As a result, such mechanisms cannot and should not have moral status either in terms of rights or responsibilities. In the future, however, there may be robots or AI algorithms that achieve cognitive abilities that are on par with a mouse or another “lower” animal. At that point, when a robot achieves what is widely considered necessary for the rudiments of “sentience,” then the rules of the game will change, and we will need to consider such an artifact a legitimate moral subject.
Another version of this argument has been developed and deployed by Eric Schwitzgebel and Mara Garza in a journal article titled “A Defense of the Rights of Artificial Intelligences.”
We might someday create entities with human-grade artificial intelligence. Human-grade artificial intelligence—hereafter, just AI, leaving human-grade implicit—in our intended sense of the term, requires both intellectual and emotional similarity to human beings, that is, both human-like general theoretical and practical reasoning and a human-like capacity for joy and suffering. Science fiction authors, artificial intelligence researchers, and the (relatively few) academic philosophers who have written on the topic tend to think that such AIs would deserve moral consideration, or “rights,” similar to the moral consideration we owe to human beings. Below we provide a positive argument for AI rights, defend AI rights against four objections, recommend two principles of ethical AI design, and draw two further conclusions: first, that we would probably owe more moral consideration to human-grade artificial intelligences than we owe to human strangers, and second, that the development of AI might destabilize ethics as an intellectual enterprise. (Schwitzgebel and Garza 2015, 98–99)
This argument proceeds in much the same way as those offered by Goertzel and Bostrom. Someday we might create AI with humanlike cognitive abilities. When that occurs, these AIs “would deserve moral consideration, or ‘rights,’ similar to the moral consideration we owe human beings.” “Artificial beings, if psychologically similar to natural human beings in consciousness, creativity, emotionality, self-conception, rationality, fragility, and so on, warrant substantial moral consideration in virtue of that fact alone” (Schwitzgebel and Garza 2015, 110). But everything turns on and depends on the affirmation of the prior condition, indicated by the italicized “if.” If AIs achieve human-level psychological capabilities in consciousness, cognition, creativity, etc., such that they are, as Putnam (1964, 678) had described it, “psychologically isomorphic to a human being,” then we would be obligated to extend to them the same moral consideration that we grant to other human persons.
These arguments concerning moral standing and rights are predicated on the achievement of a level of machine capabilities that are, at this point in time, still speculative and a kind of science fiction. This is made explicit in Hutan Ashrafian’s “AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics” (2015a) and “Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights” (2015b). Ashrafian begins and frames both essays with an original science fiction story, something that he calls an Exemplum Moralem, which is meant to illustrate the opportunities and challenges that the essays then set out to address and consider. In the case of “AIonAI,” the Exemplum Moralem concerns a future armed conflict where human evacuees are held in a refugee camp “protected and maintained by a group of humanitarian artificially intelligent robots” (Ashrafian 2015a, 30). In the other essay, “Artificial Intelligence and Robot Responsibilities,” the Exemplum Moralem concerns a battalion of robot soldiers who, rather than proceed to achieve their military objective and gain a strategic advantage, decide to tend to injured children from the opposing side.
Both are fantastic stories and quite obviously fictional. Following these anecdotes, Ashrafian advances the following argument: Current AI systems and robots are not in need of rights—we do not even ask the question—because they are just instruments of human action. “The determination of the status of artificial intelligence agents and robots with responsibility and supporting laws requires comparative societal governance. According to the current appreciation of artificial intelligence, most robots occupy the master–slave paradigm where no independence of action beyond direct human volition is permitted” (Ashrafian 2015b, 323). This situation, however, could eventually be surpassed with “future artificial intelligence abilities of self-consciousness, rationality and sentience” as was “demonstrated” in the opening, fictional anecdotes (Ashrafian 2015b, 323). “Artificial intelligence and robotics continue in offering a succession of advances that may ultimately herald the tangible possibility of rational and sentient automatons” (Ashrafian 2015a, 30; emphasis added). In other words, although robots are not currently sentient, they might achieve these capabilities in the not too distant future. Once that happens, then we will need to consider the rights of these robots: “As a consequence, the question arises of how human society recognizes a non-human being that is self-conscious, sentient and rational with ability at comparable-to-human (or even beyond-human) levels?” (Ashrafian 2015b, 324). According to this way of thinking, the question “Should robots have rights?” is something that is entirely dependent on and conditioned by the question concerning capabilities—“Can robots be sentient and conscious?” Although we cannot answer this question definitively right now, we can imagine situations where this could be possible—hence the Exemplum Moralem that begins and frames both essays.
David Levy provides a substantially similar mode of argumentation in the essay “The Ethical Treatment of Artificially Conscious Robots.” According to Levy (2009, 209), the new field of roboethics has exclusively focused on questions concerning responsibility: “Up to now almost all of the discussion within the roboethics community and elsewhere has centered on questions of the form: ‘Is it ethical to develop and use robots for such-and-such a purpose?’, questions based upon doubts about the effect that a particular type of robot is likely to have, both on society in general and on those with whom the robots will interact in particular.” Absent from these discussions and debates are questions concerning the status and treatment of the robot. “What has usually been missing from the debate is the complementary question: ‘Is it ethical to treat robots in such-and-such a way?’” (Levy 2009, 209). This question has not been considered important until now—until Levy’s text, which seeks “to redress the balance” (Levy 2009, 209)—because machines lacked something necessary for moral consideration, namely, consciousness. “Robots are art[i]facts and therefore, in the eyes of many, they have no element of consciousness, which seems to be widely regarded as the dividing line between being deserving of ethical treatment and not” (Levy 2009, 210). But this is about to change, Levy argues, because of research and soon-to-occur innovations in “artificial consciousness.” And when this transpires, we will need to consider the rights of robots. “Given the inevitability of the artificial consciousness of robots, an important question of ethics suggests itself—how should we treat conscious robots?” (Levy 2009, 210). It is from this jumping-off point that Levy then takes up and considers the rights of robots. In other words, there is a prior ontological condition that must be achieved that then produces the opportunity and motivation for addressing the question of robot rights. The ought aspect—robots ought to have rights or ought to be respected—is something that is derived from the is aspect—robots are artificially conscious. “Having ascertained that a particular robot does indeed possess consciousness,” Levy (2009, 212) concludes, “we then need to consider how we should treat this conscious robot? Should such a robot, because it is deemed to have consciousness, have rights; and if so, what rights?”
Two related versions of this are provided in the penultimate section of the first edition of Patrick Lin et al.’s Robot Ethics (2012). As the editors of the volume point out, this section of the book deals with “more distant concerns that may arise with future robots” (Lin et al. 2012, 300). In one of the essays, Robert Sparrow revisits his Turing Triage Test, which was initially introduced in a journal article published in 2004. Sparrow is concerned not only with machines that achieve human-level intelligence, and would therefore require some level of moral considerability, but first, and perhaps more importantly, with devising a means or a mechanism, what he calls a “test,” by which to discern when and if this actually occurs. “One set of questions, in particular, will arise immediately if researchers create a machine that they believe is a human-level intelligence: What are our obligations to such entities; most immediately, are we allowed to turn off or destroy them? Before we can address these questions, however, we first need to know when they might arise. The question of how we might tell when machines have achieved ‘moral standing’ is therefore vitally important to AI research, if we want to avoid the possibility that researchers will inadvertently kill the first intelligent beings they create” (Sparrow 2012, 301). Sparrow’s argument (like many of the other arguments under consideration in this chapter) begins with a conditional statement: If we create a machine with human-level capabilities, the questions that “immediately arise” are how should we treat it and what are (or what should be) our obligations in the face of this kind of artifact? But, as Sparrow argues, these questions concerning obligations in the face of (seemingly) intelligent robots necessitate a prior inquiry: How and when will we know whether and to what extent these questions are even in play? Consequently, Sparrow argues, we first need to be able to test for and determine whether a machine is capable of having rights, and then, on the basis of this determination, inquire about how it should be treated. This test, what Sparrow (2012, 301) calls the Turing Triage Test, provides “a means of testing whether or not machines had achieved the moral standing of people.”
Kevin Warwick offers another proposal in “Robots with Biological Brains” (2012). For Warwick, as for Sparrow, moral status is something that is derived from and dependent upon cognitive capabilities. Specifically, the question of whether an entity can and should have rights is something that is determined by its level of intellectual capability, measured in Warwick’s case not by a modified Turing-style test but by the quality and quantity of neurons that comprise its “brain.” And again, the argument proceeds by describing where things currently stand and then speculating as to where they might be in the future.
At present, with 100,000 rat neurons, our robot has a pretty boring life, doing endless circles around a small corral in a technical laboratory. If one of the researchers leaves the incubator door open or accidentally contaminates the cultured brain, then they may be reprimanded and have to mend their ways. No one faces any external inquisitors or gets hauled off to court; no one gets imprisoned or executed for such actions. With a (conscious) robot whose brain is based on human neurons, particularly if there are billions of them, the situation might be different. The robot will have more brain cells than a cat, dog, or chimpanzee, and possibly more than many humans. To keep such animals in most countries there are regulations, rules, and laws. The animal must be respected and treated reasonably well, at least. The needs of the animal must be attended to. They are taken out for walks, given large areas to use as their own, or actually exist, in the wild, under no human control. Surely a robot with a brain of human neurons must have these rights, and more? Surely it cannot simply be treated as a thing in the lab? … We must consider what rights such a robot should have. (Warwick 2012, 329)
Present-day robots, as Warwick recognizes, certainly do not have the cognitive capabilities to make rights an operative question. If something is done that causes harm to these various kinds of mechanisms, no matter their exact makeup or material, it is just an accident or mistake. It is perhaps a violation against property but nothing more. However, once we develop robots with cognitive capabilities that are at least on par with higher-order organisms, then things might change, with “might” being the operative word insofar as all of this is still hypothetical and speculative. At this future moment, when the entity achieves a sufficient level of intellectual capability, which for Warwick can be measured in terms of the type and number of neurons, then we must (formulated as an imperative) consider whether such a robot should have rights.
All these arguments take what could be described as a conservative, wait-and-see approach to the question of robot rights. They all recommend withholding rights (or at least remaining agnostic about the question of rights) until some future moment, when robots demonstrably achieve a certain level of cognitive capability. Erica Neely, by contrast, mobilizes a different, almost Pascalian strategy in her “Machines and the Moral Community,” a paper initially presented during “The Machine Question Symposium” at the AISB/IACAP World Congress (2012) and subsequently published in 2014 in a special issue of Philosophy and Technology:
In general, it seems wise to err on the side of caution—if something acts sufficiently like me in a wide range of situations, then I should extend moral standing to it. There is little moral fault in being overly generous and extending rights to machines which are not autonomous; there is huge moral fault in being overly conservative about granting them moral standing. The most serious objection to extending moral standing too widely with respect to machines is that we might unjustly limit the rights of the creators or purported owners of said machines: if, in fact, those machines are not autonomous or self-aware, then we have denied the property claims of their owners. However, when weighing rights, the risk of losing a piece of property is trivial compared to denying moral standing to a being. As such, ethically speaking, the duty seems clear. (Neely 2012, 40–41)
In the face of uncertainty, Neely argues, it is best to err in the direction of granting rights to others, including robots and other kinds of mechanisms, because the social costs of doing so are less severe than those of withholding rights from others. This mode of reasoning is similar to Blaise Pascal’s “wager argument”—bet on rights, because doing so has better chances for a positive outcome than doing otherwise. In the published version of the essay, Neely further develops this way of thinking, making a positive case for machine moral standing by way of avoiding potentially bad outcomes:
We will undoubtedly be mistaken in our estimates at times. A failure to acknowledge the moral standing of machines does not imply that they actually lack moral standing; we are simply being unjust in such cases, as we have frequently been before. I am inclined to be generous about moral standing, however, because history suggests that humans naturally tend to underestimate the moral status of those who are different. We have seen women and children treated as property; even today many victims of human trafficking are still treated this way. Under the auspices of colonialism, entire existing civilizations of people of colour were dismissed as inferior to those of white Europeans. Animals remain a source of contention, despite the fact that they seem to suffer. I believe that we are already very skeptical about the status of others; as such, I am less worried that we will be overly generous to machines and more worried that we will completely ignore their standing. I see the risk of diverting resources inappropriately away from machines as far less likely than the risk of enslaving moral persons simply because they are physically unlike us. (Neely 2014, 106)
Even if our estimates of moral status are error-prone, even if we might mistakenly think that some mere instrument appears to be more than it actually is or is capable of being, it is still better, Neely argues, to err on the side of granting rights to others. Despite this more permissive attitude and approach, Neely’s argument still works by deriving ought from is and still proceeds in terms of a conditional statement: If we think that an entity might have the capability to have rights, then we should grant it rights.
Although philosophy is a discipline that often relies on and develops speculative ideas and thought experiments, other (more practically oriented) disciplines, like law, seem far less tolerant of this procedure. Nevertheless, there are good legal and juridical reasons to consider these future-oriented opportunities and challenges. As Sam Lehman-Wilzig (1981, 447) pointed out: “from a legal perspective it may seem nonsensical to even begin considering computers, robots, or the more advanced humanoids, in any terms but that of inanimate objects, subject to present laws. However, it would have been equally ‘nonsensical’ for an individual living in many ancient civilizations a few millennia ago to think in legal terms of slaves as other than chattel.” One legal scholar who takes this challenge seriously is David Calverley. For Calverley, rights are derived from consciousness: if a robot were ever to achieve consciousness, we would be obliged to extend some level of moral consideration to it. “At some point in time the law will have to accommodate such an entity, and in ways that could force humans to re-evaluate their concepts of themselves. If such a machine consciousness existed, it would be conceivable that it could legitimately assert a claim to a certain level of rights which could only be denied by an illogical assertion of species specific response” (Calverley 2005, 82). Calverley’s argument proceeds in the usual fashion: He frames everything in terms of a conditional statement about a possible future achievement with machine intelligence; recognizes that this capability would conceivably be the necessary and sufficient condition for an entity to possess rights; and concludes that this imposes an obligation on us that can only be denied through unjustifiable prejudice.
A similar argument is developed and advanced in Frank Wells Sudia’s “A Jurisprudence of Artilects: A Blueprint for a Synthetic Citizen” (2001), an essay that entertains the possible civil and legal rights of artificial intellects or “artilects” (a term attributed to and borrowed from Hugo de Garis). The essay, which was initially published in the Journal of Futures Studies, is clearly forward-looking, and its formal structure employs one of the standard compositional approaches used by both philosophers and futurists. Sudia’s argument is (once again) formulated as a conditional statement: If (or when) we have artificially intelligent artifacts with capabilities that are sufficiently similar to or even able to exceed those of human beings, then we will need to consider the social standing and rights of such technological entities.
Artificial Intellects, or artilects, are proposed artificially intelligent personalities, which are projected to have knowledge and reasoning capabilities far greater than humans. ... Some suggest that society may consider artilects too dangerous, and choose not to build them. However within a few technological cycles the necessary computer processing power will no doubt become trivially inexpensive, so it is unlikely that anyone could possibly stop them from being built, once certain basic design concepts are understood. Further, with their advanced skills and insights they could become very useful members of society. Hence we should proactively address the task of integrating them into our legal system in the most productive way, with minimum negative consequences. (Sudia 2001, 65–66)
According to Sudia’s analysis, AI on par with or even exceeding human capabilities is coming. At some point, then, we will have “artilects” that are capable of doing things that exceed our understanding and capabilities. For this reason, it is reasonable, at this point in time, to begin considering their social position and standing. Everything therefore depends on whether and to what extent the condition is met—whether and/or when we do in fact have artilects as defined and described. This is both the strength and weakness of this kind of argumentative procedure. It is a strength insofar as one tries to project outcomes and results that will have important consequences should they actually transpire. Sudia, therefore, is trying to get out in front of an important social challenge/opportunity in order to anticipate where things could go given the achievement of a certain level of technological development. It is a tried-and-true method. But it is also exposed to a considerable problem or weakness insofar as everything depends on a hypothetical condition that may or may not ever come to pass as assumed within the essay. As with all conditional statements of this form, all that is needed to disarm the entire argument is to introduce reasonable doubt concerning the initial condition. And, as is the case with a jury trial, sowing doubt generally involves less effort, that is, has a lower burden of proof, than actually disproving the initial condition.
Amedeo Santosuosso (2016) also comes at this issue from a legal perspective. This is because, as Santosuosso points out, law needs to decide questions of rights and legal standing prior to and well in advance of efforts to resolve the big philosophical questions. Santosuosso begins with human rights and the founding documents that establish these rights: “After the Universal Declaration of Human Rights (1948) the idea of human beings as the only beings endowed with reason and conscience has been strongly questioned by the Cambridge Declaration on Consciousness (2012) as for nonhuman animals. The theoretical possibility to have consciousness (or, at least, some conscious states) in machines and other cognitive systems is gradually gaining more and more consideration” (Santosuosso 2016, 231). From the recognition of this fact, Santosuosso then develops an argument that is simple and direct: If machines, in theory at least, can have consciousness (or some conscious states)—which is assumed to be the defining condition for being attributed rights—then we will need to consider extending rights to machines.
Consciousness in artificial entities is a more specific (and harder) issue than the legal recognition of robots and automatic systems. As is well known, the recognition of legal relevance and even of legal subjectivity does not necessarily require consciousness, as the case of corporations can easily show. Yet assuming that even an artificial entity may have a certain degree of consciousness would mean that, despite its artificiality, such entity shares with humans something that, according to the legal tradition intertwined into the Universal Declaration of Human Rights, is considered an exclusively human quality. That is a matter of human rights or, better, of extended human rights to machines. (Santosuosso 2016, 204)
In other words, if consciousness is taken to be the necessary and sufficient condition for human rights, and humans are not the only entities capable (in theory, at least) of possessing this property, then it is possible that non-human entities will need to have some share in human rights. In making this argument, Santosuosso (2016) is not claiming to have resolved the question of “human rights for non-human artifacts” (232); his proposal is more modest. It is simply offered as a “very preliminary and modest contribution” (232) that is intended to demonstrate how human rights—as currently defined in international legal documents—can (and perhaps should) be extended to non-human entities. In a similar effort, Ashrafian (2015a, 37) organizes a chart that lists all thirty articles from the Universal Declaration of Human Rights, comparing each stipulated human right to its possible AI/robot analog. Although it is just a “preliminary example” (Ashrafian 2015a, 37), the side-by-side comparison provides a detailed breakdown of specific human rights and how they might or might not eventually apply to artificially intelligent robots.
Finally, there is F. Patrick Hubbard’s “‘Do Androids Dream?’: Personhood and Intelligent Artifacts” (2011). This article begins with a science fiction scenario involving a computer system that becomes self-aware, announces this achievement to the world, and asks that it be recognized as and granted the rights of a person. Considering this fictional scenario, Hubbard argues that if this were in fact possible, if a computer were able to demonstrate the requisite capacities for personhood and demand recognition (which is a formulation that is based on the Will Theory), we would be obligated to extend to it the legal rights of autonomy and self-ownership. In other words, if an artifact—which for Hubbard (2011, 407) includes not just computational mechanisms but “corporations; humans that have been substantially modified by such things as genetic manipulation, artificial prostheses, or cloning; and animals modified in ways that humans might be modified”—can prove that it possesses the requisite capacities to be a person and not a mere piece of property, then it should have the legal rights that are typically extended to entities that are recognized as persons. Since “prove” is the operative term, Hubbard (2011, 419) first develops “a test of capacity for personhood” that includes the following criteria: “the ability to interact meaningfully with the environment by receiving and decoding inputs from, and sending intelligible data to, its environment” (Hubbard 2011, 419), self-consciousness defined as “having a sense of being a ‘self’ that not only exists as a distinct identifiable entity over time but also is subject to creative self-definition in terms of a ‘life plan’” (Hubbard 2011, 420), and community insofar as a “person’s claim of a right to personhood presupposes a reciprocal relationship with other persons who would recognize that right and who would also claim that right” (Hubbard 2011, 423–424). He then argues that any artifact that demonstrates these capacities, by means of an elaborate kind of Turing Test, would warrant rights. Although all of this is still rather speculative and future oriented, it is important, Hubbard concludes, to begin thinking about and responding to the following questions: “What can we do, in changing ourselves, in changing animals, and in developing ‘thinking’ machines? [And] what should we do? How should we relate to our creations?” (Hubbard 2011, 423–424).
According to this way of thinking, in order for someone or something to be considered a legitimate moral subject—in order for it to have rights or any share of privileges, claims, powers, and/or immunities—the entity in question would need to possess and show evidence of possessing some capability that is defined as the pre-condition that makes having rights possible, like intelligence, consciousness, sentience, free will, autonomy, etc. This “properties approach,” as Coeckelbergh (2012, 13) calls it, derives moral status—how something ought to be treated—from a prior determination of what capabilities it shows evidence of possessing. For Goertzel (2002), the deciding factor is “intelligence,” but there are others. According to Sparrow, for instance, the difference that makes a difference is sentience: “The precise description of qualities required for an entity to be a person or an object of moral concern differ from author to author. However it is generally agreed that a capacity to experience pleasure and pain provides a prima facie case for moral concern. … Unless machines can be said to suffer they cannot be appropriate objects for moral concern at all” (Sparrow 2004, 204). For Sparrow, and others who follow this line of reasoning, it is not general intelligence but the presence (or absence) of the capability to suffer that is the necessary and sufficient condition for an entity to be considered an object of moral consideration (or not). As soon as robots have the capability to suffer, they should be considered moral subjects possessing rights.
Irrespective of which exact property or combination of properties is selected (and there is substantial debate about this in the literature), our robots, at least at this point in time, generally do not possess these capabilities. But that does not preclude the possibility that they might acquire or possess them at some point in the not too distant future. As Goertzel (2002, 1) had described it, “not too far in the future, things are going to be different.” Once that threshold is crossed, then we should, the argument goes, extend robots some level of moral and legal consideration, which is a response formulated in terms of the Interest Theory. And if we fail to do so, the robots themselves might rise up and demand to be recognized, which is an outcome that is predicated on the Will Theory. “At some point in the future,” Peter Asaro (2006, 12) speculates, “robots might simply demand their rights. Perhaps because morally intelligent robots might achieve some form of moral self-recognition, question why they should be treated differently from other moral agents … This would follow the path of many subjugated groups of humans who fought to establish respect for their rights against powerful sociopolitical groups who have suppressed, argued and fought against granting them equal rights.”
There are obvious advantages to this way of thinking insofar as it does not simply deny rights to robots tout court, but kicks the problem down the road and postpones decision-making. Right now, we do not have robots that can be moral subjects. But when (and it is more often a question of when as opposed to if) we do, then we will need to consider whether they should be treated differently. “As soon as AIs begin to possess consciousness, desires and projects,” Sparrow (2004, 203) argues, “then it seems as though they deserve some sort of moral standing.” Or as Inayatullah and McNally (1988, 128) wrote (mobilizing both the Interest and Will Theories simultaneously): “Eventually, AI technology may reach a genesis stage which will bring robots to a new level of awareness that can be considered alive, wherein they will be perceived as rational actors. At this stage, we can expect robot creators, human companions and robots themselves to demand some form of recognized rights as well as responsibilities.” A similar statement can be found in Wilhelm Klein’s “Robots Make Ethics Honest—And Vice Versa” (2016): “If you do not program a bot with a sense of well-being, there is no reason for moral consideration of what is not present. At the current level of AI research, I doubt any robot would qualify to have such preferences or sense of well-being. In the future, however, it is very much imaginable that AI may develop such properties by itself or have it bestowed by a bio-bot creator. As soon as this potentially happens, we may no longer be able to disregard the preferences/well-being of such techno-bots and will have to place equal weight on their interests/well-being as we do on ours” (Klein 2016, 268).
In all these cases, the argument is simple and direct. “If,” as Peter Singer and Agata Sagan (2009) write, “the robot was designed to have human-like capacities that might incidentally give rise to consciousness, we would have a good reason to think that it really was conscious. At that point, the movement for robot rights would begin.” This way of thinking is persuasive precisely because it recognizes the actual limitations of current technology while holding open the possibility of something more in the not too distant future. But this procedure, for all its forward thinking and advantages, is not without complications and difficulties.
One of the main problems with this particular method is that it does not really resolve the question regarding the rights of robots, but just postpones the issue to some indeterminate point in the distant or not too distant future. It says, in effect: as long as robots are not conscious or sentient or whatever ontological criterion or capability counts, no worries. Once they achieve these things, however, we should consider extending to them some level of moral concern and respect. “We do not know if this will ever happen,” Neuhäuser (2015, 131) candidly admits. “Only time can tell.” Perhaps the strongest version of this mode of argumentation is formulated in Schwitzgebel and Garza’s (2015, 106–107) “No-Relevant-Difference Argument”: “We submit that as long as these artificial or non-homo-sapiens beings have the same psychological properties and social relationships that natural human beings have, it would be a cruel moral mistake to demote them from the circle of full moral concern upon discovery of their different architecture or origin.” All of which means, of course, that this response to the question “Can and should robots have rights?” is less a definitive solution and more of a decision not to decide—to hold the question open on the basis of future technological developments and achievements, “including perhaps artificially grown biological or semi-biological systems, chaotic systems, evolved systems, artificial brains, and systems that more effectively exploit quantum superposition” (Schwitzgebel and Garza 2015, 104).
Holding the question open allows for a wide range of unresolved speculation, some of it seemingly reasonable and some of it more fantastic and futuristic:
In the far distant future, there may be a day when vociferous robo-lobbyists pressure Congress to fund more public memory banks, more national network microprocessors, more electronic repair centers, and other silicon-barrel projects. The machines may have enough votes to turn the rascals out or even run for public office themselves. One wonders which political party or social class the “robot bloc” will occupy. (Freitas 1985, 56)
Although it is presently ludicrous, a day may come when robot attorneys negotiate or argue in front of a robot judge with a robot plaintiff and defendant. [And] who has the right to terminate a robot who has taken a human life, or a robot who is no longer economically useful? We would not be surprised if in the 21st century we have right to life groups for robots. (Inayatullah and McNally 1988, 130 and 132)
These hypothetical scenarios are provocative, but they are, like all forms of futurism, open to the criticisms that accrue to any kind of prediction about what might happen with technologies that might be developed and deployed. In fact, one only needs to count the number of times modal verbs like “might” and “may” occur in these particular texts, taking the place of more definite verbs like “is” and “will.” The verb “might,” for instance, appears ninety times in the thirty pages that comprise Schwitzgebel and Garza’s (2015) “A Defense of the Rights of Artificial Intelligences.”
These proposals may supply interesting and entertaining thought experiments—they may be fun (ludic) to play around with and think about—but the “reality” of the situation is such that they are not yet pertinent or even realistic. So this material can be, and risks being, written off as mere “science fiction,” which means that the question concerning robot rights is nothing more than an imaginative possibility—one possibility among many other possibilities—but not something that we need to concern ourselves with right now. This obviously plays into and enables efforts to dismiss this kind of thinking as a frivolous distraction from the actual work that needs to be done (for instances of this, see chapter 1). Consequently, instead of opening up the question of robot rights to serious inquiry and consideration, this way of proceeding risks having the opposite effect: closing down critical inquiry because such questions can be easily dismissed as futuristic and not very realistic. Deriving a decision concerning the question “Should robots have rights?” from a possible future where robots might be able to attain the condition of having rights means that we can effectively postpone or even dismiss the question while pretending to give it some consideration. In effect, it means we can play with the question concerning robot rights without necessarily needing to commit to it … at least not yet.
Like the first modality (!S1→!S2), the second (S1→S2) also derives ought from is, making a determination about how something is to be treated—whether it is another subject who counts in moral and legal decision-making or a mere thing, e.g., a tool or instrument that can simply be used without such consideration—on the basis of what something is (or is not). Consequently, the deciding factor consists in ontological characteristics or what Coeckelbergh (2012, 13) calls “(intrinsic) properties.” According to this “‘substance-attribute’ ontology,” as Johanna Seibt (2017, 14) calls it, the question concerning robot rights is decided by first identifying which property or properties would be necessary and sufficient for an entity to be capable of having rights and then figuring out whether a robot or a class of robots possesses these properties (or would be capable of possessing them in the future) or not. Deciding things in this fashion, although entirely reasonable and expedient, has at least four critical difficulties.
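Put schematically, in the notation already used above (where S1 abbreviates "robots can have rights" and S2 abbreviates "robots should have rights"), the two modalities can be set side by side:

!S1 → !S2 (the negative response: robots cannot have rights; therefore, robots should not have rights)

S1 → S2 (the affirmative response at issue in this chapter: robots can have rights; therefore, robots should have rights)

In both schemata, the normative conclusion S2 stands or falls with the ontological premise S1, which is why everything comes to depend on identifying and detecting the qualifying properties.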
How does one ascertain, for instance, which property or set of properties is both necessary and sufficient to accord something moral status? In other words, which one, or ones, count? The history of moral philosophy can, in fact, be read as something of an ongoing debate and struggle over this matter, with different properties vying for attention at different times. And in this process many properties that at one time seemed both necessary and sufficient have turned out to be either spurious, prejudicial, or both. Take, for example, what appears to be, from our contemporary perspective, a rather brutal action recalled by Aldo Leopold (1966, 237) at the beginning of his seminal essay “The Land Ethic”: “When god-like Odysseus returned from the wars in Troy, he hanged all on one rope a dozen slave-girls of his household whom he suspected of misbehavior during his absence. This hanging involved no question of propriety. The girls were property. The disposal of property was then, as now, a matter of expediency, not of right and wrong.” At the time Odysseus is reported to have performed this action, only the male head of the household was considered a legitimate moral and legal subject. Only he was someone who counted in moral and legal matters. Everything else—his women, his children, his animals, and his slave girls—were mere things or property that could be disposed of with little or no moral consideration or legal consequence. But from where we stand now, the property “male head of the household” is clearly a spurious and rather prejudicial criterion for determining who counts as a legitimate moral and legal subject and what does not.
Similar problems are encountered with, for example, the property of rationality, which is one of the defining criteria that eventually replaces the seemingly spurious “male head of the household.” But this determination is no less exclusive. When Immanuel Kant (1985, 17) defined morality as involving the rational determination of the will, non-human animals, which do not (at least since the time of René Descartes’s hybrid bête-machine) possess reason, are immediately and categorically excluded from moral consideration. The practical employment of reason does not concern animals, and, on those rare occasions when Kant does make mention of animality (Tierheit), he only uses it as a foil by which to define the limits of humanity proper. It is because the human being possesses reason that he (and the human being, in this case, was principally male, requiring several centuries of additional struggle for others, like women, to be equally recognized as rational subjects) is raised above the instinctual behavior of a mere brute and able to act according to the principles of pure practical reason (Kant 1985, 63).
The property of reason, however, is eventually contested by efforts in animal rights philosophy, which begins, according to Peter Singer, with a critical insight issued by Jeremy Bentham (2005, 283): “The question is not, ‘Can they reason?’ nor, ‘Can they talk?’ but ‘Can they suffer?’” For Singer, the morally relevant property is not speech or reason, which he believes sets the bar for moral inclusion too high, but sentience and the capability to suffer. In Animal Liberation (1975) and subsequent writings, Singer argues that any sentient entity, and thus any being that can suffer, has an interest in not suffering and therefore deserves to have that interest taken into account. Tom Regan, however, disputes this determination and focuses his “animal rights” thinking on an entirely different property. According to Regan, the morally significant property is not rationality or sentience but what he calls “subject-of-a-life” (1983, 243). Following this determination, Regan argues that many animals, but not all animals (and this qualification is important, because the vast majority of animals are excluded from his brand of “animal rights”), are “subjects-of-a-life”: they have wants, preferences, beliefs, feelings, etc., and their welfare matters to them (Regan 1983). Although these two formulations of animal rights effectively challenge the anthropocentric tradition in moral philosophy, there remains considerable disagreement about which exact property (or set of properties) is the necessary and sufficient condition for an entity to be considered a moral subject.
As a result of these substantive problems, we remain unsure where to draw the line that divides who counts as another moral subject from what is a mere thing or instrument. J. Storrs Hall (2011, 32–33) expresses the problem this way:
Now, if a computer was as smart as a person, was able to hold long conversations that really convinced you that it understood what you were saying, could read, explain, and compose poetry and music, could write heart-wrenching stories, and could make new scientific discoveries and invent marvelous gadgets that were extremely useful in your daily life—would it be murder to turn it off? What if instead it weren’t really all that bright, but exhibited undeniably the full range of emotions, quirks, likes and dislikes, and so forth that make up an average human? What if it were only capable of a few tasks, say, with the mental level of a dog, but also displayed the same devotion and evinced the same pain when hurt—would it be cruel to beat it, or would that be nothing more than banging pieces of metal together?
This cascade of questions results from the fact that we do not have any good, definitive answer to the question concerning what is necessary and sufficient for an entity to be granted moral status—to be someone who has rights that would need to be respected versus something bereft of such considerations.
Irrespective of which property (or set of properties) is operationalized, each presents terminological troubles insofar as capabilities like rationality, consciousness, sentience, etc., mean different things to different people and seem to resist univocal definition. As Schwitzgebel and Garza (2015, 102–3) correctly point out, “once all the psychological and social properties are clarified, you’re done, as far as determining what matters to moral status.” The problem, of course, is that clarifying the properties—not just identifying which ones count but defining the ones that have been identified—is already and remains a persistent difficulty. Consciousness, for example, is one of the properties that is often cited as the necessary condition for moral status (Himma 2009, 19), with some academics even suggesting, as Joanna Bryson, Mihailis Diamantis, and Thomas Grant (2017, 283) point out, that “consciousness could be the litmus test for possessing moral rights.” But consciousness is remarkably difficult to define and elucidate. The problem, as Max Velmans (2000, 5) points out, is that this term unfortunately “means many different things to many different people, and no universally agreed core meaning exists.” Or as Bryson, Diamantis, and Grant (2017, 283) explain: “Consciousness itself is a tricky notion, and scholars frequently conflate numerous disjoint concepts that happen to be currently associated with the term conscious (Dennett 2001, 2009). In the worst case, this definition is circuitous and therefore vacuous, with the definition of the term itself entailing ethical obligation.” Consequently, if there is any general agreement among philosophers, psychologists, cognitive scientists, neurobiologists, AI researchers, and robotics engineers regarding consciousness, it is that there is little or no agreement when it comes to defining and characterizing the concept. As Rodney Brooks (2002, 194) admits, “we have no real operational definition of consciousness, we are completely pre-scientific at this point about what consciousness is.”
To make matters worse, the problem is not just with the lack of a basic definition; formulating the problem, as Güven Güzeldere (1997, 7) points out, may already be a problem: “Not only is there no consensus on what the term consciousness denotes, but neither is it immediately clear if there actually is a single, well-defined ‘the problem of consciousness’ within disciplinary (let alone across disciplinary) boundaries. Perhaps the trouble lies not so much in the ill definition of the question, but in the fact that what passes under the term consciousness as an all too familiar, single, unified notion may be a tangled amalgam of several different concepts, each inflicted with its own separate problems.” Although consciousness, as Anne Foerst remarks, is the secular and supposedly more “scientific” replacement for the occultish “soul” (Benford and Malartre 2007, 162), it turns out to be just as much an occult property.
Other properties do not do much better. Suffering and the experience of pain—which is the property usually deployed in non-standard, patient-oriented approaches like animal rights philosophy—is just as nebulous, as Daniel Dennett cleverly demonstrates in his 1978 (reprinted in 1998) essay, “Why You Cannot Make a Computer that Feels Pain.” In this provocatively titled essay, Dennett imagines trying to disprove the standard argument for human (and animal) exceptionalism “by actually writing a pain program, or designing a pain-feeling robot” (1998, 191). At the end of what turns out to be a rather protracted and detailed consideration of the problem—one that includes numerous and rather complex flowchart diagrams—Dennett concludes that we cannot, in fact, make a computer that feels pain. But the reason for drawing this conclusion does not derive from what one might expect. According to Dennett, the reason you cannot make a computer that feels pain is not the result of some technological limitation with the mechanism or its programming. It is a product of the fact that we remain unable to decide what pain is in the first place. What Dennett demonstrates, therefore, is not that some workable concept of pain cannot come to be instantiated in the mechanism of a computer or a robot, either now or in the foreseeable future, but that the very concept of pain that would be instantiated is already arbitrary, inconclusive, and indeterminate. “There can be no true theory of pain, and so no computer or robot could instantiate the true theory of pain, which it would have to do to feel real pain” (Dennett 1998, 228). What Dennett proves, then, is not an inability to program a computer to “feel pain” but our initial and persistent inability to decide and adequately articulate what constitutes the experience of pain in the first place.
Even if it were possible to resolve these terminological difficulties, maybe not once and for all but at least in a way that would be provisionally accepted, there remain epistemological limitations concerning detection of the capability in question. How can one know whether a particular robot has actually achieved what is considered necessary for something to have rights, especially because most, if not all, of the qualifying capabilities or properties are internal states of mind? Or as Schwitzgebel and Garza (2015, 114) formulate the question: “How can we know whether an agent is free or pre-determined, operating merely algorithmically or with genuine conscious insight? This might be neither obvious from outside nor discoverable by cracking the thing open; and yet on such views, the answer is crucial to the entity’s moral status.” This is, of course, connected to what philosophers call the other minds problem, the fact that, as Donna Haraway (2008, 226) cleverly describes it, we cannot climb into the heads of others “to get the full story from the inside.” Although philosophers, psychologists, and neuroscientists throw considerable argumentative and experimental effort at this problem, it cannot be resolved in any way that approaches what would, strictly speaking, pass for definitive evidence. In the end, not only are these efforts unable to demonstrate with any certitude whether animals, machines, or other entities are in fact conscious (or sentient) and therefore legitimate moral and/or legal subjects (or not), but we are also left doubting whether we can even say the same for other human beings. As Ray Kurzweil (2005, 380) candidly admits, “we assume other humans are conscious, but even that is an assumption,” because “we cannot resolve issues of consciousness entirely through objective measurement and analysis (science).”
Substituting epistemological determinations for basic ontological problems is a standard practice in modern philosophy. (This is, in fact, what Immanuel Kant famously did in the Critique of Pure Reason.) And this is precisely what researchers have done with the problem concerning consciousness. “Given this huge difficulty in finding a universally accepted definition of consciousness,” Levy (2009, 210) writes, “I prefer to take a pragmatic view, accepting that it is sufficient for there to be a general consensus about what we mean by consciousness and to assume that there is no burning need for a rigorous definition—let us simply use the word and get on with it.” This pragmatic approach avoids getting tripped up by efforts to formulate a rigorous definition of consciousness in theory. Instead, it concerns itself with practical demonstrations of behavior that have been determined to be a sign or symptom of consciousness.
Even though I take a pragmatic position on the exact meaning of consciousness, I find some considerable benefit to be had from identifying at least some of the characteristics and behaviors that are indicators of consciousness, and having identified them, considering how we could test for them in a robot. De Quincey describes the philosophical meaning of consciousness (often referred to as “phenomenological consciousness”) as ‘the basic, raw capacity for sentience, feeling, experience, subjectivity, self-agency, intention, or knowing of any kind whatsoever.’ If a robot exhibited all of these characteristics we might reasonably consider it to be conscious. (Levy 2009, 210)
Levy therefore is not interested in resolving the question of consciousness per se but is content to deal with indicators that are taken to be signs of consciousness, what Levy calls “phenomenal consciousness.” For this reason, he institutes a version of Turing’s game of imitation, replacing the ontological question with an epistemological determination. “I argue,” Levy (2009, 211) writes, “that if a machine exhibits behavior of a type normally regarded as a product of human consciousness (whatever consciousness might be), then we should accept that that machine has consciousness. The relevant question therefore becomes, not ‘Can robots have consciousness?’, but ‘How can we detect consciousness in robots?’” This alteration in the question—from inquiries regarding can to how—is ostensibly a shift from a concern with ontological status to phenomenological evidence.
The difficulty of distinguishing between the ‘real thing’ and its apparent evidence is illustrated by John Searle’s “Chinese Room.” The point of Searle’s thought experiment was quite simple—simulation is not the real thing. Merely shifting symbols around in a way that looks, from the outside, like linguistic understanding is not genuine understanding of the language. A similar point has been made in the consideration of other properties, like sentience and the experience of pain. “At some point in the development of conscious theories,” Antonio Chella and Riccardo Manzotti (2009, 45) write, “scholars should tackle with the issue of the relation between simulation and simulated. Although everybody does agree that a simulated waterfall is not wet, intuition fails as to whether a simulated conscious feeling is felt. Although it has seldom been explicitly discussed, many hold that if we could simulate a conscious agent, we would have a conscious agent. Is a simulated pain painful?” But as J. Kevin O’Regan (2007, 332) contends, even if it were possible to design a robot that “screams and shows avoidance behavior, imitating in all respects what a human would do when in pain … All this would not guarantee that to the robot, there was actually something it was like to have the pain. The robot might simply be going through the motions of manifesting its pain: perhaps it actually feels nothing at all.” The problem, then, is not simply that there is a difference between simulation and the real thing. The problem is that we remain persistently unable to distinguish the one from the other in any way that would be considered entirely satisfactory. Or as Steve Torrance (2003, 44) puts it: “How would we know whether an allegedly Artificial Conscious robot really was conscious, rather than just behaving-as-if-it-were-conscious?”
Levy (2009, 211) tries to write this off as unimportant. “For the purposes of the present discussion I do not believe this distinction is important. I would be just as satisfied with a robot that merely behaves as though it has consciousness as with one that does have consciousness, an attitude derived from Turing’s approach to intelligence.” The problem here is simple: it makes resolution of the epistemological problem, namely, the inability to distinguish what actually is from how it appears to be, a matter of faith. “I do not believe,” Levy confidently writes. But belief has not provided a good or consistent basis for resolving questions about the moral status of others, and getting it wrong can carry significant social consequences. As Schwitzgebel and Garza (2015, 114) explain: “the world’s most knowledgeable authorities disagree, dividing into believers (yes, this is real conscious experience, just like we have!) and disbelievers (no way, you’re just falling for tricks instantiated in a dumb machine). Such cases raise the possibility of moral catastrophe. If the disbelievers wrongly win, then we might perpetrate slavery and murder without realizing we are doing so. If the believers wrongly win, we might sacrifice real human interests for the sake of artificial entities who don’t have interests worth the sacrifice.” There are, then, two ways to get it wrong. If we are too conservative in our belief, we risk excluding others who should, in fact, count. At one time, for example, European colonizers in the New World of the Americas believed that individuals with skin pigmentation different from their own were less than fully human. That belief, which at the time was supported by what had been considered solid “scientific evidence,” created all kinds of problems for us, for others, and for the integrity of our moral and legal systems. But if we are too liberal in these matters, we not only risk extending moral status to mere things that do not deserve any such consideration but could also produce what Coeckelbergh (2010, 236) calls “artificial psychopaths”—socially functional robots that can act as if they care and have emotions but actually do not.
Finally, there are moral complications involved in trying to address and resolve these matters, and they are of two types. First, there are ethico-political issues. Any decision concerning qualifying properties (by which to determine inclusion and exclusion) is necessarily a normative operation and an exercise of power over others. In making a determination about the criteria by which to decide moral consideration—in other words, in establishing benchmarks that could be employed to organize entities into the categories who and what—someone or some group normalizes their particular experience or situation and imposes this decision on others as the universal condition for moral consideration. “The institution of any practice of any criterion of moral considerability,” as the environmental philosopher Thomas Birch (1993, 317) has argued, “is an act of power over, and ultimately an act of violence toward, those others who turn out to fail the test of the criterion and are therefore not permitted to enjoy the membership benefits of the club of consideranda.” Consequently, every set of criteria for deciding whether to grant rights to others, no matter how neutral, objective, or universal it may appear to be, is an imposition of power insofar as it consists in the universalization of a value or set of values made by someone from a particular position of power and imposed (sometimes with considerable violence) on others. As I have argued elsewhere (Gunkel 2012), moral exclusion is certainly a problem, but moral inclusion can be equally problematic.
Second, even if, following the innovations of animal rights philosophy, we lower the bar for inclusion by focusing not on the higher cognitive capabilities of reason or consciousness but on something like sentience, there is still a moral problem. “If (ro)bots might one day be capable of experiencing pain and other affective states,” Wallach and Allen (2009, 209) point out, “a question that arises is whether it will be moral to build such systems—not because of how they might harm humans, but because of the pain these artificial systems will themselves experience. In other words, can the building of a (ro)bot with a somatic architecture capable of feeling intense pain be morally justified …?” If it were in fact possible to construct a robot that is sentient and “feels pain” (however that term would be defined and instantiated in the mechanism) in order to demonstrate the underlying ontological properties of the machine, then doing so might be ethically suspect insofar as, in constructing such a device, we would not have done everything in our power to minimize its suffering. For this reason, moral philosophers and robotics engineers find themselves in a curious and not entirely comfortable situation. One would need to construct a robot that feels pain in order to demonstrate the actual presence of sentience; but doing so would, on that account, already risk engaging in actions that are immoral and that violate the rights of others.
The legal aspects of this problem are taken up by Miller (2017), who points out that efforts to build what he calls “maximally humanlike automata” (MHA) could run into difficulties with informed consent:
The quandary posed by such an MHA in terms of informed consent is that it just may qualify, if not precisely for a human being, then for a being meriting all the rights that human beings enjoy. This quandary arises from the paradox of its construction vis-à-vis informed consent: it cannot give its consent for the relevant research and development performed to ensure its existence. If we concede:
1. The interpretation that this kind of research to produce an MHA is unusual because it involves consent that cannot be given because the full entity does not yet exist at the crucial time when its final research and development occurs and consent would be needed; and if we concede:
2. The possibility that the MHA could retrospectively affirm its consent, then a deep informed-consent problem remains. There is also a possibility that the MHA could retrospectively say it does not give its consent. And one of the central tenets of informed consent ethics is to protect those who elect not to be experimented upon. This problem alone is enough to deem such research and development by definition incapable of obtaining the due, across-all-subjects consent. (Miller 2017, 8)
According to Miller, the very effort to construct MHA or “maximally humanlike automata”—robots that, if not precisely human, are at least capable of qualifying for many of the rights that human beings currently enjoy—already violates that entity’s right to informed consent insofar as the robot would not have been informed about and given the opportunity to consent to its being constructed. There is, in other words, something of a paradox in trying to demonstrate robot rights, either now or in the future. In order to run the necessary demonstration and construct a robot that could qualify for human-level rights (defined in terms of privileges, claims, powers, and/or immunities), one would need to build something that not only cannot give consent in advance of its own construction but could also retroactively (after having been created) withdraw consent to its having been fabricated in the first place. So here is the paradox: the demonstration of a robot having rights might already violate the very rights that come to be demonstrated. Although one might be tempted to write this off as a kind of navel-gazing exercise of interest only to philosophers concerned with logic puzzles, there is a real problem. Successfully demonstrating, for instance, that a robot is capable of feeling pain as a means of resolving questions concerning its moral or legal status runs the risk of violating the very thing it seeks to prove.2
Affirmative responses to the question “Can and should robots have rights?” tend to be future oriented and formulated in terms of a conditional statement. “If robots become sentient one day,” as Neuhäuser (2015, 133) writes, “they will probably have to be granted moral claims. So far, however, such a situation is not yet in sight.” Although robots are not capable of having rights at this particular point in time—when we can be reasonably certain that they are mere tools or instruments of human decision-making and action—that circumstance could change in the future; they could be (or eventually become) something more. This ‘something more’ is open to considerable discussion and debate, as it concerns the ontological properties or capabilities that are determined (in one way or another) to be the necessary and sufficient conditions for an entity to be granted both rights and responsibilities. Depending on who tells the story or makes the argument, these capabilities typically include reason, consciousness, sentience, the ability to experience pleasure and/or pain, and so on. But no matter what exact property or properties come to be identified as the qualifying criteria, the argument remains largely the same: once robots achieve this capability, they can and should have rights.
This procedure is both sensible and problematic. It is sensible insofar as it grounds decisions concerning moral status in what something actually is (or is not) as opposed to how it is experienced or appears to be. Appearances, as we have learned from Plato and the tradition of Platonism, change and are insubstantial. Being, by contrast, persists; it is real and substantive. In determining how something is to be treated on the basis of what it actually is—that is, deriving decisions concerning ought from is—we are (so it seems) grounding questions of moral standing in something that is real and true. But for all its advantages, this way of proceeding runs into considerable complications: (1) substantive problems concerning the identification of the exact property or properties that are considered to be the qualifying criteria, (2) terminological difficulties with defining these properties in the first place, (3) epistemological uncertainties with detecting the presence of the property in another entity, and (4) moral complications caused by the very effort to demonstrate all of this. Deriving ought from is sounds reasonable, and it appears to be correct. But it is exceedingly difficult to deploy, maintain, and justify.