2 
!S1→!S2: Robots Cannot Have Rights; Robots Should Not Have Rights

With the first modality, one infers negation of S2 from the negation of S1. Robots are incapable of having rights (or robots are not the kind of entity that is capable of being a holder of privileges, claims, powers, and/or immunities); therefore robots should not have rights. The assertion sounds intuitive and correct, precisely because it is based on what appears to be irrefutable ontological facts: robots are just technological artifacts that we design, manufacture, and use. A robot, no matter how sophisticated its design and operations, is like any other artifact (e.g., a toaster, a television, a refrigerator, an automobile, etc.); it does not have any particular claim to independent moral or legal status, and we are not and should not be obligated to it for any reason whatsoever. As Johannes Marx and Christine Tiefensee (2015, 83) accurately characterize it:

Robots are nothing more than machines, or tools, that were designed to fulfill a specific function. These machines have no interests or desires; they do not make choices or pursue life plans; they do not interpret, interact with and learn about the world. Rather than engaging in autonomous decision-making on the basis of self-developed objectives and interpretations of their surroundings, all they do is execute a preinstalled programme. In short, robots are inanimate automatons, not autonomous agents. As such, they are not even the kind of object which could have a moral status.
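Using the shorthand of the chapter title, where S1 stands for “robots can have rights” and S2 for “robots should have rights,” the structure of this first modality can be set out schematically as follows (a restatement of the inference as described above, not a formula drawn from Marx and Tiefensee or any other source):

$$\lnot S1 \rightarrow \lnot S2$$

Robots are judged to be incapable of having rights (¬S1), and from this it is concluded that they should not have rights (¬S2); the normative claim is presented as following directly from the ontological one.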

2.1 Default Understanding

This seemingly correct way of thinking is structured and informed by the default answer that is typically provided for the question concerning technology. “We ask the question concerning technology,” Martin Heidegger (1977, 4–5) writes, “when we ask what it is. Everyone knows the two statements that answer our question. One says: Technology is a means to an end. The other says: Technology is a human activity. The two definitions of technology belong together. For to posit ends and procure and utilize the means to them is a human activity. The manufacture and utilization of equipment, tools, and machines, the manufactured and used things themselves, and the needs and ends that they serve, all belong to what technology is.” According to Heidegger’s analysis, the presumed role and function of any kind of technology—whether it be a simple hand tool, a kitchen appliance like a toaster, a jet airliner, or a robot—is that it is a means employed by human users for specific ends. Heidegger terms this particular characterization of technology “the instrumental definition,” and he indicates that it forms what is considered to be the “correct” understanding of any kind of technological contrivance.

As Andrew Feenberg (1991, 5) summarizes it: “The instrumentalist theory offers the most widely accepted view of technology. It is based on the common sense idea that technologies are ‘tools’ standing ready to serve the purposes of users.” And because a tool or instrument “is deemed ‘neutral,’ without valuative content of its own” (Feenberg 1991, 5), a technological artifact is evaluated not in and of itself, but on the basis of the particular employments that have been decided by its human designer or user. Consequently, technology is only a means to an end; it is not and does not have an end in its own right. “Technical devices,” as Jean-François Lyotard (1984, 33) writes, “originated as prosthetic aids for the human organs or as physiological systems whose function it is to receive data or condition the context. They follow a principle, and it is the principle of optimal performance: maximizing output (the information or modification obtained) and minimizing input (the energy expended in the process). Technology is therefore a game pertaining not to the true, the just, or the beautiful, etc., but to efficiency: a technical ‘move’ is ‘good’ when it does better and/or expends less energy than another.” Lyotard’s explanation begins by affirming the traditional understanding of technology as an instrument, prosthesis, or extension of human faculties. Given this “fact,” which is stated as if it were something that is beyond question, he proceeds to provide an explanation of the proper place of the technological apparatus in epistemology, ethics, and aesthetics. According to his analysis, a technological device, whether it be a corkscrew, a toaster, or a computer, does not in and of itself participate in the big questions of truth, justice, or beauty. Technology is simply and indisputably about efficiency. A particular technological innovation is considered “good,” if and only if it proves to be a more effective means to accomplishing a desired end. And this decision is echoed by other critical thinkers of technology: “The moral value of purely mechanical objects,” as David F. Channell (1991, 138) explains, “is determined by factors that are external to them—in effect, by the usefulness to human beings.” Or as John Searle (1997, 190) explains concerning the computer: “I believe that the philosophical importance of computers, as is typical with any new technology, is grossly exaggerated. The computer is a useful tool, nothing more nor less.”

The instrumental theory not only sounds reasonable; it is obviously useful. It is, one might say, instrumental for making sense of things in an age of increasingly complex technological systems and devices. And the theory applies not only to simple devices like corkscrews, toothbrushes, and garden hoses, but also to sophisticated technologies, like computers, artificial intelligence, and robots. “Computer systems,” Deborah Johnson (2006, 197) asserts, “are produced, distributed, and used by people engaged in social practices and meaningful pursuits. This is as true of current computer systems as it will be of future computer systems. No matter how independently, automatic, and interactive computer systems of the future behave, they will be the products (direct or indirect) of human behavior, human social institutions, and human decision.” According to this way of thinking, technologies, no matter how sophisticated, interactive, or seemingly social they appear to be, are just tools; nothing more. They are not—not now, not ever—capable of becoming moral subjects in their own right, and we should not treat them as such. It is precisely for this reason that, as J. Storrs Hall (2001, 2) points out, “we have never considered ourselves to have moral duties to our machines,” and that, as David Levy (2005, 393) concludes, the very “notion of robots having rights is unthinkable.”

2.2 Literally Instrumental

Because the instrumental theory possesses a kind of default status, different forms and configurations of it have been articulated and mobilized in the literature on robots and robotics. This occurs not only in the serious efforts of engineering and scientific research but also in works of fiction. Čapek’s R.U.R. has been instrumental in this regard, introducing a template for many of the contemporary debates and discussions about these matters. The play’s opening dramatic conflict pits the knowledge and experience of the technical staff—the engineers and scientists who manufacture the robotic appliances—against a seemingly naïve outsider, Helena Glory, who comes to the manufacturing facility as a representative of the League of Humanity, a human rights organization that seeks to liberate the robots.

Fabry (Technical Director): 

Can I ask you, what actually is it that your League … League of Humanity stands for?

Helena: 

It’s meant to … actually it’s meant to protect the robots and make sure … make sure that they’re treated properly.

Fabry: 

That is not at all a bad objective. A machine should always be treated properly. In fact I agree with you completely. I never like it when things are damaged. Miss Glory, would you mind enrolling all of us as new paying members of your organization?

Helena: 

No, you do not understand. We want, what we actually want, is to set the robots free!

Hallemeier (Head of the Institute for Robot Psychology and Behavior): 

To do what?

Helena: 

They should be treated … treated the same as people.

Hallemeier: 

Aha. So you mean that they should have the vote! Do you think they should be paid a wage as well?

Helena: 

Well of course they should!

Hallemeier: 

We’ll have to see about that. And what do you think they’d do with their wages?

Helena: 

They’d buy … buy things they need … things to bring them pleasure.

Hallemeier: 

This all sounds very nice; only robots do not feel pleasure. And what are these things they’re supposed to buy? They can be fed on pineapples, straw, anything you like; it’s all the same to them, they haven’t got a sense of taste. There’s nothing they’re interested in, Miss Glory. It’s not as if anyone has ever seen a robot laugh.

Helena: 

Why … why … why don’t you make them happier?

Hallemeier: 

We couldn’t do that, they’re only robots after all. They’ve got no will of their own. No passion. No hopes. No soul. (Čapek 2009, 27–28)

In this early scene from act one of the play, the dramatic conflict concerns, on the one side, authorized experts in robotics, namely the corporation’s chief engineer and lead behavioral scientist, who presumably know the technology inside and out, and are able, therefore, to speak about its operations and capabilities with considerable insight and authority. On the other side, there is a naïve and sympathetic “girl,” who believes that the robots are people and should be treated accordingly. As a representative of the League of Humanity, Helena Glory wants to free the robots, because they appear to be just like human beings with (as she assumes) similar feelings, desires, and interests. She therefore journeys to the R.U.R. facility to liberate the robots and advocate for the protection of their rights. In this initial scene, her apparently misguided presumptions are “corrected” by the engineer and scientist who explain—based on their knowledge of and experience with the technology—that the robots, despite the fact that they might look like human persons, are nothing more than appliances or technological instruments. As such, the robots, they confidently assure Miss Glory, need nothing, desire nothing, and deserve nothing. This debate operationalizes and turns on a number of important philosophical concepts.

2.2.1 Being vs. Appearance

What the robots are takes precedence over and trumps how they appear to be. Miss Glory (as she is called throughout the play) is portrayed as being “mistaken” in her opinions, because she has drawn conclusions about what robots are from how they appear to be. The chief engineer and lead scientist at R.U.R. are able to point out and correct the mistake, because they know not only how the robots are designed to appear (i.e., to look and act in ways that simulate human appearances and behaviors), but, more importantly, what they really are (i.e., mere instruments or appliances in the service of human users and organizations, bereft of personal interests, needs, or desires). All of this is based on and formulated in terms of a profound philosophical distinction that goes all the way back to Plato’s “Allegory of the Cave,” a kind of parable (or what contemporary philosophers typically call “a thought experiment”) that is recounted by Socrates at the beginning of book VII of the Republic.

The allegory concerns an underground cavern inhabited by men who are confined to sit before a large wall on which are projected shadow images. The cave dwellers are, according to the Socratic account, chained in place from childhood and are unable to see anything other than these artificial projections. Consequently, they operate as if everything that appears before them on the wall is, in fact, real and true. They bestow names on the different shadows, devise clever methods to predict their sequence and behavior, and hand out awards to each other for demonstrated proficiency in knowing such things (Plato 1987, 515a–b). At a crucial turning point in the narrative, one of the captives is released. He is unbound by some ambiguous but external action, dragged kicking and screaming out of the cave, and forced to confront the real world that exists outside the subterranean cavern. In beholding the real things in the light of day, the former inmate comes to realize that what he had once thought to be real was just a deceptive shadow and not really what it had appeared to be. And now, armed with this new knowledge, he goes back into the cave in order to correct the mistaken opinion of his colleagues, endeavoring to inform them about what he had learned, namely, that what appears on the wall is not what is really the case.

A more contemporary and AI-specific version of this narrative has been developed by John Searle with his Chinese Room thought experiment. This intriguing and rather influential illustration, introduced in 1980 with the essay “Minds, Brains, and Programs” and elaborated in subsequent publications, was offered as an argument against the claims of strong AI—that machines are able to achieve actual thought:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese. (Searle 1999, 115)

The point of Searle’s imaginative, albeit somewhat ethnocentric, illustration (“ethnocentric” insofar as Chinese has been European philosophy’s exotic “other” at least since the time of Leibniz, see Perkins 2004) is quite simple—appearance is not the real thing. Merely shifting symbols around and exchanging one for another in a way that looks like linguistic understanding is not actually an understanding of language. Demonstrations like Searle’s Chinese Room, which differentiate between how something appears to be and what that thing actually is, not only mobilize the ancient philosophical distinction inherited from Plato, but inevitably require, as the Platonic allegory had already exhibited, some kind of privileged and immediate access to the true nature of things and not just how they appear to be. Although there remains considerable debate within philosophy about this matter, specifically the realism/antirealism debate, it is often not a difficulty in the empirical sciences and the practical efforts of engineering, where one can, as illustrated in R.U.R., simply differentiate between the authority of knowledgeable experts, who know the true nature of things, and the naïve opinions and beliefs of uninformed outsiders, who are often limited to how things appear to be.

2.2.2 Ontology Precedes Ethics

In confronting and dealing with other entities—whether other human persons, mere artifacts like a toaster, or the human-looking robots depicted in R.U.R.—one inevitably needs to distinguish between those beings who are in fact moral subjects and what remains a mere thing. As Jacques Derrida (2005, 80) explains, the difference between these two small and seemingly insignificant words—“who” and “what”—makes a big difference, precisely because it parses the world of entities into two camps: those Others who can and should have a legitimate right to privileges, claims, powers and/or immunities and mere things that are and remain objects, instruments, or artifacts.1 These decisions (which are quite literally a cut or de-caedere in the fabric of being) are often accomplished and justified on the basis of intrinsic, ontological properties. “The standard approach to the justification of moral status is,” Mark Coeckelbergh (2012, 13) explains, “to refer to one or more (intrinsic) properties of the entity in question, such as consciousness or the ability to suffer. If the entity has this property, this then warrants giving the entity a certain moral status.” In this transaction, ontology always takes precedence over ethics. What something is determines how it ought to be treated. Or as Luciano Floridi (2013, 116) describes it: “what the entity is determines the degree of moral value it enjoys, if any.” According to this standard procedure, the question concerning the status of others—whether they are someone who matters or something that does not—would need to be resolved by first identifying which property or properties would be necessary and sufficient for moral status, and then figuring out whether a particular entity possesses that property or not. As Hume already knew and criticized, decisions concerning how something ought to be treated are usually derived from what it is.
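Stated schematically (a minimal formalization of the properties approach just described, not a formula taken from Coeckelbergh or Floridi), the standard procedure can be written as:

$$\mathrm{MS}(x) \leftrightarrow \Phi(x)$$

where MS(x) abbreviates “x has moral status” and Φ names whatever property or properties (consciousness, sentience, the capacity to suffer, etc.) are taken to be necessary and sufficient for that status. The ethical question “How ought x to be treated?” is thereby settled by first answering the ontological question “Does x possess Φ?”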

The problem (or “the opportunity,” since it all depends on how one looks at it) is that the decision between who and what has never been static or entirely settled; it is continuously challenged and open to considerable renegotiation. The history of moral philosophy can, in fact, be read as something of an ongoing debate and struggle over where one makes the cut or draws the line. Ethics has, therefore, evolved in fits and starts as the product of different “liberation movements,” as Peter Singer (1989, 148) calls it, whereby what had been previously considered a mere thing comes to be recognized, for one reason or another, as a legitimate subject of moral concern, passing from the side of what to the side of who. Despite these progressive developments whereby many entities that had previously been regarded as mere things—at one time people of color, women, children, animals, etc.—come to be recognized as legitimate moral subjects, mechanisms like robots appear to be perennially stuck on the side of what. “Human history,” as Inayatullah and McNally (1988, 123) point out, “is the history of exclusion and power. Humans have defined numerous groups as less than human: slaves, woman, the ‘other races,’ children and foreigners. These are the wretched who have been defined as, stateless, personless, as suspect, as rightless. This is the present realm of robotic rights.” For this reason, and as I have previously argued (Gunkel 2012 and 2017b), machines in general, and robots in particular, have been (and apparently continue to be) the one thing that remains excluded from moral philosophy’s own efforts to achieve greater levels of inclusion. No matter how one comes to decide or to renegotiate the difference between who counts and what does not, robots are not moral subjects; they are and will remain mere things—an instrument or “means to an end,” but not an end in their own right.2 And this difference—the difference between what is a mere thing and who is another legitimate subject—is not only deployed and operationalized in R.U.R. but is an organizing principle throughout robot science fiction, including Blade Runner, Westworld, and Battlestar Galactica:

Admiral Adama: 

She was a Cylon, a machine. Is that what Boomer was, a machine? A thing?

Chief Tyrol: 

That’s what she turned out to be.

Admiral Adama: 

She was more than that to us. She was more than that to me. She was a vital, living person aboard my ship for almost two years. She couldn’t have been just a machine. Could you love a machine? (Battlestar Galactica, 2005)

2.2.3 Limited Rights

The robots of R.U.R. are introduced and characterized as mere mechanisms that are designed and employed by human beings to serve the purposes and interests of human society. They are, therefore, determined to be instruments, possessing no independent moral or legal status of their own. As such, and by definition, they neither can nor should have rights. Or perhaps stated more accurately, whatever “rights”—whatever privileges, claims, powers, or immunities—they might enjoy or deserve are limited to what would be granted to any other piece of valuable property. This is, in fact, the root cause of the initial misunderstanding between Fabry, the engineer, and Helena Glory. For Miss Glory, “protection” means nothing less than freeing the robots from the burden of their labor. She assumes that the robots, because they look and appear to act like people, must also possess similar interests, and that, because of this, they have a claim-right to be protected from burdensome labor and abuse. For the engineer, however, “protection” has an entirely different denotation. On his account—or at his particular “level of abstraction,” to mobilize terminology developed by Floridi (2013)—sophisticated instruments, like the robots, are valuable artifacts and assets. They should, therefore, be treated properly to ensure their continued operation and to prevent damage to the instrument that would render it unusable. For this reason, the engineer’s position concerning the relative status of the robot is similar to that expressed by Laszlo Versenyi (1974, 252):

Questions such as whether robots should be blamed or praised, loved or hated, given rights and duties, etc., are in principle the same sort of questions as, “Should cars be serviced, cared for, and supplied with what they require for their operation?” To the extent that we depend on cars and servicing them is necessary for their well-functioning, it is as prudent not to withhold such services from them as it is not to withhold comparable, operationally required services from anything else on whose well-functioning we depend. In either case the only relevant considerations are: Do we need these things (men, animals, machines) for our well-functioning, and do they need this or that (care, love, food, gas, control) for theirs? As soon as both questions are answered there are no further considerations relevant to our decision.

A similar argument is recognized by Coeckelbergh. Even “if we regard robots as things,” Coeckelbergh (2010b, 240) writes, we may still have “some obligations to treat them well in so far as they are the property of humans or in so far as they have value for us in other ways.” Even if the robot cannot itself have rights (presumably because it is a mere thing and therefore beyond good and evil), there are reasons that we should treat it well. “The rationale to respect the robot here is not that the robot has moral agency or moral patiency, but instead that it belongs to a human and has value for that human person and that in order to respect that other human person we have certain indirect obligations towards robots-as-property. … Since things are valuable to us (humans) for various reasons, it is likely that robots will receive some degree of indirect moral consideration anyway” (Coeckelbergh 2010b, 240). In other words, just because something is a thing and not capable of having rights (i.e., privileges, claims, powers, and/or immunities) in its own right certainly does not mean that we do not have obligations in the face of such things. “We do then,” as Joanna Bryson (2010, 73) recognizes, “have obligations regarding robots, but not really to them.” Consequently, to say that robots are bereft of some basic protections is not entirely accurate. Robots, like any other artifact, are due some consideration necessary to maintain their instrumental utility. But this respect for the object is not anything like the rights that are assumed to be in play by R.U.R.’s Helena Glory and the League of Humanity.

2.3 Instrumentalism at Work

In R.U.R., the robots are introduced and characterized as mere mechanisms that do not have interests and therefore are incapable of having rights or of being liberated. From the informed perspective of the engineer and scientist, Helena Glory’s (misguided) plan to free the robots is not just unnecessary, it is pure nonsense. It would be as absurd as seeking to liberate your toaster from the burdens of making toast so that it could pursue its own interests and desires.3 Similar decisions and arguments have been instituted and mobilized in the scientific literature. One of the more influential and contemporary articulations can be found in the “Principles of Robotics” (Boden et al. 2011 and 2017). The history of the document and the manner of its composition are not immaterial. “In 2010,” Tony Prescott and Michael Szollosy (2017, 119) explain, “the UK’s Engineering and Physical Science and Arts and Humanities Research Councils (EPSRC and AHRC) organised a retreat to consider ethical issues in robotics to which they invited a pool of experts drawn from the worlds of technology, industry, the arts, law and social sciences. This meeting resulted in a set of ethical ‘Principles of Robotics’ (henceforth ‘the Principles’) that were published online by the EPSRC (Boden et al., 2011), which aimed at ‘regulating robots in the real world.’”

The “pool of experts” included some of the most noted and recognized names in the field of AI and robotics—Margaret Boden, Joanna Bryson, Darwin Caldwell, Kerstin Dautenhahn, Lilian Edwards, Sarah Kember, Paul Newman, Vivienne Parry, Geoff Pegman, Tom Rodden, Tom Sorrell, Mick Wallis, Blay Whitby, and Alan Winfield—and the document they produced consisted of five rules, “presented in a semi-legal version together with a looser, but easier to express, version that captures the sense for a non-specialist audience” (Boden et al. 2017, 125) and “seven high-level messages … designed to encourage responsibility within the robotics research and industrial community” (Boden et al. 2017, 128). The Principles were subsequently revisited and reevaluated during a 2016 AISB (Society for the Study of Artificial Intelligence and Simulation of Behaviour) workshop convened in Sheffield, UK. The workshop was chaired by Tony Prescott, and the organizing committee included Joanna Bryson and Alan Winfield, who had contributed to the original document, along with Madeleine de Cock Buning and Noel Sharkey. The revised “Principles,” along with the fourteen responses and commentaries that were initially developed for the workshop, were then published in a special issue of Connection Science in 2017. Although the document is rather short, its composition and content have a number of important consequences.

2.3.1 Expertise

The principles were initially devised, critically revised, and finally approved by noted experts in the field. The authorship of “The Principles” is not insignificant, and the framing of the document makes it absolutely clear who has the authority to speak on this matter: “In September 2010, experts drawn from the worlds of technology, industry, the arts, law and social sciences met at the joint EPSRC and AHRC Robotics Retreat to discuss robotics, its applications in the real world and the huge amount of promise it offers to benefit society”4 (Boden et al. 2011, 1). This claim on “expertise” is significant. The authors of the document are informed insiders who presumably know about robots, i.e., what these things are and are capable of doing or not doing. Like the engineer and the chief scientist in R.U.R., these experts presumably have the experience, insight, and knowledge to speak with authority about these matters and are authorized to correct potential misunderstandings like those expressed by Helena Glory and her League of Humanity. The fact that “The Principles” have been authored by a group of experts is part of its rhetorical strategy. In the case of this document, it definitely “makes a difference who the speaker is and where s/he comes from” (Plato 1982, 275b).

2.3.2 Robots Are Tools

According to the experts, robots are tools and cannot and should not be considered persons—entities who have moral and legal standing and possess rights and/or responsibilities. This is articulated in various ways and numerous times throughout the course of the document:

  • “Robots are simply tools of various kinds, albeit very special tools, and the responsibility of making sure they behave well must always lie with human beings” (Introduction).
  • “Robots are multi-use tools” (Rule #1).
  • “Robots are just tools, designed to achieve goals and desires that humans specify” (Commentary to Rule #2).
  • “Robots are products” (Rule #3).
  • “Robots are manufactured artefacts” (Rule #4).
  • “Robots are simply not people” (Commentary to Rule #3) (Boden et al. 2017).

Even if or when robots are designed to simulate something more than a mere tool, e.g., a pet or a companion, this appearance, it is argued, should be clearly indicated as such, and the actual “tool-being” (to use Graham Harman’s [2002] Heideggerian-influenced terminology) of the robot should be made easily accessible and readily apparent:

One of the great promises of robotics is that robot toys may give pleasure, comfort and even a form of companionship to people who are not able to care for pets, whether due to restrictions in their homes, physical capacity, time or money. However, once a user becomes attached to such a toy, it would be possible for manufacturers to claim the robot has needs or desires that could unfairly cost the owners or their families more money. The legal version of this rule was designed to say that although it is permissible and even sometimes desirable for a robot to sometimes give the impression of real intelligence, anyone who owns or interacts with a robot should be able to find out what it really is and perhaps what it was really manufactured to do. Robot intelligence is artificial, and we thought that the best way to protect consumers was to remind them of that by guaranteeing a way for them to “lift the curtain” (to use the metaphor from The Wizard of Oz). (Boden et al. 2017, 127)

No matter how sophisticated their behavior or elegant their design, “The Principles” make it clear that robots are instruments or tools to be used, deployed, and manipulated by human agents. Consequently, the document endorses and operationalizes what Heidegger (1977, 6) identified as the “instrumental and anthropological definition of technology.” “The ESPRC Principles,” as Szollosy (2017, 151) concludes, “make certain, very specific, yet completely unspoken assumptions as to what constitutes a ‘human being.’ And the Principles suit thus [SIC] human being very well. The Principles insist upon a particular relationship between clearly demarcated human subjects, on the one hand, who always act as unique and autonomous agents, and robots, which are forever seen as objects, tools to be manipulated by human masters.”

2.3.3 Is/Ought Inference

In “The Principles,” ought is derived from is. How something should be treated is entirely determined and justified by what ontological properties it has or is capable of possessing. As Prescott explains in his commentary “Robots are not just Tools”:

At the heart of the EPSRC principles of robotics (henceforth “the principles”) are a number of ontological claims about the nature of robots that serve as axioms to frame the subsequent development of ethical challenges and rules. These include claims about what robots are, and also about what they are not. The claims about what robots are include that “robots are multiuse tools” (principle 1), that “robots are products” (principle 3) and “pieces of technology” (commentary on principle 3), and that “robots are manufactured art[i]facts” (principle 4). The claims about what robots are not include that “humans, not robots, are responsible agents” (principle 2), that robots are “simply not people” (commentary on principle 3), and that robot intelligence can give only an “impression of real intelligence” (commentary on principle 4). (Prescott 2017, 142)

As Prescott points out, “The Principles” proceed by first issuing and operationalizing a number of assertions about the ontological conditions and status of robots. These directly stated claims then function as axioms—statements or propositions that are regarded as being established, accepted, or self-evidently true—from which are derived a set of ethical rules and guidelines. Determinations about how robots ought to be regarded, therefore, proceed from what robots are assumed to be or not be. And since robots are simply declared to be just tools and not persons, it then follows that they do not have responsibilities or any claim to rights. In this argument, as Prescott insightfully points out, everything proceeds from and depends on the initial ontological decree and assumption; ought is derived from is.

2.4 Duty Now and for the Future

“The Principles” were designed for, and are intentionally and quite consciously focused on, current opportunities and challenges. But things could change. “One of the consequences of the view of robots as ‘just tools,’” as Prescott (2017, 147) points out in his dissenting opinion on this matter, “is the implicit dismissal of the possibility of strong AI—that future robots could have human-level, or beyond human-level general intelligence.” But even when considered from this future-oriented (and rather speculative) perspective, there may still be reasonable arguments for excluding robots from the community of moral subjects. Consider, for instance, what Lantz Fleming Miller (2015, 374) calls “maximally humanlike automata.” “My concern,” Miller writes, “is whether automata that exhibit all (or sufficiently close to all) traits considered to be distinctive and necessary for being a human should thereby enjoy full human rights” (Miller 2015, 375). This concern is an extreme case that is, at least from our current vantage point, entirely speculative: If it were possible to construct automata that exhibit all the traits that are necessary for an entity to be considered human—“all except being biologically human” (Miller 2015, 380)—should these artifacts then have similar, if not the same, rights (e.g., privileges, claims, powers, and/or immunities) as that enjoyed by a human person? Miller answers this question with a resounding and unqualified “no.” In other words, even when pushed to the extreme limit of possibility, “one is not obliged to grant humanlike automata full human rights” (Miller 2015, 377).

The reason for this, as Miller argues, is that human beings and automata are ontologically distinct. As biological entities that are the product of evolution, human beings exhibit “existential normative neutrality,” while automata, which are artifacts created by human beings, do not. For this reason, human beings can be defined by way of a “one-place predicate A(x),” which is to say that “X has come into existence (‘arisen’/‘come to be’),” whereas artifacts must be defined by way of a two- or even three-place predicate: C(x,y) “some entity Y has constructed X” and P(x,y,z) “Some entity Y has constructed X for purpose Z” (Miller 2015, 375). Although Miller is no Heideggerian by any stretch of the imagination, his formulation is remarkably close to Heidegger’s analysis of instrumentality in Being and Time (1962, 102) even if the vocabulary is a bit different: “Intrinsic properties are considered to be necessary for the object to be the kind it is. An extrinsic property is only incidental or contingent to the object. For artifacts, the purpose is thereby an intrinsic property. A shovel is defined as an object made to dig. A computer is made to calculate. Without the defined purpose, the object is a mass of metal, not a shovel” (Miller 2015, 377). In order for a shovel to be a shovel, it has a purpose for which it is made and to which it is applied. It is this purpose, this “for which” (as Heidegger would say), that makes the shovel a shovel and not just an obtrusive chunk of steel attached to a long shaft of wood. Although the vocabulary is different—most likely owing to differences in the continental and analytic traditions of philosophy—the characterization is virtually identical. Things like robots are what they are insofar as they have a teleology, a purpose to which they are applied and already destined. Human beings, by contrast, lack this teleological orientation; they just are.5
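Miller’s predicates can be arranged schematically as follows (a restatement of the formulations quoted above, not Miller’s own typography):

$$A(x): x \text{ has come into existence}$$
$$C(x, y): \text{entity } y \text{ has constructed } x$$
$$P(x, y, z): \text{entity } y \text{ has constructed } x \text{ for purpose } z$$

On this account, human beings are adequately characterized by the one-place predicate alone, whereas artifacts like robots cannot be fully specified without reference to a constructor (y) and a purpose (z). It is this additional relational structure that, for Miller, constitutes the ontological difference between humans and automata.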

In making this argument, Miller follows and endorses that longstanding tradition in moral philosophy where ontology precedes ethics such that one derives a determination of ought from is. According to Miller, it is because human beings (who are the product of evolution and therefore exhibit what he calls “existential normative neutrality”) and humanlike automata (which are things or instruments that are intentionally designed with a teleological purpose) are ontologically different from one another that the former is deserving of rights and the latter can be defensibly denied the same. And since this particular brand of human exceptionalism is grounded in what Miller argues is a fundamental and incontrovertible intrinsic difference, it is (or Miller at least asserts that it is) absolutely fair and guilt free:

Humans are the ones who discern, affirm, and thereby realize human rights. It is fair and just that they grant such rights, and they occupy a fair and just position to determine which ontological kinds of entities deserve recognition of rights. Particularly, humans are under no moral obligation to grant full human rights to entities possessing ontological properties critically different from them in terms of human rights bases. To grant full human rights recognition only to Homo sapiens does not run against the basis of human rights. Placing the rights partition between humans and automata does not hark back to eras when human rights were granted only to white, European males. We need not attempt to circumvent the return to those eras by extending rights to all kinds of ontologically varying entities, just in case they exhibit some traits that humans have. (Miller 2015, 387)

Whether this “ontological distinction” is indeed a philosophically defensible position is something that remains debated. In fact, as Miller (2015, 374) explicitly notes, the entire argument is “hypothetical, resting conditionally upon a widely held belief about the nature of human rights and the properties of human beings.” Miller’s argument, therefore, is ultimately grounded not in a scientifically proven or even provable fact but in a common, “widely held belief”; it is a matter of faith. Szollosy makes a similar point concerning “The Principles”:

At the (implicit) heart of the ESPRC Principles is a particular human being defined through the last centuries by what has becomes known as humanism. This human being is an agent in its own right, a being that is independent and not to be governed by other, metaphysical, or supernatural, forces. This human being is at the centre of European-based legal, ethical, economic and political systems; however, it is vital to remember that 1) this human being is still a relatively new invention and that 2) throughout its life-span, there has never just been one, singular version of this human being, as humanist proponents have liked to imagine that it is. (Szollosy 2017, 151)

When one derives ought from is (or ought not from is not), ontology determines everything. But decisions about ontological status tend to rest on dogmatic assertions that are contingent and therefore open to significant critical challenges.

What Miller calls a “widely held belief,” for instance, can be easily challenged by other beliefs that are just as valid (or invalid, as the case may be). Nick Bostrom and Eliezer Yudkowsky (2014) propose an equally plausible alternative by way of something they call non-discrimination principles (which as “principles” are also not argued but dogmatically asserted as if they were unquestionably true):

Principle of Substrate Non-Discrimination—If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.

Principle of Ontogeny Non-Discrimination—If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status. (Bostrom and Yudkowsky 2014, 322–23)

These two principles, which as principles are not established through argument or evidence but simply asserted, derive a decision concerning moral status not from how something is—its ontological distinctiveness—but from how it appears to function in actual experience.

A similar argument is proposed by Eric Schwitzgebel and Mara Garza (2015, 100): “It shouldn’t matter to one’s moral status what kind of body one has, except insofar as one’s body influences one’s psychological and social properties. Similarly, it shouldn’t matter to one’s moral status what kind of underlying architecture one has, except insofar as underlying architecture influences one’s psychological and social properties. Only psychological and social properties are directly relevant to moral status—or so we propose.” For this reason, proposals like that offered by Miller are declared to be both atypical and archaic:

All of the well-known modern secular accounts of moral status in philosophy ground moral status only in psychological and social properties, such as capacity for rational thought, pleasure, pain, and social relationships. No influential modern secular account is plausibly read as committed to a principle whereby two beings can differ in moral status but not in any psychological or social properties, past, present, or future, actual or counterfactual. However, some older or religious accounts might have resources to ground a difference in moral status outside the psychological and social. An Aristotelian might suggest that AIs would have a different telos or defining purpose than human beings. However, it’s not clear that an Aristotelian must think this; nor do we think such a principle, interpreted in such a way, would be very attractive from a modern perspective, unless directly relevant psychological or social differences accompanied the difference in telos. Similarly, a theist might suggest that God somehow imbues human beings with higher moral status than AIs, even if they are psychologically and socially identical. We find this claim difficult to assess, but we’re inclined to think that a deity who distributed moral status unequally in this way would be morally deficient. (Schwitzgebel and Garza 2015, 100)

No matter how they are formulated, however, these differing accounts make the same basic argumentative gesture—they derive moral status from rather dogmatic assertions/assumptions about “what is the case.” Ontological assumptions, in other words, determine and justify moral consideration, which had been the problem Hume initially identified and critiqued by way of the is/ought distinction.

2.5 Complications, Difficulties, and Potential Problems

Although this way of thinking works and has been instrumental for resolving questions of who has moral status and what does not, there are a number of significant critical problems.

2.5.1 Tool != Machine

The instrumental theory, for all its success in helping us make sense of technological innovation, is a rather blunt instrument, reducing all technology, irrespective of design, construction, or operation, to a tool or instrument. “Tool,” however, does not necessarily encompass everything technological, and does not, therefore, exhaust all possibilities. There are also machines. Although “experts in mechanics,” as Karl Marx (1977, 493) pointed out, often confuse these two concepts calling “tools simple machines and machines complex tools,” there is an important and crucial difference between the two. Indication of this essential difference can be found in a brief parenthetical remark offered by Heidegger in “The Question Concerning Technology.” “Here it would be appropriate,” Heidegger (1977, 17) writes in reference to his use of the word “machine” to characterize a jet airliner, “to discuss Hegel’s definition of the machine as autonomous tool [selbständigen Werkzeug].” What Heidegger references, without supplying the full citation, are Hegel’s 1805–7 Jena Lectures, in which “machine” had been defined as a tool that is self-sufficient, self-reliant, or independent. As Marx (1977, 495) succinctly described it, picking up on this line of thinking, “the machine is a mechanism that, after being set in motion, performs with its tools the same operations as the worker formerly did with similar tools.”

Understood in this way, Marx (following Hegel) differentiates between the tool used by the worker and the machine, which does not occupy the place of the worker’s tool but takes the place of the worker him/herself. Although Marx did not pursue an investigation of the social, legal, or moral consequences of this insight, it produces some interesting ambiguities with the assignment of moral and legal accountability. As John Sullins (2005, 1) explains, “since an autonomous machine is a surrogate actor for a human agent that would have accomplished the tasks of the machine in its absence, then it follows that the machine is also a surrogate for the ethical rights and responsibilities that apply to a human actor in the same situation.” Because the autonomous machine replaces not the tool but the human user of the tool, the machine can be considered a surrogate for the rights and responsibilities that had been assigned to the human agent. And this seems to have traction with recent developments that have advanced explicit proposals for robots—or at least certain kinds of robots—to be defined as something other than mere instruments or tools.

Perhaps the best illustration of the difference Marx identifies and describes is available with the self-driving car or autonomous vehicle. The autonomous vehicle, whether the Google Car or one of its competitors, is not designed or intended to replace the automobile. It is, in its design, function, and materials, the same kind of instrument that we currently utilize for the purpose of personal transportation. The autonomous vehicle, therefore, does not replace the instrument of transportation (the automobile); it is intended to replace (or at least significantly displace) the driver. This difference was acknowledged by the US National Highway Traffic Safety Administration (NHTSA), which, in a February 4, 2016 letter to Google, stated that the company’s Self Driving System (SDS) could legitimately be considered the legal driver of the vehicle: “As a foundational starting point for the interpretations below, NHTSA will interpret ‘driver’ in the context of Google’s described motor vehicle design as referring to the SDS, and not to any of the vehicle occupants” (Ross 2016). Although this decision is only an interpretation of existing law, the NHTSA explicitly states that it will “consider initiating rulemaking to address whether the definition of ‘driver’ in Section 571.3 [of the current US Federal statute, 49 U.S.C. Chapter 301] should be updated in response to changing circumstances” (Hemmersbaugh 2016).

Similar proposals have been floated in efforts to deal with workplace automation. In a highly publicized draft document submitted to the European Parliament in May of 2016, for instance, it was argued that “sophisticated autonomous robots” (“machines” in Marx’s terminology) be considered “electronic persons” with “specific rights and obligations” for the purposes of contending with the challenges of technological unemployment, tax policy, and legal liability. Although the proposal did not pass as originally written, it represents recognition on the part of policymakers that recent innovations in robotics challenge the way we typically respond to and answer questions regarding moral and legal responsibility. In both these cases, we have decisions that recognize the robotic mechanism as being more than just an instrument through which human beings act on the world. Such technologies, as David C. Vladeck (2014, 121) explains, “will not be tools used by humans; they will be machines deployed by humans that will act independently of direct human instruction, based on information the machine itself acquires and analyzes, and will often make highly consequential decisions in circumstances that may not be anticipated by, let alone directly addressed by, the machine’s creators.” Such mechanisms, therefore, challenge the instrumental theory and lead to some recognition of the robot as a kind of independent moral and/or legal entity. Whether any of this is eventually codified in policy or law remains to be seen. But what we can see at this point is a challenge to the instrumentalist way of thinking about technology that is opening up other opportunities for recognizing the social position and standing of these mechanisms.

2.5.2 Not Just Tools

Additionally, the instrumental theory appears to be unable to contend with recent developments in social robotics. “Tool” might not fit (or at least not easily accommodate) things that have been intentionally designed to be social companions. “The category of tools,” as Prescott (2017, 142) explains, “describes physical/mechanical objects that serve a function, whereas the category of companions describes significant others, usually people or animals, with whom you might have a reciprocal relationship marked by emotional bond. The possibility that robots could belong to both these categories raises important and interesting issues that are obscured by insisting that robots are just tools.”6 In fact, practical experiences with socially interactive machines push against the explanatory capabilities of the instrumental theory, if not forcing a break with it altogether. “At first glance,” Kate Darling (2016a, 216) writes, “it seems hard to justify differentiating between a social robot, such as a Pleo dinosaur toy, and a household appliance, such as a toaster [again with the toasters]. Both are man-made objects that can be purchased on Amazon and used as we please. Yet there is a difference in how we perceive these two artifacts. While toasters are designed to make toast, social robots are designed to act as our companions.”

In support of this claim, Darling offers the work of Sherry Turkle and the experiences of US soldiers in Iraq and Afghanistan. Turkle, who has pursued a combination of observational field research and interviews in clinical studies, identifies a potentially troubling development she calls “the robotic moment”: “I find people willing to seriously consider robots not only as pets but as potential friends, confidants, and even romantic partners. We don’t seem to care what their artificial intelligences ‘know’ or ‘understand’ of the human moments we might ‘share’ with them … the performance of connection seems connection enough” (Turkle 2011, 9). In the face of sociable robots, Turkle argues, we seem to be willing, all too willing, to consider these machines to be much more than tools or instruments; we address them as surrogate pets, close friends, personal confidants, and even paramours. Even if it is true, as Matthias Scheutz (2012, 215) asserts, “that none of the social robots available for purchase today (or in the foreseeable future, for that matter) care about humans,” that does not seem to matter. Our ability to care for a robot, like our ability to care for an animal, does not seem to be predicated on its ability to care or to show evidence of caring for us. We don’t, in other words, seem to care whether they actually care or not.

But this is not limited to objects like the Furby, Pleo, AIBO, and PARO robots, which have been intentionally designed to elicit this kind of emotional response. Human beings appear to be able to do it with just about any old mechanism, like the very industrial-looking Explosive Ordnance Disposal (EOD) robots that are being utilized on the battlefield. As Peter W. Singer (2009), Joel Garreau (2007), and Julie Carpenter (2015) have reported, soldiers form surprisingly close personal bonds with their units’ EOD robots, giving them names, awarding them battlefield promotions, risking their own lives to protect them, and even mourning their deaths. This happens as a product of the way the mechanism is situated within the unit and the role that it plays in battlefield operations. And it happens in direct opposition to what otherwise sounds like good common sense: “They are just technologies—instruments or tools that feel nothing.” As Eleanor Sandry (2015a, 340) explains:

EOD robots, such as PackBots and Talons, are not humanlike or animal-like, are not currently autonomous and do not have distinctive complex behaviours supported by artificial intelligence capabilities. They might therefore be expected to raise few critical issues relating to human-robot interaction, since communication with these machines relies on the direct transmission of information through radio signals, which have no emotional content and are not open to interpretation. Indeed, the fact that these machines are broadly not autonomous precludes them from being discussed as social robots according to some definitions. … In spite of this, there is an increasing amount of evidence that EOD robots are thought of as team members, and are valued as brave and courageous in the line of duty. It seems that people working with EOD robots, even though the robots are machinelike and under the control of a human, anthropomorphise and/or zoomorphise them, interpreting them as having individual personalities and abilities.

Similar results have been obtained with studies regarding more mundane technological objects, like Guy Hoffman’s AUR desk lamp (Sandry 2015a and 2015b) and the Roomba robotic vacuum cleaner (Sung et al. 2007). As Scheutz (2012, 213) reports: “While at first glance it would seem that the Roomba has no social dimension (neither in its design nor in its behavior) that could trigger people’s social emotions, it turns out that humans, over time, develop a strong sense of gratitude toward the Roomba for cleaning their home. The mere fact that an autonomous machine keeps working for them day in and day out seems to evoke a sense of, if not urge for, reciprocation.”

None of this is necessarily new or surprising. Evidence of it was already demonstrated in Fritz Heider and Marianne Simmel’s “An Experimental Study of Apparent Behavior” (1944), which found that human subjects tend to attribute motive and personality to simple animated geometric figures. Similar results have been obtained by way of the computer as social actor (CASA) studies conducted by Byron Reeves and Clifford Nass in the mid-1990s. As Reeves and Nass discovered across numerous trials with human subjects, users (for better or worse) have a strong tendency to treat socially interactive technologies, no matter how rudimentary, as if they were other people.

Computers, in the way that they communicate, instruct, and take turns interacting, are close enough to human that they encourage social responses. The encouragement necessary for such a reaction need not be much. As long as there are some behaviors that suggest a social presence, people will respond accordingly. When it comes to being social, people are built to make the conservative error: When in doubt, treat it as human. Consequently, any medium that is close enough will get human treatment, even though people know it’s foolish and even though they likely will deny it afterwards. (Reeves and Nass 1996, 22)

So what we have is a situation where our theory of technology—a theory that has considerable history behind it and that has been determined to be as applicable to simple hand tools as it is to complex computer systems—seems to be out of sync with the very real practical experiences we now have with machines in a variety of situations and circumstances. To put it in terminology that is more metaphysically situated: How something appears to be—how it actually operates in real social situations and circumstances—might be more important than what it actually is (or has been assumed to be). “We should,” as Prescott (2017, 146) concludes formulating a kind of phenomenological moral maxim, “take into account how people see robots, for instance, that they may feel themselves as having meaningful and valuable relationships with robots, or they may see robots as having important internal states, such as the capacity to suffer, despite them not having such capacities.”

2.5.3 Ethnocentrism

This way of dividing the world into who and what—human persons who matter and mere technological things that do not—is culturally specific and exposed to a kind of moral relativism that is often not adequately identified or accounted for. The EPSRC Principles, for example, have been criticized for assuming a particular concept of “human nature” that is, as Szollosy (2017, 156) argues, “very much a European, Christian concoction.”

Robots are considered to be machines, and therefore merely objects. In the European Christian tradition, such non-living, or even non-human objects, are considered lesser beings on the basis that they do not have a soul; that intangible, metaphysical property unique to life or, in most articulation, unique specifically to humans. (This idea of lacking something vitally human lies at the very idea of the robot, when the word was first introduced to the world in Karel Čapek’s 1921 play, R.U.R.) Though one could argue that Europe is no longer beholden to Christianity, Europe’s (and America’s) Christian values are constantly on display, and this assumption is obvious even in contemporary, completely secular European legal and ethical frameworks, including these ESPRC Principles. (Szollosy 2017, 156)

By way of contrast, other religious/philosophical traditions mobilize entirely different ways of thinking about these things. As Jennifer Robertson points out, the instrumental and anthropological viewpoint would be antithetical to a perspective influenced by Japanese culture and traditions. “Three key sociocultural factors,” Robertson (2014, 576) argues, “influence the way Japanese experience robots as ‘living’ entities. The first is linguistic: In Japanese, two separate verbs can be used to describe existence. Aru/arimasu refers to the existence of something, a bicycle, for example. Iru/imasu is used to refer to the existence of someone. Iru/imasu is also used in reference to robots, as in the title Robotto no iru kurashi (lit., a lifestyle where robots exist).” The second factor is Shintoism, a religion that arranges things differently than the three dominant monotheisms (Judaism, Christianity, and Islam). “Shinto, the native [Japanese] animistic beliefs about life and death, holds that vital energies, deities, forces, or essences called kami are present in both organic and inorganic matter and in naturally occurring and manufactured entities alike. Whether in trees, animals, mountains, or robots, these kami (forces) can be mobilized” (Robertson 2014, 576). The third factor, Robertson argues, concerns a different conception of the meaning of life and what is (and what is not) living.

Inochi, the Japanese word for “life,” encompasses three basic, seemingly contradictory but inter-articulated meanings: a power that infuses sentient beings from generation to generation; a period between birth and death; and, most relevant to robots, the most essential quality of something, whether organic (natural) or manufactured. Thus robots are experienced as “living” things. The important point to remember here is that there is no ontological pressure to make distinctions between organic/inorganic, animate/inanimate, human/nonhuman forms. On the contrary, all of these forms are linked to form a continuous network of beings. (Robertson 2014, 576)

In other words, the instrumental and anthropological definition of technology—that seemingly correct characterization of technology that renders a robot a mere tool as opposed to someone who counts as a socially significant other—will only work when deployed from and within a particular culture and linguistic tradition. It is precisely for this reason that Veruggio (2005, 4) had called for the robotics community to “develop a general Survey on the main ethical paradigms in the different cultures, religions, faiths” and “define a Rosetta Stone of the ethical guidelines ‘adjusted’ to the different cultures, religions, faiths.”

2.6 Summary

A negative reply to the question “Can and should robots have rights?” appears to be simple and intuitive. This default response proceeds from the apparently common-sense understanding that technology—any technology, whether a simple hand tool like a hammer, a household appliance like a toaster (because it is always about toasters), or a sociable robot—is nothing more than a tool or instrument of human activity. This conceptualization is grounded in what Heidegger (1977, 6) calls “the instrumental and anthropological definition,” and its seemingly wide acceptance makes it what many consider to be the correct way of thinking—so “correct,” in fact, that one does not even need to think about it. This orthodoxy is clearly evident and in play in many attempts to grapple with the moral/legal opportunities and challenges of robots, especially efforts to formulate general principles for the proper integration of the technology in contemporary society. These proposals derive ought from is, arguing that what robots are (mere inanimate objects and not moral or legal subjects) determines how they should be treated. Although this way of thinking works and appears to be incontrovertible, there are significant limitations and difficulties inherent in it.

First, the ontological category “tool,” although applicable to many technologies, does not adequately explain or account for every kind of mechanism, especially what Marx had called machines, which constitute a third kind of liminal entity (Kang 2011, 35) that does not fall neatly on one or the other side of the who versus what distinction. Second, there are socially situated artifacts that have been deliberately designed to be more than mere tools, like a sociable robot, or that function in ways that make them—in the eyes of their human partners—something more than an instrument. In the face of these other kinds of socially interactive others, how we decide to respond to these entities appears to be more important and significant than what we have been told they actually are. In other words, the social circumstances of the relationship we have with the artifact appear to take precedence over the ontological properties that belong or have been assigned to it (the consequences of this will be further investigated and developed in chapter 6). Finally, the instrumental and anthropological definition of technology is neither universal nor beyond critical inquiry. It is culturally specific and therefore can be contested by other religious and philosophical traditions that see things differently. Simply declaring that robots are tools and then proceeding on the assumption that this characterization is beyond refutation is not just insensitive to others but risks a kind of intellectual and moral imperialism.

Notes