Before we get too far into investigating or asking about robot rights, it would be both expedient and prudent to clarify (or at least reflect critically on) what is meant by the seemingly simple words that comprise the terms of the investigation. “Robot” and “rights” are, at least at this point in time, both common and arguably well understood. We all know—or at least think we know—what a robot is. Science fiction literature, film, and digital media have been involved with imagining and imaging robots for close to a century. In fact, many of the most widely recognized and identifiable characters in twentieth and twenty-first century popular culture are robots: Robby the Robot of Forbidden Planet, Rosie from the Jetsons, Tetsuwan Atomu [Mighty Atom] or Astro Boy from Osamu Tezuka’s long running manga series of the same name, William Joyce’s Rolie Polie Olie, Lt. Commander Data of Star Trek: The Next Generation, R2-D2 and C-3PO of Star Wars … and the list could go on.
“Rights” also appears to be a well-established and widely understood concept but for reasons that are rooted in the struggles of social reality rather than fiction. The history of the nineteenth and twentieth centuries can, in fact, be characterized as a sequence of conflicts over rights, or what Peter Singer (1989, 148) has called a “liberation movement,” as previously excluded individuals and communities fought for and eventually achieved equal (or closer to equal) status, i.e., women, people of color, LGBTQ individuals, etc. No one it seems is confused by or unclear about the struggles for civil liberties and the criticisms of human rights abuses that have been, and unfortunately still are, part and parcel of everyday social reality.
Given prior experience with both terms, it would seem that there is not much with which to be concerned. But it is precisely this seemingly confident and unproblematic familiarity—this always-already way of understanding things—that is the problem. As Martin Heidegger (1962, 70) pointed out in the opening salvo of Being and Time, it is precisely those things that are closest to us and apparently well-understood in our everyday activities and operations that are the most difficult to see and explicitly conceptualize. For this reason, it is a good idea to begin by getting some critical distance on both terms in order to characterize explicitly what each one means individually and what their concatenation indicates for the effort that follows. Fortunately this “distancing” is already available and in play insofar as the phrase “robot rights” just “feels wrong” or is at least curious and thought-provoking. It is from this initial feeling of disorientation or strangeness—whereby what had been seemingly clear and unremarkable comes to stand out as something worth asking about—that one can begin to question and formulate what is meant by the terms “robot” and “rights.”
“When you hear the word ‘robot,’” Matt Simon (2017, 1) of Wired writes, “the first thing that probably comes to mind is a silvery humanoid, à la The Day the Earth Stood Still or C-3PO (more golden, I guess, but still metallic). But there’s also the Roomba, and autonomous drones, and technically also self-driving cars. A robot can be a lot of things these days―and this is just the beginning of their proliferation. With so many different kinds of robots, how do you define what one is?” Answering this question is no easy task. In his introductory book on the subject, John Jordan (2016, 3–4) openly admits that there is already considerable indeterminacy and slippage with the terminology such that “computer scientists cannot come to anything resembling consensus on what constitutes a robot.” The term “robot,” then, is a rather noisy moniker with indeterminate and flexible semantic boundaries. And this difficulty, as Simon (2017, 1) points out, “is not a trivial semantic conundrum: Thinking about what a robot really is has implications for how humanity deals with the unfolding robo-revolution.”
Rather than offering a definition, Jordan tries to account for and make sense of this terminological difficulty. According to his reading, “robot” is complicated for three reasons:
Reason 1 why robots are hard to talk about: the definition is unsettled, even among those most expert in the field.
Reason 2: definitions evolve unevenly and jerkily, over time as social context and technical capabilities change.
Reason 3: science fiction set the boundaries of the conceptual playing field before the engineers did. (Jordan 2016, 4–5)
These three aspects make defining and characterizing the word “robot” complicated but also interesting. We can, therefore, get a better handle on the problem of identifying what is and what is not a robot by considering each reason individually.
Science fiction not only defines “the boundaries of the conceptual playing field,” but is the original source of the term. The word robot came into the world by way of Karel Čapek’s 1920 stage play, R.U.R. or Rossumovi Univerzální Roboti (Rossum’s Universal Robots) in order to name a class of artificial servants or laborers. In Czech, as in several other Slavic languages, the word robota (or some variation thereof) denotes “servitude or forced labor.” Čapek, however, was not (at least according to his own account) the originator of the term. That honor apparently belongs to the writer’s older brother, the painter Josef Čapek. Thirteen years after the publication of the play, which had been wildly successful at the time, Karel Čapek explained the true origin of the term in the pages of Lidove Noviny, a newspaper published in Prague:
The author of the play R.U.R. did not, in fact, invent that word; he merely ushered it into existence. It was like this: the idea for the play came to said author in a single, unguarded moment. And while it was still warm he rushed immediately to his brother Josef, the painter, who was standing before an easel and painting away at a canvas till it rustled. “Listen, Josef,” the author began, “I think I have an idea for a play.” “What kind,” the painter mumbled (he really did mumble, because at the moment he was holding a brush in his mouth). The author told him as briefly as he could. “Then write it,” the painter remarked, without taking the brush from his mouth or halting work on the canvas. The indifference was quite insulting. “But,” the author said, “I don't know what to call these artificial workers. I could call them Labori, but that strikes me as a bit bookish.” “Then call them Robots,” the painter muttered, brush in mouth, and went on painting. And that’s how it was. Thus the word Robot was born; let this acknowledge its true creator. (Čapek 1935; also quoted in Jones 2016, 53)
Since the publication of Čapek’s play, robots have infiltrated the space of fiction. But what exactly constitutes a robot differs and admits of a wide variety of forms, functions, and configurations. Čapek’s robots, for instance, were artificially produced biological creatures that were humanlike in both material and form. This configuration persists with the bioengineered replicants of Blade Runner and Blade Runner 2049 (the film adaptations of Philip K. Dick’s Do Androids Dream of Electric Sheep?) and the skin-job Cylons of Battlestar Galactica. Other fictional robots, like the chrome-plated android in Fritz Lang’s Metropolis and C-3PO of Star Wars, as well as the 3D printed “hosts” of HBO’s Westworld and the synths of Channel 4/AMC’s Humans, are humanlike in form but composed of non-biological materials. Others that are composed of similar synthetic materials have a particularly imposing profile, like Forbidden Planet’s Robby the Robot, Gort from The Day the Earth Stood Still, or the Robot from the television series Lost in Space. Still others are not humanoid at all but emulate animals or other kinds of objects, like the trashcan R2-D2, the industrial tanklike Wall-E, or the electric sheep of Dick’s novel. Finally, there are entities without bodies, like the HAL 9000 computer in 2001: A Space Odyssey, with virtual bodies, like the Agents in The Matrix, or with entirely different kinds of embodiment, like swarms of nanobots.
In whatever form they have appeared, science fiction already—and well in advance of actual engineering practice—has established expectations for what a robot is or can be. Even before engineers have sought to develop working prototypes, writers, artists, and filmmakers have imagined what robots do or can do, what configurations they might take, and what problems they could produce for human individuals and communities. Jordan (2016, 5) expresses it quite well: “No technology has ever been so widely described and explored before its commercial introduction. …Thus the technologies of mass media helped create public conceptions of and expectations for a whole body of compu-mechanical innovation that had not happened yet: complex, pervasive attitudes and expectations predated the invention of viable products” (emphasis in the original). A similar point has been made by Neil M. Richards and William D. Smart (2016, 5), who approach the question from the legal side of things: “So what is a robot? For the vast majority of the general public (and we include most legal scholars in this category), we claim that the answer to this question is inescapably informed by what they see in movies, the popular media, and, to a lesser extent, in literature. Few people have seen an actual robot, so they must draw conclusions from the depictions of robots that they have seen. Anecdotally, we have found that when asked what a robot is, people will generally make reference to an example from a movie: Wall-E, R2-D2, and C-3PO are popular choices.”
Because of this, science fiction has been both a useful tool and a potential liability. Engineers and developers, for instance, often endeavor to realize what has been imaginatively prototyped in fiction. Take for example the following explanation provided by the roboticists Minoru Asada, Karl F. MacDorman, Hiroshi Ishiguro, and Yasuo Kuniyoshi (2001, 185):
Robot heroes and heroines in science fiction movies and cartoons like Star Wars in US and Astro Boy in Japan have attracted us so much which, as a result, has motivated many robotic researchers. These robots, unlike special purpose machines, are able to communicate with us and perform a variety of complex tasks in the real world. What do the present day robots lack that prevents them from realizing these abilities? We advocate a need for cognitive developmental robotics (CDR), which aims to understand the cognitive developmental processes that an intelligent robot would require and how to realize them in a physical entity.
This “science fiction prototyping,” as Brian David Johnson (2011) calls it, is rather widespread in the discipline even if it is not always explicitly called-out and recognized as such. As Bryan Adams, Cynthia Breazeal, Rodney Brooks, and Brian Scassellati (2000, 25) point out: “While scientific research usually takes credit as the inspiration for science fiction, in the case of AI and robotics, it is possible that fiction led the way for science.”
In addition to influencing research and development programs, science fiction has also proven to be a remarkably expedient mechanism—perhaps even the preferred mechanism—for examining the social opportunities and challenges of technological innovation in AI and robotics. As Sam N. Lehman-Wilzig (1981, 444) once argued: “Since it is beyond human capability to distinguish a priori the truly impossible from the merely fantastic, all possibilities must be taken into account. Thus science fiction’s utility in outlining the problem.” And a good number of serious research efforts (in philosophy and law, in particular) have found it useful to call upon and employ existing narratives, like the robot stories of Isaac Asimov (i.e., Gips 1991, Anderson 2008, Haddadin 2014), the TV series Star Trek (i.e., Introna 2010, Wallach and Allen 2009, Schwitzgebel and Garza 2015), the film 2001: A Space Odyssey (i.e., Dennett 1997, Stork 1997, Isaac and Bridewell 2017), and the reimagined Battlestar Galactica (Dix 2008, Neuhäuser 2015, Guizzo and Ackerman 2016), or even to fabricate their own fictional anecdotes and “thought experiments” (i.e., Bostrom 2014, Ashrafian 2015a and 2015b) as a way of introducing, characterizing, and/or investigating a problem, or what could, in the near term, become a problem. Here too, as Peter Asaro and Wendell Wallach argue, science fiction seems to have taken a leading role:
[The] philosophical tradition has a long history of considering hypothetical and even magical situations (such as the Ring of Gyges in Plato’s Republic, which bestows invisibility on its wearer). The nineteenth and twentieth centuries saw many of the magical powers of myth and literature brought into technological reality. Alongside the philosophical literature emerged literatures of horror, science fiction, and fantasy, which also explored many of the social, ethical, and philosophical questions raised by the new-found powers of technological capabilities. For most of the twentieth century, the examination of the ethical and moral implications of artificial intelligence and robotics was limited to the work of science fiction and cyberpunk writers, such as Isaac Asimov, Arthur C. Clarke, Bruce Sterling, William Gibson, and Philip K. Dick (to name only a few). It is only in the late twentieth century that we begin to see academic philosophers taking up these questions in a scholarly way. (Asaro and Wallach 2016, 4–5)
Despite its utility, however, for many laboring in the fields of robotics, AI, human robot interaction (HRI), behavioral science, etc., the incursion of “entertainment” into the realm of the serious work of science is also a potential problem and something that must be, if not actively counteracted, then at least carefully bracketed and held in check. Science fiction, it is argued, often produces unrealistic expectations for and irrational fears about robots that are not grounded in or informed by actual science (Bartneck 2004, Kriz et al. 2010, Bruckenberger et al. 2013, Sandoval et al. 2014). “Real robotics,” as Alan Winfield (2011a, 32) explains, “is a science born out of fiction. For roboticists this is both a blessing and a curse. It is a blessing because science fiction provides inspiration, motivation and thought experiments; a curse because most people’s expectations of robots owe much more to fiction than reality. And because the reality is so prosaic, we roboticists often find ourselves having to address the question of why robotics has failed to deliver when it hasn’t, especially since it is being judged against expectations drawn from fiction.”1 For this reason, science fiction is both a useful tool for and a significant obstacle to understanding what the term “robot” designates.
Even when one consults knowledgeable experts, there is little agreement when it comes to defining, characterizing, or even identifying what is (or what is not) a robot. Illah Nourbakhsh (2013, xiv) writes: “Never ask a roboticist what a robot is. The answer changes too quickly. By the time researchers finish their most recent debate on what is and what isn’t a robot, the frontier moves on as whole new interaction technologies are born.” Indicative of this problem is a podcast from John Siracusa and Jason Snell called “Robot or Not?” The first episode explained the raison d’être of the program: “You and I battling through time as we fall deeper and deeper into a crevasse about whether something is a robot or not.” In each episode, Siracusa and Snell focus on an object, either fictional, like the character Mr. Roboto from the 1983 Styx album Kilroy Was Here (episode 1), Darth Vader (episode 3), and the Daleks and Cybermen from Dr. Who (episode 43); or actual, like the Roomba (episode 8), Siri (episode 16), and Chatbots (episode 49); and then debate whether the object in question is a robot or not. This goes on for one hundred episodes over the course of two years. What is remarkable in this ongoing deliberation is not what determinations come to be made (i.e., Siri is not a robot, but a Roomba is a robot) or what criteria are deployed to make the determination (usually something having to do with independent vs. human-dependent control) but the fact that such an ongoing discussion is possible in the first place and that what is and what is not a robot requires this kind of discursive effort.
Dictionaries provide serviceable but arguably insufficient characterizations. The Oxford English Dictionary (2017), for example, defines “robots” in the following way:
The Merriam-Webster Dictionary (2017) offers a similar characterization:
These definitions are generally considered to be both too broad, insofar as they could be applied to any computer program, and too narrow, because they tend to privilege humanlike forms and configurations that are, beyond what is portrayed in science fiction, more the exception than the rule. In response to this, professional organizations and handbooks and textbooks in robotics offer up what is purported to be a more precise characterization. The International Organization for Standardization (ISO) provides the following definition for “robots and robotic devices” in ISO 8373 (2012): “An automatically controlled, reprogrammable, multipurpose, manipulator programmable in three or more axes, which may be either fixed in place or mobile for use in industrial automation applications.” But this characterization can be faulted for being too specific and restrictive, applying only to industrial robots and therefore unable to accommodate other kinds of applications, like social and companion robots.
One widely cited source of a more general and comprehensive definition comes from George Bekey’s Autonomous Robots: From Biological Inspiration to Implementation and Control: “In this book we define a robot as a machine that senses, thinks, and acts. Thus, a robot must have sensors, processing ability that emulates some aspects of cognition, and actuators” (Bekey 2005, 2). This “sense, think, act” or “sense, plan, act” (Arkin 1998, 130) paradigm has considerable traction in the literature—evidenced by the very fact that it constitutes and is called a paradigm:
Robots are machines that are built upon what researchers call the “sense-think-act” paradigm. That is, they are man-made devices with three key components: “sensors” that monitor the environment and detect changes in it, “processors” or “artificial intelligence” that decides how to respond, and “effectors” that act upon the environment in a manner that reflects the decisions, creating some sort of change in the world around a robot. (Singer and Sagan 2009, 67)
When defining a robot, the “sense-think-act paradigm” is as close to consensus as one might find in terms of a robot differentiated from a computer system. (Wynsberghe 2016, 40)
There is no concise, uncontested definition of what a “robot” is. They may best be understood through the sense-think-act paradigm, which distinguishes robots as any technology that gathers data about the environment through one or more sensors, processes the information in a relatively autonomous fashion, and acts on the physical world. (Jones and Millar 2017, 598)
This definition is, as Bekey (2005, 2) recognizes, “very broad,” encompassing a wide range of different kinds of technologies, artifacts, and devices. But it could be too broad insofar as it may be applied to all kinds of artifacts that exceed the proper limits of what many consider to be a robot. As John Jordan (2016, 37) notes: “the sense-think-act paradigm proves to be problematic for industrial robots: some observers contend that a robot needs to be able to move; otherwise, the Watson computer might qualify.” The Nest thermostat provides another complicated case. “The Nest senses: movements, temperature, humidity, and light. It reasons: if there’s no activity, nobody is home to need air conditioning. It acts: given the right sensor input, it autonomously shuts the furnace down. Fulfilling as it does the three conditions, is the Nest, therefore, a robot?” (Jordan 2016, 37). And what about smartphones? According to Joanna Bryson and Alan Winfield (2017, 117) these devices could also be considered robots under this particular characterization. “Robots are artifacts that sense and act in the physical world in real time. By this definition, a smartphone is a (domestic) robot. It has not only microphones but also a variety of proprioceptive sensors that let it know when its orientation is changing or when it is falling.”
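To see how quickly the paradigm over-generates, consider a toy rendering of the three-part test as a checklist (a purely illustrative sketch, not anyone’s proposed criterion; the function name and the device descriptions are hypothetical stand-ins built from the Nest and smartphone examples above):

// Toy rendering of the sense-think-act paradigm as a checklist (illustrative only).
function qualifiesAsRobot(device) {
  return device.senses && device.thinks && device.acts;
}

// The Nest, per Jordan's description: it senses, it reasons, it acts.
var nest = { name: "Nest thermostat", senses: true, thinks: true, acts: true };

// A smartphone, per Bryson and Winfield: it senses and acts in the physical world in real time.
var smartphone = { name: "smartphone", senses: true, thinks: true, acts: true };

console.log(qualifiesAsRobot(nest));       // true
console.log(qualifiesAsRobot(smartphone)); // true

Both devices pass, which is precisely the worry: applied as a test, the paradigm admits artifacts that many observers would hesitate to call robots.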
In order to further refine the definition and delimit with greater precision what is and what is not a robot, Winfield (2012, 8) offers the following list of qualifying characteristics:
A robot is:
Although ostensibly another iteration of sense-think-act, Winfield adds an important qualification to his list—“embodiment”—making it clear that a software bot, an algorithm, or an AI implementation like Watson or AlphaGo is not, strictly speaking, a robot. This is by no means an exhaustive list of all the different ways in which “robot” has been defined, explained, or characterized. What is clear from this sample, however, is that the term “robot” is open to a considerable range of diverse and even divergent denotations. And these “definitions are,” as Jordan (2016, 4) writes, “unsettled, even among those most expert in the field.”
To further complicate matters, words and their definitions are not stable; they evolve over time, often in ways that cannot be anticipated or controlled. This means that the word “robot,” like any word in any language, has been and will continue to be something of a moving target. What had been called “robot” at the time that Čapek introduced the word to the world is significantly different from what was called “robot” during the development of industrial automation in the latter decades of the twentieth century, and this is noticeably different from what is properly identified as “robot” at this particular point in time. These differences are not just a product of technological innovation; they are also a result of human perception and expectation. In an effort to illustrate this point, Jordan (2016, 4) offers the following observation from Bernard Roth of the Stanford Artificial Intelligence Laboratory (SAIL): “My view is that the notion of a robot has to do with which activities are, at a given time, associated with people and which are associated with machines.” As relative capabilities evolve, so do conceptions. “If a machine suddenly becomes able to do what we normally associate with people, the machine can be upgraded in classification and classified as a robot. After a while, people get used to the activity being done by machines, and the device gets downgraded from ‘robot’ to ‘machine’” (Jordan 2016, 4, quoting Bernard Roth). Consequently, what is and what is not a “robot” changes. The same object with the same operational capabilities can at one time be classified as a robot and at another time be regarded as something that is a mere machine (which is, on Roth’s account, presumably “less than” what is called “robot”). These alterations in the status of the object are not simply a product of innovation in technological capability; they are also a consequence of social context and what people think about the object.
Brian Duffy and Gina Joue (2004, 2) make a similar point concerning machine intelligence. “Once problems, originally thought of as hard issues, are solved or even just understood better, they lose the ‘intelligent’ status and become simply software algorithms.” At one point in time, for instance, playing championship chess was considered a sign of “true intelligence,” so much so that experts in AI and robotics stated, quite confidently, that computers would never achieve (or at least were many decades away from achieving) championship-level play in the game of chess (Dreyfus 1992, Hofstadter 1979). Then, in 1997, IBM’s Deep Blue defeated Garry Kasparov. In the wake of this achievement, concepts regarding intelligence and the game of chess have been recalibrated. Deep Blue, in other words, did not prove that the machine was as intelligent as a human being by way of playing a game previously thought to have been a sign of true intelligence. Instead it demonstrated, as Douglas Hofstadter (2001, 35) points out, “that world-class chess-playing ability can indeed be achieved by brute force techniques—techniques that in no way attempt to replicate or emulate what goes on in the head of a chess grandmaster.” Deep Blue, therefore, transformed the hard problem of chess into just another clever computer application. What changed was not just the technological capabilities of the mechanism (and Deep Blue did, in fact, integrate a number of recent technical innovations), but also the way people thought about the game of chess and intelligence.
“Robot,” therefore, is not some rigorously defined, singular kind of thing that exists in a vacuum. What is called “robot” is something that is socially negotiated such that word usage and terminological definitions shift along with expectations for, experience with, and use of the technology. Consequently, one needs to be sensitive to the fact that whatever comes to be called “robot” is always socially situated. Its context (or contexts, because they are always plural and multifaceted) is as important as its technical components and characterizations. What is and what is not a robot, then, is as much a product of science and engineering practice as it is an effect of changing social processes and interactions.
“Robot” is complicated. It is something that is as much an invention of fiction as it is of actual R&D efforts. Its definition and characterization, even among experts in the field, are unsettled and open to considerable slippage and disagreement. And, like all terms, the way the word is used and that to which it applies changes with alterations in both the technology and social context. One typical and expedient response to this kind of terminological trouble, especially in a text or published study, is to begin by clarifying the vocabulary and/or offering an operational definition. George Bekey’s Autonomous Robots: From Biological Inspiration to Implementation and Control (2005, 2) provides a good example of how this operates: “In this book we define a robot as …” This is something of a standard procedure, and different versions of this kind of terminological declaration are evident throughout the literature. Raya Jones (2016, 5), for instance, begins the book Personhood and Social Robotics with the following: “For the book’s purpose, a serviceable definition of social robot is physically embodied intelligent systems that enter social spaces in community and domestic settings” (italics in the original). Likewise for Ryan Calo, A. Michael Froomkin, and Ian Kerr’s edited collection of essays, Robot Law (2016), which operationalizes a version of the sense-think-act paradigm:
Most people, and undoubtedly all the contributors to this volume, would agree that a man-made object capable of responding to external stimuli and acting on the world without requiring direct—some might say constant—human control was a robot, although some might well argue for a considerably broader definition. Three key elements of this relatively narrow, likely under-inclusive, working definition are: (1) some sort of sensor or input mechanism, without which there can be no stimulus to react to; (2) some controlling algorithm or other system that will govern the responses to the sensed data; and (3) some ability to respond in a way that affects or at least is noticeable by the world outside the robot itself. (Calo et al. 2016, xi)
These declarations and working definitions are both necessary and expedient. They are needed to get the analysis underway and to identify, if only for the purpose of the investigation at hand, the particular kinds of objects to be included in the examination and those other kinds of things that can be excluded from further consideration. In defining robots as “physically embodied intelligent systems,” for example, Jones makes a decision that eliminates an entire class of objects from further consideration: “this excludes disembodied automated response systems, search engines, etc.” (Jones 2016, 5–6). Such decisions are, as Jones recognizes, unavoidably exclusive. They make a decisive cut—de + caedere (“to cut”)—that says, in effect, these entities are to be included in the consideration that follows, while these other seemingly related objects are to be excluded and not given further attention, at least not in this particular context. Despite its usefulness, however, this kind of decision-making does have problems and consequences. It is, whether we recognize it as such or not, an expression and exercise of power. In deciding what is included and what is excluded, someone, or some group, bestows on themselves the right to declare what is inside and what is to be eliminated and left outside of the examination. And these exclusive and exclusionary decisions matter, especially in the context of an investigation that seeks to be attentive to moral questions and the social/political complications involved in the exercise of power.
Instead of instituting this kind of univocal decision—even a temporary determination that is in operation for the time being—we can alternatively hold the term open and recognize the opportunity that is available with such plurivocity and semantic drift. “Robot” already allows for and encompasses a wide range of different concepts, entities, and characterizations. It therefore is already the site of a conversation and debate about technology and its social position and status, and we should not be too quick to close off the possibilities that this lexical diversity enables and makes available. As Andrea Bertolini (2013, 216) argues, “all attempts at providing an encompassing definition are a fruitless exercise: robotic applications are extremely diverse and more insight is gained by keeping them separate.” For this reason, Bertolini (2013, 219) does not define robot once and for all but provides a classificatory schema that allows for and tolerates polysemia:
If, then, a notion of robot is to be elaborated for merely descriptive—thus neither qualifying nor discriminating—purposes, it may be as follows: a machine which (i) may either have a tangible physical body, allowing it to interact with the external world, or rather have an intangible nature—such as a software or program, (ii) which in its functioning is alternatively directly controlled or simply supervised by a human being, or may even act autonomously in order to (iii) perform tasks, which present different degrees of complexity (repetitive or not) and may entail the adoption of non-predetermined choices among possible alternatives, yet aimed at attaining a result or providing information for further judgment, as so determined by its user, creator or programmer, (iv) including but not limited to the modification of the external environment, and which in so doing may (v) interact and cooperate with humans in various forms and degrees.
Adopting this kind of descriptive (although much more complex) framework—one that is tolerant of and can accommodate a range or an array of different connotations and aspects—allows for a term, like “robot,” to be more flexible and for the analysis that follows to be more responsive to the ways the word is actually utilized and applied across different texts, social contexts, research efforts, historical epochs, etc. To put it in the terminology of computer programming, it means that “robot” is not a scalar, a variable taking one value that can be declared and assigned right at the beginning. It is more like an array, a multi-value variable that permits and allows for a range of different but related characteristics.
var robot = new Array("sense-think-act", "embodied", "autonomous", …);
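To extend the analogy, a minimal sketch in the same illustrative register (the particular elements are drawn from the characterizations surveyed above and carry no definitional authority):

// A scalar: one value, declared and assigned once and for all.
var robotAsScalar = "sense-think-act";

// An array: several related characterizations held open together.
var robotAsArray = ["sense-think-act", "embodied", "autonomous", "socially situated"];

On this way of thinking, asking whether something is a robot becomes a question of which of these characteristics are in play in a given context, not a comparison against a single assigned value.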
This does not mean, however, that anything goes and that “robot” is whatever one wants or declares it to be. It means, rather, paying attention to how the term “robot” comes to be deployed, defined, and characterized in the scholarly, technical, and popular literature, including fiction; how the term’s connotations shift over time, across different contexts, and even (at times) within the same text; and how these variations relate to and have an impact on the options, arguments, and debates concerning the moral situation and status of our technological artifacts.
Like the term “robot,” the word “rights” appears to identify something that is immediately familiar and seemingly well-understood. Conversations about rights and their recognition, protection, and/or violation are rather common in contemporary discussions about moral, legal, and political matters. “The discourse of rights is,” as Tom Campbell (2006, 3) notes, “pervasive and popular in politics, law and morality. There is scarcely any position, opinion, claim, criticism or aspiration relating to social and political life that is not asserted and affirmed using the term ‘rights.’ Indeed, there is little chance that any cause will be taken seriously in the contemporary world that cannot be expressed with a demand for the recognition or enforcement of rights of one sort or another.” But it is precisely this pervasiveness and widespread usage that is the problem and a potential obstacle to clear understanding. “Rights” is a rather loose signifier that is applied in ways that are not always consistent or even carefully delineated. Even experienced jurists, as Wesley Hohfeld (1920) noted nearly a hundred years prior to the current volume, have a proclivity to conflate different senses of the term and oftentimes mobilize varied (if not contradictory) understandings of the word in the course of a decision or even a single sentence.
In order to redress this perceived deficiency and lend some terminological precision and clarity to the use of the term “rights,” Hohfeld developed a classification schema, analyzing rights into four fundamental types, or what are commonly called the “Hohfeldian incidents.” Although Hohfeld’s work was initially developed in the context of law and therefore applies to the concept of legal rights, his typology has been successfully ported to, modified for, and used to explain moral and political rights as well.2 The Hohfeldian incidents include the following four types of rights: privileges, claims, powers, and immunities. These four incidents are further organized into two groupings: the first two (privileges and claims) are considered “primary rights” (Hart 1961) or “first-order incidents” (Wenar 2005, 232), while the other two (powers and immunities) fall under the category of “secondary rights” (Hart 1961) or “second-order incidents” (Wenar 2005, 232). Explanations and analyses of the four incidents can be found throughout the literature on rights in works of social and political philosophy (MacCormick 1982, Gaus and D’Agostino 2013), moral philosophy (Sumner 1987, Steiner 1994), and law (Hart 1961, Dworkin 1977, Raz 1986 and 1994). The following characterizations are taken from Leif Wenar’s “The Nature of Rights” (2005):
(1) Privileges (also called “liberty” or “license”) describe situations where “A has a right to φ,” or stated more precisely:
“A has a Y right to φ” implies “A has no Y duty not to φ.”
(where “Y” is “legal,” “moral,” or “customary,” and φ is an active verb) (Wenar 2005, 225).
As an illustration of this type of right, Wenar (2005) offers the following: “A sheriff in hot pursuit of a suspect has the legal right to break down the door that the suspect has locked behind him. The sheriff’s having a legal right to break down the door implies that he has no legal duty not to break down the door” (225). The sheriff has a privilege (to break down the door) that overrides or “trumps” (which is the term utilized by Dworkin 1984) the obligation or duty that customarily applies in these situations, i.e., that one should not go around breaking down closed doors.
(2) Claims describe situations where “A has a right that B φ.” “This second fundamental form of rights-assertion,” as Wenar (2005, 229) explains, “often implies not a lack of a duty in the rightholder A, but the presence of a duty in a second party B.”
“A has a Y right that B φ” implies “B has a Y duty to A to φ.”
(where “Y” is “legal,” “moral,” or “customary,” and φ is an active verb) (Wenar 2005, 229).
Again examples help clarify how this incident operates: “Your right that I not strike you correlates to my duty not to strike you. Your right that I help you correlates to my duty to help you. Your right that I do what I promised correlates to my duty to do what I promised” (229).
The second-order incidents can be understood and formulated as modifications of the first-order incidents. “We have,” Wenar (2005, 230) explains, “not only privileges and claims, but rights to alter our privileges and claims, and rights that our privileges and claims not be altered.” The former modification is called a “power”; the latter is called an “immunity.”
(3) Powers. The power modifies the first of the first-order incidents—those propositions that take the form “A has a right to φ”—by imposing some restriction on oneself or another. “To have a power,” Wenar (2005, 231) writes, “is to have the ability within a set of rules to alter the normative situation of oneself or another. Specifically, to have a power is to have the ability within a set of rules to create, waive, or annul some lower-order incident(s).” The power, therefore, takes the following logical form:
“A has a power if and only if A has the ability to alter her own or another’s Hohfeldian incidents” (Wenar 2015, 2).
As an example, Wenar offers the following rather odd description: “A ship’s captain has the power-right to order a midshipman to scrub the deck. The captain’s exercise of this power changes the sailor’s normative situation: it imposes a new duty upon the sailor and so annuls one of his Hohfeldian privileges (not to scrub the deck)” (Wenar 2015, 2). Although a bit odd and not necessarily accessible to individuals lacking personal experience with naval operations, the example does provide an adequate illustration of what is typically involved in the Hohfeldian incident called “power.”
(4) Immunities are modifications of the second of the first-order incidents. “The immunity, like the claim,” Wenar (2005, 232) explains, “is signaled by the form ‘A has a right that B φ’ (or, more commonly, ‘... that B not φ’). Rights that are immunities, like many rights that are claims, entitle their holders to protection against harm or paternalism.” The immunity can, therefore, be described as such:
“B has an immunity if and only if A lacks the ability to alter B’s Hohfeldian incidents” (Wenar 2015, 3).
As an illustration of the immunity, Wenar offers an example derived from US constitutional law: “The United States Congress lacks the ability within the Constitution to impose upon American citizens a duty to kneel daily before a cross. Since the Congress lacks a power, the citizens have an immunity. This immunity is a core element of an American citizen’s right to religious freedom” (Wenar 2015, 3). Again the example is rather specific and limited to constitutional law and political freedom as formulated in the United States, but it does provide a sufficient illustration of how the concept of immunity operates.
There are two additional items to note in Hohfeld’s typology. First, the four types of rights or incidents are all formulated in such a way that each one necessitates a correlative duty. If one has a right (a privilege, a claim, a power, or an immunity), it means that there is an associated duty imposed upon another who is obligated to respect that right. Or as Marx and Tiefensee (2015, 71) explain: “The ‘currency’ of rights would not be of much value if rights did not impose any constraints on the actions of others. Rather, for rights to be effective they must be linked with correlated duties.” Hohfeld, therefore, initially presents the four incidents in terms of rights/duties pairs:
If A has a Privilege, then someone (B) has a No-claim.
If A has a Claim, then someone (B) has a Duty.
If A has a Power, then someone (B) has a Liability.
If A has an Immunity, then someone (B) has a Disability.
This means that a statement concerning rights can be considered either from the side of the possessor of the right (the “right” that A has or is endowed with), which is a “patient oriented” way of looking at a moral, legal, or political situation; or from the side of the agent (what obligations are imposed on B in relationship to A), which considers the responsibilities of the producer of a moral, legal, or social/political action.
Second, although each of the incidents can by itself specify a particular right, the majority of rights are composed of a combination of more than one incident, or what Wenar (2005, 234) calls a “molecular right.” A good example of this is property rights, specifically the right the owner of a piece of technology, like a computer, has concerning this object:
The “first-order” rights are your legal rights directly over your property—in this case, your computer. The privilege on this first level entitles you to use your computer. The claim correlates to a duty in every other person not to use your computer. The “second-order” rights are your legal rights concerning the alteration of these first-order rights. You have several powers with respect to your claim—you may waive the claim (granting others permission to touch the computer), annul the claim (abandoning the computer as your property), or transfer the claim (making the computer into someone else’s property). Also on the second order, your immunity prevents others from altering your first-order claim over your computer. Your immunity, that is, prevents others from waiving, annulling, or transferring your claim over your computer. The four incidents together constitute a significant portion of your property right. (Wenar 2015, 4)
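Read as a data structure, the typology and Wenar’s property example can be sketched roughly as follows (an illustrative rendering only, built from the correlative pairs and the computer example quoted above; the object and field names are hypothetical):

// The four Hohfeldian incidents paired with their correlatives.
var correlatives = {
  privilege: "no-claim",
  claim: "duty",
  power: "liability",
  immunity: "disability"
};

// A "molecular" right as a bundle of incidents: Wenar's property right over a computer.
var propertyRight = {
  firstOrder: [
    { incident: "privilege", holder: "owner", content: "use the computer" },
    { incident: "claim", holder: "owner", against: "everyone else", content: "that they not use the computer" }
  ],
  secondOrder: [
    { incident: "power", holder: "owner", content: "waive, annul, or transfer the first-order claim" },
    { incident: "immunity", holder: "owner", against: "everyone else", content: "that they not alter the first-order claim" }
  ]
};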
Because the Hohfeldian incidents either individually or in some combination describe most, if not all, situations and circumstances involving what are typically called rights, Wenar concludes that the Hohfeldian incidents are complete. “Any assertion of a right can be translated into an assertion about a single Hohfeldian incident, or into an assertion about a complex of incidents, or into a set of alternative assertions about such incidents. All rights are Hohfeldian incidents” (Wenar 2005, 235). The proof of this statement, Wenar admits, is inductive: “Our confidence in this inductive step will increase as we successfully explicate more and more rights with the Hohfeldian diagrams, and as we fail to find counterexamples. The reader may want to satisfy himself or herself that confidence in this inductive step is justified, and may wish to test the framework with more sample rights” (236). Though he is reasonably confident that the Hohfeldian incidents are complete and entirely sufficient for characterizing anything and everything that would be considered a right, this particular statement leaves the door open to the possibility that, at some point, a right might be discovered or identified that escapes the grasp of the Hohfeldian typology. Consequently, we will need to hold open the possibility that “robot rights” might, on the one hand, fit entirely within the typology, or, on the other hand, could provide the kind of counterexample that would decrease Wenar’s confidence.
Though Hohfeld’s typology defines what rights are, it does not explain who has a right or why. This is the purview of theory, and there are two competing candidates concerning this issue. “One such theory,” as Campbell (2006, 43) explains, “is the ‘will’ (or ‘choice’ or ‘power’) theory. The other is the ‘interest’ (or ‘benefit’ or ‘wellbeing’) theory.” Interest theories connect rights to matters of welfare. “Interest theorists maintain that individuals are capable of having rights if they possess certain interests, the value of which provides reason to impose duties on others” (Marx and Tiefensee 2015, 72). Although there is some variation in the way this has been articulated and formalized, “any doctrine classifiable as an Interest Theory of rights would,” as Matthew Kramer (1998, 62) has explained, “subscribe to the following two theses”:
“Will” theorists, by contrast, require that the subject of a right possess the authority and/or capacity to assert the privilege, claim, power, or immunity. “According to advocates of the Will or Choice Theory, only those individuals qualify as right-holders who have the ability to exercise a right. That is, only a person who can choose either to impose or waive the constraints that her right places on others’ conduct can be seen to have rights” (Marx and Tiefensee 2015, 72). Once again there is some variability in the literature concerning how this particular theory is formulated. But all or most of them, as Kramer (1998, 62) argues, share these three principles:
Although a great deal has been written about these two theories and their importance in the construction and development of rights discourse, we can, for the purposes of what follows, take note of two important consequences. First, comparing the common elements of these two theories, it is immediately evident that one of the major differences between Will Theory and Interest Theory is scope, that is, determinations regarding who can be included in the community of entities possessing rights and what is to be excluded as rights-less. Will Theory is designed to be more narrow, restrictive, or conservative on this account, while Interest Theory admits and can accommodate a wider range of entities. In other words, because Will Theory requires rights’ holders to be “competent and authorized to demand or waive the enforcement of the right,” it excludes from consideration human persons of diminished mental capacity, children, animals, the environment, etc. By contrast, “the Interest Theory can readily ascribe rights to children and to mentally incapacitated people (and indeed to animals, if a theorist wishes); it can acknowledge that the criminal law bestows individual rights on the people who are protected by the various statutes and other mandates that impose criminal duties; and it maintains that any genuine right does not have to be waivable and enforceable by the right-holder but can be waivable and enforceable by someone else instead” (Kramer 1998, 78).
Second, each theory has its advocates and champions. Matthew Kramer (1998), for instance, is a vehement advocate of the Interest Theory, arguing that it provides a more adequate formulation of the way that rights actually operate than the more restrictive Will Theory, which risks excluding entire populations of legitimate moral and legal subjects; while Hillel Steiner (1998, 234) defends Will Theory, arguing that it “offers a perfectly general account of what it is to have a right in any type of society” and that “the account it offers is distinctly more plausible than that afforded by the Interest Theory.” The debate between these two factions is ongoing and appears to be unresolvable. “The long and unresolved historical contest between these two single-function theories,” Wenar (2005, 238) concludes, “stretches back through Bentham (an interest theorist) and Kant (a will theorist) into the Dark Ages.” And its most recent round, staged in Kramer, Simmonds, and Steiner’s A Debate Over Rights (1998), ended in what Wenar (2015) and others have acknowledged as a “stalemate.” For that reason, it would be impetuous to think that we can, in the course of this analysis, either resolve the debate once and for all or declare and defend the choice of one theory to the exclusion of the other. As with the polysemia that is involved with the word “robot,” there is more to be gained by tolerating and explicitly accounting for differences in the theory of rights in order to be responsive to the diverse ways that rights have been mobilized in the existing literature.
This means not selecting between winners and losers in the debate but recognizing what each theory makes available, explicitly identifying when one is operationalized and asserted over the other, and ascertaining how this difference is able to supply critical perspective for better understanding what is gained and what is lost in the process of operationalizing one theoretical perspective as opposed to the other. We will, therefore, take the approach that Slavoj Žižek (2006a) calls “the parallax view.” “Parallax” is the term Žižek deploys (in something of an appropriation and repurposing of the word’s standard connotation) to name an irreducible difference in perspective that is not programmed for, nor determined to result in, some kind of final mediation or dialectical resolution. “The two perspectives,” Žižek (2006a, 129) writes, “are irretrievably out of sync, so that there is no neutral language to translate one into the other, even less to posit one as the ‘truth’ of the other. All we can ultimately do in today’s condition is to remain faithful to this split as such, to record it.” With the two competing theories of rights, therefore, the truth does not reside on one side or the other or in their mediation through some kind of dialectical synthesis or hybrid construction (cf. Wenar 2015). The truth of the matter can be found in the shift of perspective from the one to the other. The critical task, therefore, is not to select the right theory of rights but to mobilize theory in the right way, recognizing how different theories of rights frame different problematics, make available different kinds of inquiries, and result in different possible outcomes.
“The notion of robot rights,” as Seo-Young Chu (2010, 215) points out, “is as old as is the word ‘robot’ itself. Etymologically the word ‘robot’ comes from the Czech word ‘robota,’ which means ‘forced labor.’ In Karel Čapek’s 1921 play R.U.R., which is widely credited with introducing the term ‘robot,’ a ‘Humanity League’ decries the exploitation of robot slaves—‘they are to be dealt with like human beings,’ one reformer declares—and the robots themselves eventually stage a massive revolt against their human makers.” For many researchers and developers, however, the very idea of “robot rights” is simply unthinkable. “At present,” Sohail Inayatullah and Phil McNally (1988, 123) once wrote, “the notion of robots with rights is unthinkable whether one argues from an ‘everything is alive’ Eastern perspective or ‘only man is alive’ Western perspective.” But why? How and why is the mere idea of “robot rights” inconceivable? What does it mean, when the mere thought of something is already decided and declared to be unthinkable?
Considering the impact of his earlier (1988) essay with Phil McNally, Inayatullah (2001, 1) provided the following reflection concerning the question of robot rights:
Many years ago in the folly of youth, I wrote an article with a colleague titled, “The Rights of Robots.” It has been the piece of mine most ridiculed. Pakistani colleagues have mocked me saying that Inayatullah is worried about robot rights while we have neither human rights, economic rights or rights to our own language and local culture—we have only the “right to be brutalized by our leaders” and Western powers. Others have refused to enter in collegial discussions on the future with me as they have been concerned that I will once again bring up the trivial. As noted thinker Hazel Henderson said, I am happy to join this group—an internet listserv—as long as the rights of robots are not discussed. But why the ridicule and anger?
As Inayatullah reports from his own experience, the mere idea of robot rights—not even advocacy for rights but the suggestion that the question might require some careful consideration—has produced anger, mockery, and ridicule in colleagues and noted thinkers. In the face of this seemingly irrational response, Inayatullah asks a simple but important question, “why the ridicule and anger?” In response to this, he calls upon the work of Christopher Stone, who once pointed out that any extension of rights to previously excluded populations—like animals or the environment—is always, at least initially, considered unthinkable and the target of ridicule. “Throughout legal history,” Stone (1974, 6–7) writes, “each successive extension of rights to some new entity has been, theretofore, a bit unthinkable. … The fact is, that each time there is a movement to confer rights onto some new ‘entity,’ the proposal is bound to sound odd or frightening or laughable.” This mode of response is, for example, evident in the editors’ note appended to the beginning of a reprint of Marvin Minsky’s “Alienable Rights”: “Recently we heard some rumblings in normally sober academic circles about robot rights. We managed to keep a straight face as we asked Marvin Minsky, MIT’s grand old man of artificial intelligence, to address the heady question” (Minsky 2006, 137).3 From the outset, the question of robot rights is presumed to be ludicrous, so much so that one struggles to keep a straight face in the face of the mere idea. Whatever the exact reason for this kind of reaction—whether it be fear, resentment, prejudice, etc.4—the effect of ridicule is to discredit or to dismiss the very idea.
Similar responses have been registered and documented elsewhere in the literature. In 2006, the UK Office of Science and Innovation’s Horizon Scanning Centre commissioned and published a report consisting of, as Raya Jones (2016, 36) explains, “246 summary papers that predict emerging trends in science, health, and technology.” The papers, or “scans” as Sir David King referred to them, were “aimed at stimulating debate and critical discussion to enhance government’s short and long term policy and strategy” (BBC News 2006). Despite the fact that these scans (which had been contracted to and developed by Ipsos MORI in 2006) addressed a number of potentially controversial subjects, only one was “singled out to make headlines” (Jones 2016, 36). Ipsos MORI titled the article: “Robo-rights: Utopian Dream or Rise of the Machines?” (which was called “Robot Rights: Utopian Dream or Rise of the Machines?” in Winfield 2007 and “Robo-rights: Utopian Dream or Rise of the Machine?” in Bensoussan and Bensoussan 2015). What caught everyone’s attention was the following statement: “If artificial intelligence is achieved and widely deployed (or if they can reproduce and improve themselves) calls may be made for human rights to be extended to robots” (Marks 2007; Ipsos MORI 2006). The statement was short and the report itself, which was “a little over 700 words” (Winfield 2007), did not supply much in terms of actual details or evidence. But it definitely had impact. It was covered by and widely reported in the popular press, with attention grabbing headlines like “UK report says robots will have rights” (Davoudi 2006) and “Robots could demand legal rights” (BBC News 2006).
The response from researchers and experts in the field was highly critical.5 As Robert M. Geraci (2010, 190) explained: “The Ipsos MORI document was reviled by some scientists in the UK. Rightly criticizing the study for its minimal research documentation, Owen Holand [SIC], Alan Winfield, and Noel Sharkey claim the Ipsos MORI document directs attention away from the real issue, especially military robots and responsibility for autonomous robot-incurred death or damage (Henderson 2007). The scientists gathered at the Dana Centre in London in April of 2007 to share their ideas with the public.” This account of the sequence of events is not entirely accurate. Holland, Sharkey, and Winfield—along with Kathleen Richardson, whose name is not, for some reason, included in the published news stories—shared their comments about the Ipsos MORI report one day before the Dana Centre event during a press briefing at the Science Media Centre (SMC). In their remarks to the press, Holland, Sharkey, and Winfield—who together with Frank Burnett were co-PIs on the “Walking With Robots” research program, a multi-year public information project funded by the UK’s Engineering and Physical Sciences Research Council (2005)—not only criticized the report but reportedly dismissed the concept of robot rights as a “waste of time and resources” (Geraci 2010, 190) and a “red herring that has diverted attention from more pressing ethical issues” (Henderson 2007). As Holland explained, the report was “very shallow, superficial, and poorly informed. I know of no one within the serious robotics community who would use that phrase, ‘robot rights.’” (Henderson 2007). And according to Sharkey “the idea of machine consciousness and rights is a distraction, it’s fairy tale stuff. We need proper informed debate, about the public safety about for instance the millions of domestic robots that are predicted to be arriving in the next few years” (Henderson 2007 and Marks 2007).
By all accounts, the report was insufficient, perhaps necessarily so, since it was not a peer-reviewed scientific study but a short thought-piece designed to motivate public discussion. There is, however, a bit of oversteer in these critical responses. In addition to directing criticism at the document and its perceived deficiencies, the respondents also characterized the entire subject of robot rights as a fantastic distraction from the serious work that needs to be done and therefore something no serious researcher would think about, let alone say out loud. In all fairness, it is entirely possible that the remarks were motivated by a well-intentioned effort to direct public attention to the “real problems” and away from dramatic media hype and futurist speculation. And this is undoubtedly one of the important and necessary objectives of science education. But even if we grant this, the comments demonstrate something of a dismissive response to the question of robot rights and its relative importance (or lack thereof) in current research programs. Even when taking a more measured and less polemical approach, as Winfield did in a blog post from February of 2007, the “rights question” is, if not dismissed, then indefinitely delayed by being put aside or pushed off into an indeterminate future.
Ok, let’s get real. Do I think robots will have (human) rights within 20–50 years? No, I do not. Or to put it another way, I think the likelihood is so small as to be negligible. Why? Because the technical challenges of moving from insect-level robot intelligence, which is more or less where we are now, to human-level intelligence are so great. Do I think robots will ever have rights? Well, perhaps. In principle I don’t see why not. Imagine sentient robots, able to fully engage in discourse with humans, on art, philosophy, mathematics; robots able to empathi[z]e or express opinions; robots with hopes, or dreams. Think of Data from Star Trek. It is possible to imagine robots smart, eloquent and persuasive enough to be able to argue their case but, even so, there is absolutely no reason to suppose that robot emancipation would be rapid, or straightforward. After all, even though the rights of man as now generally understood were established over 200 years ago, human rights are still by no means universally respected or upheld. Why should it be any easier for robots? (Winfield 2007).
In this post, Winfield does not dismiss the concept of robot rights per se, but comes at it directly. In the short term (20–50 years), it appears to be, if not impossible, then at least highly improbable. In the long run, it may be something that is, in principle at least, not impossible. This possibility, however, is projected into a distant hypothetical future on the basis of the more exclusive Will Theory of rights, i.e., that robots would be “smart, eloquent and persuasive enough to be able to argue their case.” The question concerning robot rights, therefore, is not “unthinkable”; it is just unlikely and practically not worth the effort at this particular point in time. As Holland had succinctly stated it during the press briefing, “it’s really premature I think to discuss robot rights” (Randerson 2007).
A similar decision is operationalized in the official response issued by the Foundation for Responsible Robotics to the European Parliament’s recently published recommendations to the European Commission concerning robots and AI (Committee on Legal Affairs 2016): “Some of the wording in the document is disappointingly based on Science Fiction. Notions of robot rights and robot citizenship are really a distraction from the main thrust of the document that only served to catch the wrong type of media attention” (Sharkey and Fosch-Villaronga 2017). Likewise, Luciano Floridi (2017, 4) suggests that thinking and talking about the “counterintuitive attribution of rights” to robots is a distraction from the serious work of philosophy. “It may be fun,” Floridi (2017, 4) writes, “to speculate about such questions, but it is also distracting and irresponsible, given the pressing issues we have at hand. The point is not to decide whether robots will qualify someday as a kind of persons, but to reali[z]e that we are stuck within the wrong conceptual framework. The digital is forcing us to rethink new solutions for new forms of agency. While doing so we must keep in mind that the debate is not about robots but about us, who will have to live with them, and about the kind of infosphere and societies we want to create. We need less science fiction and more philosophy.” According to Floridi, it might be fun to speculate about things like robot rights, but this kind of playful thinking is a distraction from the real work that needs to be done and quite possibly is an irresponsible waste of time and effort that should be spent on more serious philosophical endeavors.
In other situations, the question of robot rights is not immediately ridiculed or dismissed; it is given some brief attention only to be bracketed or carefully excluded as an area that shall not be given further thought. One of the best illustrations of this is deployed by Michael Anderson, Susan Leigh Anderson, and Chris Armen (2004) in the agenda-setting paper that launched the new field of machine ethics. For Anderson et al., machine ethics (ME) is principally concerned with the consequences of the decisions and behaviors of machines toward human beings. For this reason, Anderson et al. immediately differentiate ME from two related efforts: “Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines.” The first of these is computer ethics, which is concerned, as Anderson and company correctly point out, with questions of human action through the instrumentality of computers and related information systems. In clear distinction from these efforts, machine ethics seeks to enlarge the scope of moral agents by considering the ethical status and actions of machines. As Anderson and Anderson (2007, 15) describe it in a subsequent publication, “the ultimate goal of machine ethics, we believe, is to create a machine that itself follows an ideal ethical principle or set of principles.” The other exclusion involves situations regarding rights, or “how human beings ought to treat machines.” This also does not fall under the purview of ME, and Anderson, Anderson, and Armen explicitly mark it as something to be set aside from their own endeavors. Although the “question of whether intelligent machines should have moral standing,” Susan Leigh Anderson (2008, 480) writes in another article, appears to “loom on the horizon,” ME deliberately and explicitly pushes this issue to the margins, leaving it for others to consider. Here we can perceive how that which is marginalized appears; it makes an appearance in the text only insofar as it is set aside and not given further consideration. It appears, as Derrida (1982, 65) describes it, by way of the trace of its erasure. This setting-aside or exclusion leaves a residue—a mark within the text of something being excluded—that remains legible and can be read.
A similar decision is deployed in Wendell Wallach and Colin Allen’s Moral Machines (2009), and is clearly evident in their choice of the term “artificial moral agent,” or AMA, as the protagonist of the analysis. This term immediately focuses attention on the question of responsibility. Despite this exclusive concern, however, Wallach and Allen (2009, 204–207) do eventually give brief consideration to the issue of rights. Extending the concept of legal responsibility to AMAs is, in Wallach and Allen’s opinion, something of a no-brainer: “The question whether there are barriers to designating intelligent systems legally accountable for their actions has captured the attention of a small but growing community of scholars. They generally concur that the law, as it exists, can accommodate the advent of intelligent (ro)bots. A vast body of law already exists for attributing legal personhood to nonhuman entities (corporations). No radical changes in the law would be required to extend the status of legal person to machines with higher-order faculties, presuming that the (ro)bots were recognized as responsible agents” (Wallach and Allen 2009, 204). By Wallach and Allen’s estimation, a decision concerning the legal status of AMAs should not pose any significant problems. Most scholars, they argue, already recognize that this is adequately anticipated by and has a suitable precedent in available legal and judicial practices, especially as it relates to the corporation.
What is a problem, in their eyes, is the flip side of legal responsibility—the question of rights. “From a legal standpoint,” Wallach and Allen continue, “the more difficult question concerns the rights that might be conferred on an intelligent system. When or if future artificial moral agents should acquire legal status of any kind, the question of their legal rights will also arise” (Wallach and Allen 2009, 204). Although noting the possibility and importance of the question, at least as it would be characterized in legal terms, they do not pursue its consequences very far. In fact, they mention it only to defer it to another kind of question—a kind of investigative bait and switch: “Whether or not the legal ins and outs of personhood can be sorted out, more immediate and practical for engineers and regulators is the need to evaluate AMA performance” (Wallach and Allen 2009, 206). Consequently, Wallach and Allen conclude Moral Machines by briefly gesturing in the direction of a consideration of robot rights only to refer this question back to the issue of agency and performance measurement. In this way, then, they briefly address the rights question only to immediately recoil from the complications it entails, namely, the persistent philosophical problem of having to sort out “the legal ins and outs of personhood.” Although not simply passing over the question of robot rights in silence, Wallach and Allen, like Anderson et al., only mention it in order to postpone or otherwise exclude the issue from further consideration.
In other cases, the exclusion is not even explicitly marked or justified; it is simply declared and enacted. The EURON Roboethics Roadmap (Veruggio 2006), for instance, does not have anything to say (not even a negative exclusion) about the social standing or rights of robots. As Yueh-Hsuan Weng, Chien-Hsun Chen, and Chuen-Tsai Sun (2009, 270) report, “the Roboethics Roadmap does not consider potential problems associated with robot consciousness, free will, and emotions” but takes an entirely human-centric approach. The Roadmap, then, appears to pursue what is a rather reasonable path. The scope of the report is limited to a decade, and, for this reason, the authors of the document set aside the more speculative questions concerning future achievements with robot capabilities. Though the question of robot rights is thinkable in principle, in practice it is left off the table as a matter of setting some practical limits to the investigation.
In terms of scope, we have taken into consideration—from the point of view of the ethical issues connected to Robotics—a temporal range of a decade, in whose frame we could reasonably locate and infer—on the basis of the current State-of-the-Art in Robotics—certain foreseeable developments in the field. For this reason, we consider premature—and have only hinted at—problems inherent in the possible emergence of human functions in the robot: like consciousness, free will, self-consciousness, sense of dignity, emotions, and so on. Consequently, this is why we have not examined problems—debated in the literature—like the need to consider robot as our slaves, or the need to guarantee them the same respect, rights and dignity we owe to human workers. (Veruggio 2006, 7)
In this document, the question of rights is deliberately set aside as a matter that is premature insofar as it requires advancements in robotics (i.e., consciousness, free will, emotions, etc.) that remain futuristic, debatable, and outside the scope of the study’s ten-year window of opportunity.
Not surprisingly, a similar-sounding decision is instituted by Gianmarco Veruggio (the project coordinator of the EURON Roboethics Roadmap) and Fiorella Operto in a subsequent contribution to the Springer Handbook of Robotics:
In terms of scope, we have taken into consideration—from the point of view of the ethical issues connected to robotics—a temporal range of two decades, in whose frame we could reasonably locate and infer—on the basis of the current state-of-the-art in robotics—certain foreseeable developments in the field. For this reason, we consider premature—and have only hinted at—problems related to the possible emergence of human qualities in robots: consciousness, free will, self-consciousness, sense of dignity, emotions, and so on. Consequently, this is why we have not examined problems—debated in some other papers and essays—like the proposal to not behave with robots like with slaves, or the need to guarantee them the same respect, rights, and dignity we owe to human workers. Likewise, and for the same reasons, the target of roboethics is not the robot and its artificial ethics, but the human ethics of the robots’ designers, manufacturers, and users. Although informed about the issues presented in some papers on the need and possibility to attribute moral values to robots’ decisions, and about the chance that in the future robots might be moral entities like—if not more so than—human beings, the authors have chosen to examine the ethical issues of the human beings involved in the design, manufacturing, and use of the robots. (Veruggio and Operto 2008, 1501)
Here again, Veruggio (who is credited with initially introducing and defining the project of roboethics) and Operto limit consideration to the near-term—two decades. They therefore explicitly set aside and postpone any consideration of robots as either independent agents or patients and focus their attention on a kind of elaboration of computer ethics, that is, the ethical issues encountered by human beings in the design, manufacture, and use of robots.
A similar future-oriented disclaimer is deployed in Michael Nagenborg et al.’s critical investigation of robot regulation across Europe. In this text, the authors consider a number of issues regarding “responsibility and autonomous robots,” but they set aside the question of rights as not pertinent to the matter at hand: “In the context of this paper we therefore assume that artificial entities are not persons and not the bearers of individual, much less civil, rights. This does not imply that in a legal and ethical respect we could not grant a special status to robots nor that the development of artificial persons can be ruled out in principle. However, for the present moment and the nearer future we do not see the necessity to demand a fundamental change of our conception of legality. Thus, we choose a human-centred approach” (Nagenborg et al. 2008, 350). Nagenborg and company can safely and justifiably make this assumption, which, to their credit, they recognize and identify as an “assumption,” because robots are widely regarded as things that are not “bearers of individual, much less civil, rights” (2008, 350). This could, the authors recognize, change at some point in the future, and that change, in distinction to what Winfield (2007) had argued, could be made and justified in terms of an Interest Theory insofar as we would decide to “grant … special status to robots.” But that future is so distant that it is not even worth serious consideration at this particular moment.
In other situations, exclusion occurs not as a deliberate postponement but as a product of the means of inclusion. On the legal side, for instance, the matter of “robot rights” is already effectively disabled by the way we have decided to incorporate robots into the law and how that mode of incorporation differs from the one applied to another, similarly situated entity. As Gunther Teubner (2006, 521) explains:
Law is opening itself for the entry of new juridical actors—animals and electronic agents. The difference in outcomes, however, are striking. … Animal rights and similar constructs create basically defensive institutions. Paradoxically, they incorporate animals in human society in order to create defenses against destructive tendencies of human society against animals. The old formula of social domination of nature is replaced by the new social contract with nature. For electronic agents, the exact opposite is true. Their legal personification, especially in economic and technological contexts, creates aggressive new action centers as basic productive institutions. Here, their inclusion into society does not protect the new actors—just the opposite. It is society that needs to defend itself against the new actors.
Both robots and animals can be considered “the excluded other” of human social institutions, especially legal institutions. The way these two previously marginalized entities come to be included in law, however, is remarkably different. Animals—at least some species of animals—now have rights that are legally recognized in an effort to defend the welfare and interests of these vulnerable entities from “the destructive tendencies of human society.” Robots, or what Teubner calls “electronic agents” (which is already a mark of difference, insofar as animals are not described as “agents” but occupy the place of “patient”), come to be included in legal matters for the opposite reason. Robots figure in law not for the sake of protecting the rights of the machine from human exploitation and misuse but for the sake of defending human society from potentially disastrous robot decision-making and action. Animals have rights; robots have responsibilities. It is, therefore, not the case that robots are simply excluded from the law, but that the means of their inclusion is accomplished in such a way that already renders the question of rights inoperative and effectively unthinkable.
In still other situations, the question of robot rights is not excluded per se but included by being pushed to the margins of proper consideration. What is marginal is noted and given some attention, but it is deliberately marked as something located on the periphery of what is determined to be really important and worthy of investigation. Indication of the marginal position or at least the peripheral status of the “rights question” can be found in Wendell Wallach and Peter Asaro’s collection of foundational texts in robot and machine ethics (2017). Out of the thirty-five essays that are assembled in this volume, which is intended “to provide a roadmap of core concerns” by bringing together a “careful selection of articles that have or will play an important role in ongoing debates” (Wallach and Asaro 2017, xii), only one essay, the final one, takes up and engages the question of rights. The position of this text within the sequence of the chapters, along with the introductory explanation provided by the editors, situates the inquiry into robot rights as a kind of future-oriented epilogue:
Perhaps the most futuristic and contested question in robot ethics is whether robots may someday have the capacity or capability to be the bearer of rights, and in virtue of what criteria will they deserve those rights? Gunkel (2014) defends the view that there may be robots deserving of the legal and moral status of having rights, and challenges the current standards of evaluating which entities (corporations, animals, etc.) deserve moral standing. Such futuristic legal questions underscore the way in which artificial agents presently serve as a thought experiment for legal theorists and a vehicle for reflections on the nature of legal agency and rights. (Wallach and Asaro 2017, 12)
In pointing this out, I am not complaining. It was both an honor and a privilege to have this text included in the project. But the manner of its inclusion—the fact that it is the sole essay addressing the question of robot rights and that it comes “at the end of the story”—indicates the extent to which the “rights question” has been and remains a kind of marginalized afterthought in the literature of robot and machine ethics.
This literal marginalization is also evident in Patrick Lin et al.’s Robot Ethics, which has now appeared in two versions. In the first iteration, Robot Ethics (Lin, Abney, and Bekey 2012), the question concerning robot rights is explicitly addressed in the book’s final section, “VII Rights and Ethics.” This section consists of three essays from Rob Sparrow (2012), Kevin Warwick (2012), and Anthony F. Beavers (2012), and it is introduced with the following editorial note:
The preceding chapter 18 examined the ethics of robot servitude: Is it morally permissible to enforce servitude on robots, sometimes termed “robot slavery”? But to call such servitude “slavery” is inapt, if not seriously misleading, if robots have no will of their own—if they lack the sort of freedom we associate with moral personhood and moral rights. However, could robots someday gain what it takes to become a rights holder? What exactly is it that makes humans (but not other creatures) here on Earth eligible for rights? Is there a foreseeable future in which robots will demand their own “Emancipation Proclamation”? (Lin, Abney, and Bekey 2012, 299)
Like Wallach and Asaro, Lin et al. conclude Robot Ethics (version 1, 2012) by giving consideration to a set of questions involving somewhat distant matters that could arise in the foreseeable future. “Robot rights,” although not a central concern of Robot Ethics at the present time, is added on at the end as a kind of indication of “future directions” for possible research: “Thus, after studying issues related to programming ethics and specific areas of robotic applications, in part VII our focus zooms back out to broader, more distant concerns that may arise with future robots” (Lin, Abney, and Bekey 2012, 300). But in the second version of the book, Robot Ethics 2.0 (Lin, Jenkins, and Abney 2017), this material is virtually absent. In this “major update” and “second generation” of Robot Ethics, what had been marginal in the first version—“marginal” insofar as it was a distant possibility situated at the end of the text—is now effectively pushed out of the book altogether.6 This remarkable absence—the lack of something that had been present—can only be identified through its trace (Derrida 1982, 65), that is, through the withdrawal of something that had previously been there, such that its absence now leaves a mark of its removal.7
Finally, there are some notable exceptions. But these exceptions, more often than not, prove the rule. In 1985, Robert A. Freitas Jr. published a rather prescient note in the journal Student Lawyer. Although considered to be one of the earliest documented attempts to raise the “questions of ‘machine rights’ and ‘robot liberation’” (Freitas 1985, 54), this short article has had fewer than twenty citations in the past thirty-five years. This may be the result of the perceived status (or lack thereof) of the journal, which is not a major venue for peer-reviewed research but a magazine published by the student division of the American Bar Association; of some confusion concerning the essay’s three different titles8; of the fact that the article is not actually an article but an “end note” or epilogue; or simply of an issue of timing, insofar as the questions Freitas raises came well in advance of robot ethics or even wide acceptance of computer ethics. In any event, this initial effort to situate the question concerning robot rights has been and remains on the periphery of scholarship, occupying a position that is, quite literally, an endnote.
Another notable exception is Blay Whitby’s “Sometimes it’s Hard to be a Robot: A Call for Action on the Ethics of Abusing Artificial Agents” from 2008. Although Whitby does not take up the question of robot rights (the term “rights” does not appear in his essay), his concern is with the potential for robot abuse.9 “This is a call,” Whitby (2008, 326) explains, “for informed debate on the ethical issues raised by the forthcoming widespread use of robots, particularly in domestic settings. Research shows that humans can sometimes become very abusive towards computers and robots particularly when they are seen as human-like and this raises important ethical issues.” Whitby’s objective is modest. He only seeks to open up the issue and initiate debate about the possible ethical consequences of future robot abuse. As such, and by deliberate design, he does not purport to offer any practical solutions or answers, a fact for which he is criticized by Thimbleby (2008). He simply wants to get the conversation started so that the task of thinking about these potential problems can begin. But even raising the question—even asking that roboticists, ethicists, and legal scholars entertain the question of robot abuse and begin to devise ethical standards in the interest of dealing with these potential complications—was perceived as being premature and unnecessary. As Thimbleby (2008, 341) writes (rather acerbically):
Whitby has not convinced this reader that robotism is distinct from any other technology-oriented ethics. He has not provided persuasive or provocative case studies, e.g., “it’s not the gun that kills but the user” that arise frequently in other technoethical discussions. This reader does not share his sentimental interpretation of “abuse”; robots are not kittens. Robots are easy to build and mass produce; there are no grounds, at least as presented, to justify Whitby’s sentimental title—“sometimes it’s hard to be a robot …” (Aw, I feel sorry for them already.) I imagine, if robots had any notion of what “hard” meant that they’d say it was easy enough to be a robot … In conclusion, then, it should be clear that we disagree with Whitby: there is no urgency to define robotism or any ethics of robots. But as nothing is certain, we should end on a careful note: there is no urgency until the robots themselves start to ask for it.
Thimbleby, it should be noted, does not simply exclude consideration of robot rights tout court but is careful to recognize that if this arguably absurd idea were at some point in the future to become possible, it could only transpire in terms of the Will Theory of rights. The robots themselves will need to ask for it. Consequently, the difference between Whitby’s and Thimbleby’s positions is one that is ultimately grounded in the two different theories of rights. Whitby mobilizes an Interest Theory insofar as he advocates a consideration of the problems and consequences of robot abuse irrespective of whether the robot is capable (or not) of demanding that it not be abused. Thimbleby deploys a Will Theory approach, arguing that until the robots are able to ask for consideration or respect, we simply do not need to be concerned about it. A similar argument is made by Keith Abney (2012, 40), who asserts an exclusive privileging of the Will Theory as the basis for drawing the following conclusion: “we can safely say that, for the foreseeable future, robots will have no rights.” Finally, even though Whitby’s article has been referenced just under forty times in the literature, its main proposal, namely that we take up the question of robot abuse and consider some level of legal/moral protections for artifacts, is only directly addressed in a small fraction of these subsequently published texts (Riek et al. 2009, Coeckelbergh 2010 and 2011, and Dix 2017).
Another seemingly notable exception is Alain and Jérémy Bensoussan’s Droit des Robots (2015), a book written by two attorneys involved in helping to define the emerging field of robot law. The French word droit/droits is, however, a bit tricky and can be translated as either “law” or “rights.” Droit, in the singular and written either with or without the definite article (le droit), is typically translated as “law,” as in le droit pénal or “the penal law.” The plural droits, especially when it is written with the definite article les droits, is typically translated “rights,” as in the UN’s La Déclaration universelle des droits de l'homme [The Universal Declaration of Human Rights]. But there is some equivocation concerning this matter, which makes the choice between “law” and “rights” often dependent on context.
The title of Bensoussan and Bensoussan’s book, for instance, would be translated as “Robot Law” and not “Right of Robots.” And this translation is consistent with and verified by the contents of the text, which endeavors to articulate a new framework for responding to the legal challenges and opportunities of robots. In the course of their argument, however, Bensoussan and Bensoussan introduce a charter, composed of ten articles, that they call La Charte des Droits des Robots. In this case, the word droits could be translated as “laws,” “rights,” or both, because the contents of the charter not only concern legal responsibilities for the manufacture, deployment, and use of robots, but also stipulate the legal status of robotic artifacts: “Article 2—Robot Person: A robot is an artificial entity endowed with its own legal personality: the robot personality. The robot has a name, an identification number, a state and a legal representative, which may be a natural person or a legal entity” (Bensoussan and Bensoussan 2015, paragraph 219; my translation). With this second article, Bensoussan and Bensoussan make the case for recognizing robots as a new kind of legal person, like a corporation. This is necessary, they argue, to contend with the unique social and legal challenges that robots present to current judicial systems. In other words, and unlike the vast majority of Anglophone researchers, scholars, and critics, Bensoussan and Bensoussan’s work recognizes the difficulty of formulating laws [droits] that apply to robots without giving some consideration to the legal standing and rights [les droits] of the entity to which these laws would apply. And the one legal right that they believe is paramount for robots is the right to privacy.10 This is stipulated in the third article: “Personal data stored by a robot are subject to the Data Protection and Freedom regulations. A robot has the right to the respect of its dignity limited to the personal data conserved” (Bensoussan and Bensoussan 2015, paragraph 219; my translation).
It should be noted, however, that the motivation and objective for extending legal protections to the robot remains focused on the human user. Here is how Alain Bensoussan explained it in an interview from 2014:
There is the question of duties, but also of rights. Such as the right to privacy. Indeed, the robot, in its interaction with the elderly or robots who “work” with autistic children, acquires information about their health and privacy. For example, the robot is able to tell a person with Alzheimer's disease “This is going to be your grandson’s birthday” or to an autistic child “Your brother has arrived,” or to an elderly person “Your granddaughter is here.” And this intimacy must be protected by protecting the memory of the robot. (Bensoussan 2014, 13; my translation)
Consequently, Bensoussan and Bensoussan (2015) recognize the importance of explicitly stipulating the social and legal status of the robot and entertain questions regarding both robot law [droit des robots] and robot rights [les droits des robots], or at least one particular claim, the right to privacy. But for all its promise to think and deal with the question of robot rights, Bensoussan and Bensoussan’s Droit des Robots is still restricted to an anthropocentric focus and frame of reference. The one right that the robot would have and would need to have—the right to privacy—is provided not for the sake of the robot but to protect the human beings involved with the robot. It is, therefore, a kind of indirect and instrumentalist way of thinking of the rights of others. Consequently, a robot’s rights are not unthinkable, but their consideration is tightly constrained by, and limited to, the rights that belong to its human users.
Investigating the opportunities and challenges of robot rights is already complicated by both linguistic and conceptual difficulties. Linguistically, the term “robot” admits of a wide range of different and not necessarily compatible definitions, characterizations, and understandings. Like the word “animal,” which Derrida (2008, 32) finds both compelling and astonishing—“Animal, what a word!”—the word “robot” is also a rather indeterminate and slippery signifier for a set of ideas and concepts that differ across texts, research projects, and investigative efforts. It is, therefore, and already from the beginning, “a question of words” and “an exploration of language” (Derrida 2008, 33). Even the most general and seemingly uncontroversial of characterizations—the sense-think-act paradigm—remains insufficient to capture and/or represent the full range and complexity that is already available in the current literature. And this “literature” includes not just academic, technical, and commercial publications but decades of science fiction as presented in print (i.e., short stories, novels, novellas, and comic books), stage plays and performances, and other forms of media (i.e., radio, cinema, and television), a body of work that, in this particular case, has determined many aspects of this thing called “robot”—or, more accurately stated, these diverse and differentiated artifacts that have come to be designated and identified with the word “robot”—well in advance of actual efforts in science and engineering. In response to this, it has been decided not to make the usual definitive decision but to attend to and learn to take responsibility for this polysemia. That is, rather than proceeding by way of the “tried and true” approach to defining terminology and issuing a declaration that takes the form of something like, “In this study, we understand ‘robot’ to be …,” we will endeavor to account for, to tolerate, and even to facilitate this semantic diversity and plurivocity. The word “robot,” then, will function as a kind of semi-autonomous, linguistic artifact that is always and already on the verge of escaping from our oversight and control, such that the task before us is not to limit opportunity in advance of the investigation but to account for the full range of significations that the term “robot” puts into play and makes available.
A different but not necessarily unrelated terminological problem occurs with the word “rights.” Unlike “robot,” the term “rights” does have a rather strict and formal characterization. The Hohfeldian incidents parse the term into four distinct and interrelated elements: privileges, claims, powers, and immunities. For most moral philosophers and jurists, as Wenar (2005, 2015) suggests, these four incidents—either alone or in some combination—provide what is typically considered to be a complete formulation of and account for what we understand by the term “rights.” Where things get complicated is in deciding who or what can or should have a share in these four incidents. Who or what, in other words, can have rights? As we have seen, there are two competing theories for addressing this question: the Interest Theory and the Will Theory. The latter sets a rather high bar, requiring that the subject of a right possess the authority and/or capacity to assert, on its own, a privilege, claim, power, and/or immunity that would need to be respected by others. The former provides for a much lower threshold, making it possible for one to decide rights on behalf of the interests of others.
The choice of theory makes a significant difference. Under the Will Theory, for instance, it would be virtually impossible for an animal or the environment to have any stake in the four incidents, insofar as they lack λόγος—that influential ancient Greek word that means “word” and can be translated as “speech,” “language,” “logic,” and “reason”—and the ability to articulate and assert their rights. The Interest Theory, however, proceeds otherwise. It recognizes, following Bentham (2005, 283), that the operative question is not “Can they speak?” but “Can they suffer?” This other way of proceeding11 would institute an entirely different way of deciding who or what has rights and who or what does not. But this decision between the two theories is itself already a moral determination, for it already makes a decisive cut in the fabric of being, separating who can be a moral subject from what remains a mere object situated outside the proper confines of moral concern. And this proto-moral determination—this “moral decision before moral decision” or this “ethics of ethics”—will be and will remain a subject of the investigation. Ultimately, because a decision concerning these two competing theories of rights remains unresolved and debated, we will need to keep both in play by using a strategy that Žižek calls “the parallax view,” which is a way of perceiving and taking account of the difference that is available in the shift from the one theoretical perspective to the other.12 What is important, therefore, is not selecting the “right theory” of rights at the beginning and sticking with it. What is important is developing the capability to call out and identify which theory is in play in which particular consideration of rights and how each of these frames distinctly different ways of responding to the question “Can and should robots have rights?”
The terms “robot” and “rights” are already complicated enough, but once you put the two words together, you get something of an allergic reaction. “Robot rights” is for many theorists and practitioners simply unthinkable, meaning either that it is unable to be thought, insofar as the very concept strains against common sense or good scientific reasoning, or that it is to be purposefully avoided as something that must not be thought—i.e., as a kind of prohibited idea or blasphemy that would open a Pandora’s box of problems and therefore must be suppressed or repressed (to use common psychoanalytical terminology). Whatever the reason, there is something of a deliberate decision and concerted effort not to think—or at least not to take as a serious matter for thinking—the question of robot rights. The very idea is openly mocked as ridiculous and laughable; it is faulted for encouraging pointless speculation that distracts from the true and serious work that needs to be done; or it is pushed aside and marginalized as a kind of appendix or sidebar that could be of interest in the future but which (for now at least) does not really need serious attention. This is, however, precisely the reason why we must think the unthinkable—to challenge these declarations, assumptions, and orthodoxies; and this needs to transpire for at least two reasons.
First, any dogmatic declaration of this kind should already make us uneasy and suspicious. The fact that this thought—the mere thought that is made available in the association of the two words “robot” and “rights”—has been declared “unthinkable” should give one pause and trigger critical questions like: Who says? Who gets to decide in advance what we can or cannot think? And, perhaps more revealing, what values and assumptions are being protected through this kind of proscription and prohibition? As Barbara Johnson (1981, xv) reminds us, the task of critical thinking is to read backward from what seems obvious, self-evident, and natural, to show how these things already have a history that determines what becomes possible and what proceeds from them. When an idea, like robot rights, is immediately declared unthinkable, that is an indication of the need for critical philosophy and the difficult but necessary task of confronting and thinking the unthinkable.
Second, challenging exclusions and prohibitions is the proper work of ethics. Ethics operates by making exclusive decisions. It inevitably and unavoidably picks winners and losers, and determines who is inside the moral community and what remains outside or on the periphery (Gunkel 2014 and Coeckelbergh 2012). But our moral theories and practices develop and evolve by challenging these exceptions and limitations. Ethics, therefore, advances by critically questioning its own exclusivity and eventually accommodating many previously excluded or marginalized others—women, people of color, animals, the environment, etc. “In the history of the United States,” Susan Leigh Anderson (2008, 480) has argued, “gradually more and more beings have been granted the same rights that others possessed and we’ve become a more ethical society as a result.” But as Stone (1974, 6) has pointed out, this moral progress is difficult insofar as any extension of rights—any effort to extend moral consideration to previously excluded populations—has always been “a bit unthinkable.” In other words, “each time there is a movement to confer rights onto some new ‘entity,’ the proposal is bound to sound odd or frightening or laughable” (Stone 1974, 7). The task of both moral and legal philosophy, therefore, is not (and cannot be) simply to defend existing orthodoxies by shrinking away in fright from alternatives, triggering angry reactions to the possibility of including others, or mocking such challenges with dismissive laughter. Instead, the task is, or at least should be, to stress-test and question the limitations and exclusions of existing moral positions and modes of thinking. Defending orthodoxy is the purview of religion and ideology; critically testing hypotheses and remaining open to revising the way we think about the world in the face of new challenges and opportunities is the task of science. The project of philosophy—and moral philosophy in particular—has always been to ask about what has not been thought before or what has been decried and avoided as unthinkable. Doing so for the previously unthinkable thought of “robot rights” is precisely the objective of the chapters that follow.