6 
Thinking Otherwise

As is evident from the preceding four chapters, each modality has its particular advantages and challenges. Therefore, none of the four offer what would be considered a definitive and indisputable answer to the question “Can and should robots have rights?” In the face of this result, one can obviously continue to concoct arguments, accumulate evidence, and formulate revealing illustrations and examples that support one modality or another. But this effort will do little or nothing to advance the debate beyond where we currently stand. In order to get some new perspective on things, we can (and perhaps should) try something different. This alternative, which can be called thinking otherwise—insofar as it seeks to respond to other forms of otherness in a way that is significantly different—does not argue either for or against the is-ought inference or in favor of one or another of its four modalities. Instead, it deconstructs this conceptual configuration.1

This is precisely the innovation introduced and developed by Emmanuel Levinas, who, in direct opposition to the usual ways of thinking, asserts that ethics precedes ontology. In other words, it is the axiological aspect, the ought or should dimension, that comes first, in terms of both temporal sequence and status, and then the ontological aspect (the is or can) follows from this decision.2 This is a deliberate and calculated provocation that cuts across the grain of the philosophical tradition. As Luciano Floridi (2013, 116) has correctly pointed out, in most moral theory, “what the entity is [the ontological question] determines the degree of moral value it enjoys [the ethical question].” Levinas deliberately inverts and distorts this procedure. According to this alternative way of thinking—what Roger Duncan (2006, 277) calls “the underivability of ethics from ‘ontology’”—we are initially confronted with a mess of anonymous others who intrude on us and to whom we are obligated to respond even before we know anything at all about them and their inner workings. To use Hume’s terminology—which will be a kind of translation insofar as Hume’s philosophical vocabulary, and not just his language, is something that is foreign to Levinas’s own formulations—we are first obligated to respond, and only then, after having made a response, can what or whom we responded to be determined and identified. As Jacques Derrida (2005, 80) has characterized it, the crucial task is to “reach a place from which the distinction between who and what comes to appear and become determined.”

6.1 Levinas 101

Referring to and utilizing Levinas in this context is already doubly problematic. First, for many of those working in roboethics, robot ethics, machine ethics, philosophy of technology, etc., Levinas is and remains other. His philosophy is alien to and remains outside of, or at least on the periphery of, standard approaches to pursuing moral inquiry in the face of these technological opportunities and challenges. The work of Levinas, who is arguably the most celebrated moral theorist in the continental tradition, literally has no place in Moral Machines (Wallach and Allen 2009), Machine Ethics (Anderson and Anderson 2011), Robot Ethics (Lin et al. 2012 and 2017), The Ethics of Information (Floridi 2013), or Robot Law (Calo, Froomkin, and Kerr 2017), and is only of marginal concern in Robophilosophy (Seibt, Nørskov, and Andersen 2016) and Social Robots (Nørskov 2016). For this reason, Levinas and “the ethics of otherness” that is credited to his philosophical innovations remain an alien presence in, or, perhaps stated more accurately, the excluded other of moral philosophy’s own efforts to address the opportunities and challenges of robots and emerging technology.

Second, Levinas (and many of those individuals who follow and pursue his particular brand of philosophical inquiry) does not do much to assist himself on this account. He would most certainly resist and struggle against this application of his work to technology in general and robots in particular. In fact, Levinas, who died in 1995, wrote little or nothing concerning technology and did not address the opportunities and challenges made available by the twentieth-century innovations in computers, computer networks, artificial intelligence, and robotics. To complicate matters, his “ethics of otherness” has difficulties accommodating and responding to anything other than another human entity. “In the main body of his philosophical work,” Barbara Jane Davy (2007, 39) argues: “Emmanuel Levinas presents ethics as exclusive to human relations. He suggests that because plants and animals lack language and do not have faces like human faces, we cannot enter into ethical relations with nonhuman others in what he calls ‘face to face’ relations.” Furthermore, the subsequent generation of scholars who have followed in Levinas’s footsteps have done very little to port his brand of philosophy into technology. Recent efforts at extending Levinas’s work, or “radicalizing Levinas,” as Peter Atterton and Matthew Calarco (2010) have called it, pursue the animal question (Llewelyn 1995), environmental ethics (Davy 2007), and even the alterity of things (Benso 2000); but few have sought to develop what would be a functional Levinasian API (application programming interface) for technology. Apart from a few marginal exceptions—one essay by Cohen (2000, reprinted in Cohen 2010) and the work that has already been published by Coeckelbergh (2016c) and myself (Gunkel 2007, 2012, and 2016b)—there have been few attempts to develop a Levinasian philosophy of technology.
For both of these reasons—that is, (1) the exclusion or at least marginalization of Levinasian thought from the standard procedures and practices of robot ethics, and (2) the marginalization of technology in the works of Levinas and those others who follow his lead—applying Levinasian philosophy to the question of robot rights will require that we first get some basics out of the way and then rework and extend these innovations in the direction of robots and related technology.

6.1.1 A Different Kind of Difference

Levinas, a French philosopher of Lithuanian Jewish ancestry, is usually associated with that group of late twentieth century thinkers that have been tagged (for better or worse) with the moniker “poststructuralism.” So let’s start with the root element of this term, “structuralism.” Although structuralism does not constitute a formal discipline or singular method of investigation, its particular innovations are widely recognized as the result of developments in the “structural linguistics” of Ferdinand de Saussure. In the posthumously published Course in General Linguistics, Saussure argued for a fundamental shift in the way that language is understood and analyzed. “The common view,” as Jonathan Culler (1982, 98–99) describes it, “is doubtless that a language consists of words, positive entities, which are put together to form a system and thus acquire relations with one another …” Saussure turns this commonsense view on its head. For him, the fundamental element of language is the sign and “the constitutive structure of signs is,” as Mark Taylor (1999, 102) explains, “binary opposition.” “In language,” Saussure (1959, 120) argues in one of the most often quoted passages from the Course, “there are only differences. Even more important: a difference generally implies positive terms between which the difference is set up; but in language there are only differences without positive terms.” For Saussure then, language is not composed of individual linguistic units that have some intrinsic value or positive meaning and that subsequently comprise a system of language through their associations and interrelationships. Instead, a sign, any sign in any language, is defined by the differences that distinguish it from other signs within the linguistic system to which it belongs. According to this way of thinking, the sign is an effect of difference, and language itself consists in a system of differences. 
This characterization of language, although never explicitly described in this fashion by Saussure, mirrors the logic of the digital computer, where the binary digits 0 and 1 have no intrinsic or positive meaning but are simply indicators and an effect of difference—a switch that is either on or off.
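Saussure’s claim that there are “only differences without positive terms” can be made concrete with a small sketch. This is my own illustration, not something in the text: in a binary code, inverting every 0 and 1 changes all of the “positive terms” while leaving every difference between codewords, and thus the code’s capacity to signify, entirely intact.

```python
# Toy illustration (an assumption of this sketch, not from the text): in a
# binary code, a codeword "means" something only through its differences
# from the other codewords, not through the symbols 0 and 1 themselves.

codebook = {"00": "north", "01": "east", "10": "west", "11": "south"}

def invert(word: str) -> str:
    """Swap every 0 for 1 and vice versa, replacing all 'positive terms'."""
    return "".join("1" if bit == "0" else "0" for bit in word)

def hamming(a: str, b: str) -> int:
    """Count the positions at which two codewords differ."""
    return sum(x != y for x, y in zip(a, b))

# Inverting all bits alters every symbol but preserves every difference:
# the pattern of distinctions that lets the code signify is untouched.
words = list(codebook)
for a in words:
    for b in words:
        assert hamming(a, b) == hamming(invert(a), invert(b))
```

The four codewords remain just as distinguishable after inversion as before, which is the structuralist point in miniature: what carries meaning is the system of differences, not any intrinsic content of the individual signs.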

Poststructuralism, as the name indicates, identifies as a kind of aftereffect or further development of the innovations introduced by structuralism. “While poststructuralism does not constitute,” as Taylor (1997, 269) points out, “a unified movement, writers as different as Jacques Derrida, Jacques Lacan, and Michel Foucault on the one hand, and on the other Hélène Cixous, Julia Kristeva, and Michel de Certeau devise alternative tactics to subvert the grid of binary oppositions with which structuralists believe they can capture reality.”3 Because poststructuralism does not name a unified movement or singular method, what makes its different forms cohere is not an underlying similarity but a difference, specifically different modes of thinking difference differently. In other words, what draws the different articulations of poststructuralism together into an affiliation that may be identified with this one term is not a homogeneous method or technique of investigation. Instead what they share and hold in common is an interest in contending with the differences customarily situated between conceptual opposites outside of and beyond not just the theoretical grasp of structuralism but the entire ideological apparatus of Western philosophy. Consequently, the different and rather diverse approaches that constitute poststructuralism all take aim at difference and endeavor to articulate, in very different and sometimes incompatible ways, a kind of difference that is, for lack of a better description, radically different.

For Levinas, the target of this effort is located in what he calls “the same.” “Western philosophy,” as Levinas (1969, 43) writes, “has most often been an ontology: a reduction of the other to the same by interposition of a middle or neutral term that ensures the comprehension of being.” According to Levinas’s analysis, the standard operating presumption of Western philosophy can be described as an effort to reduce or mediate apparent differences. In the history of moral philosophy, this typically takes shape in terms of a sequence of competing centrisms and an ever-expanding circle of moral inclusion. Anthropocentric ethics, for example, posits a common humanity that is determined to underlie and substantiate the perceived differences in race, gender, ethnicity, class, etc. It is because of an assumed shared humanity that I am obligated to respond to another person, irrespective of accidental differences in appearance and geographical location, with moral consideration and respect. Biocentric ethics expands the circle of inclusion by assuming that there is a common value in life itself, which subtends all forms of available biological diversity. And in the ontocentric theory of Floridi’s information ethics (IE)—what Floridi (2013, 85) argues will have been a more inclusive and universal form of macroethics and the “ultimate completion” of this effort at moral expansionism (Floridi 2013, 65)—it is “being,” the very substance of ontology, that supposedly underlies and supports all apparent differentiation. “IE is,” Floridi (2008, 47) writes, “an ecological ethics that replaces biocentrism with ontocentrism. IE suggests that there is something even more elemental than life, namely being—that is, the existence and flourishing of all entities and their global environment.”4

All of these innovations, despite differences in focus, employ a similar maneuver and logic: that is, they redefine the center of moral consideration in order to describe progressively larger circles that come to encompass a wider range of possible participants. Although there are and will continue to be considerable debates about what should define the center and who or what is or is not included, this debate is not the problem. The problem rests with the strategy itself. In taking a centrist approach, these different ethical theories endeavor to identify what is essentially the same in a phenomenal diversity of different individuals. Consequently, they include others by effectively stripping away and reducing differences. This approach, although having the appearance of being progressively more inclusive, effaces the unique alterity of others and turns them into more of the same. This is, according to Levinas (1969 and 1981), the defining gesture of Western philosophy and one that does considerable violence to others. “The institution of any practice of any criterion of moral considerability,” the environmental ethicist Thomas Birch (1993, 317) writes, “is an act of power over, and ultimately an act of violence” toward others. The issue, therefore, is not deciding which form of centrism is more or less inclusive of others; the difficulty rests with this strategy itself, which succeeds only by reducing difference and turning what is other into a modality of the same.

Levinas deliberately interrupts and resists this homology or reductionism, which is, as he argues, an exercise of “appropriation and power” (Levinas 1987, 50). He does not just contest the different universal terms that have been identified and asserted as the common ontological element underlying differences but criticizes the very logic that comprises this reductio differencia. “Perceived in this way,” Levinas (1969, 43) writes, “philosophy would be engaged in reducing to the same all that is opposed to it as other.” In direct response to this, Levinasian philosophy, like other forms of poststructuralist thinking, endeavors to manage difference differently by articulating a form of moral consideration that can respond to and take responsibility5 for the other, not as something that is determined to be substantially similar to oneself, but in its irreducible otherness. Levinas, then, not only is critical of the traditional tropes and traps of Western ontology but proposes an ethics of radical otherness that deliberately resists and interrupts the philosophical gesture par excellence, that is, the reduction of difference to the same (which is manifest and operative in all forms of anthropocentric, biocentric, and ontocentric moral theory). This radically different approach to thinking difference differently—this “eccentric ethics of otherness,” as I have called it elsewhere (Gunkel 2014a, 113)—is not just a useful and expedient strategy. It is not, in other words, a mere gimmick. It constitutes a fundamental reorientation that effectively changes the rules of the game and the standard operating presumptions. In this way, “morality is,” as Levinas (1969, 304) concludes, “not a branch of philosophy, but first philosophy.” This statement deliberately contests and inverts a fundamental assumption in traditional forms of philosophical thinking.
Typically the title “first philosophy” had been, all the way from Aristotle to at least Heidegger, assigned to ontology such that ethics, as a result and by comparison, was assumed to be secondary in both sequence and status. For Levinas the order of things is reversed; morality comes first, and it is ontology that is secondary and derived.

Consequently, Levinas’s philosophy is not what is typically understood as an ethics, a metaethics, a normative ethics, or even an applied ethics. It is what John Llewelyn (1995, 4) has called a “proto-ethics” or what Derrida (1978, 111) has identified as an “Ethics of Ethics.” “It is true,” Derrida (1978, 111) explains, “that Ethics in Levinas’s sense is an Ethics without law and without concept, which maintains its non-violent purity only before being determined as concepts and laws. This is not an objection: let us not forget that Levinas does not seek to propose laws or moral rules, does not seek to determine a morality, but rather the essence of the ethical relation in general. But as this determination does not offer itself as a theory of Ethics, in question, then, is an Ethics of Ethics.”6 This fundamental reconfiguration, which puts ethics first in both sequence and status, permits Levinas to circumvent and deflect a lot of the difficulties that have traditionally tripped up moral thinking in general and efforts to address other forms of otherness in particular.

6.1.2 Social and Relational

According to this alternative way of thinking, moral status is decided and conferred not on the basis of substantive characteristics or internal properties that have been identified in advance of social interactions but according to empirically observable, extrinsic relationships. “Moral consideration,” as Mark Coeckelbergh (2010a, 214) describes it, “is no longer seen as being ‘intrinsic’ to the entity: instead it is seen as something that is ‘extrinsic’: it is attributed to entities within social relations and within a social context.” As we encounter and interact with other entities—whether they are another human person, an animal, the natural environment, or a domestic robot—this other entity is first and foremost experienced in relationship to us. “Relations are ‘prior’ to the relata” (Coeckelbergh 2012, 45). Or as Levinas (1987, 54) has characterized it, “experience, the idea of infinity, occurs in the relationship with the other. The idea of infinity is the social relationship.” The question of moral status, therefore, does not depend on and derive from what the other is in his/her/its essence but on how he/she/it (and the choice of pronoun here is already part of the problem) comes to appear or supervene before me and how I decide, in “the face of the other” (to use that distinctly Levinasian terminology), to make a response to or to take responsibility for (and in the face of) another. In this transaction, the “relations are prior to the things related” (Callicott 1989, 110), instituting what Anne Gerdes (2015), following Coeckelbergh (2010a), has called “a relational turn” in ethics.7 Consequently, and contrary to Floridi’s (2013, 116) description, what the entity is does not determine the degree of moral value it enjoys. Instead, the exposure to the face of the Other, what Levinas calls “ethics,” precedes and takes precedence over all these ontological machinations and determinations. 
Although Levinas never uses Hume’s terminology, ought precedes and takes precedence over is.

A similar form of a social relational ethics, where ought is prior (in both sequence and status) to is, can be found in contributions supplied by non-Western sources, which are often mobilized by Westerners seeking an alternative in what has traditionally been “the other.” Raya Jones, as we have seen in the previous chapter, contests the Western “Weltanschauung of individualism” by looking to different formulations of collectivism from the East, especially Japan (Jones 2016, 83–84). Similar to Levinasian philosophy, “contributions to social robotics from the Far East,” as Jones (2016, 83–84) asserts, focus not on the intrinsic or ontological properties of the robotic entity but on the social configuration and mode by which human communities relate and respond to the alterity that is embodied and performed by the robot. In these social situations, the Other is not limited to just another human individual but can also be an “inanimate object” (Jones 2016, 83–84). Levinas did not himself address or pursue the points of contact between his “ethics of otherness” and the non-Western contributions described by Jones, and Jones, for her part, does not mention or engage with the work of Levinas in her efforts to describe a collectivist approach to moral consideration. Yet, there is an important affinity between the two traditions—perhaps not exactly “the same,” the homology of which would be far too abrasive for Levinasian philosophy, but important similarities that extend across and accommodate cultural differences.

According to Levinas (and others who follow his lead), the Other always and already obligates me in advance of the customary decisions and debates concerning who or what is and is not a moral subject. “If ethics arises,” as Calarco (2008, 71) writes, “from an encounter with an Other who is fundamentally irreducible to and unanticipated by my egoistic and cognitive machinations,” then identifying the “‘who’ of the Other” is something that cannot be decided once and for all or with any certitude. This apparent inability or indecision, however, is not necessarily a problem. In fact, it is a considerable advantage insofar as it opens ethics not only to the Other but to other forms of otherness (i.e., those other entities that remain otherwise than another human being). “If this is indeed the case,” Calarco (2008, 71) concludes, “that is, if it is the case that we do not know where the face begins and ends, where moral considerability begins and ends, then we are obligated to proceed from the possibility that anything might take on a face. And we are further obligated to hold this possibility permanently open.” Levinasian philosophy, therefore, does not make prior commitments or decisions about who or what will be considered a legitimate moral subject. For Levinas anything that faces the I and calls its immediate self-involvement (or what Levinas, using a Latin derivative, calls “ipseity”) into question would be Other and would constitute the site of ethics.

This shift in perspective—a shift that inverts the standard operating procedure by putting the ethical relationship before ontological determinations—is not just a theoretical proposal. It has, in fact, been experimentally confirmed in a number of practical investigations with computers and robots. The “computers are social actors” (CASA) studies undertaken by Reeves and Nass (1996), for instance, demonstrated that human users will accord computers social standing similar to that of another human person and that this occurs as a product of the extrinsic social interaction, irrespective of the actual intrinsic/ontological properties (known or not) of the entities in question. In the face of the machine, Reeves and Nass find, human test subjects tend to treat the computer as another socially significant Other. In other words, a significant majority of test subjects responded to the alterity of the computer as someone who counts as opposed to just another object—what is a mere tool or instrument that does not matter—and this occurs as a product of the extrinsic social circumstances and often in direct opposition to the ontological properties of the mechanism (Reeves and Nass 1996, 22). Although Levinas would probably not recognize it as such, what is demonstrated by CASA and related studies, like those published by Bartneck and Hu (2008), Rosenthal-von der Pütten et al. (2013), and Suzuki et al. (2015), is precisely what Levinas had advanced and argued; namely, that the ethical response to the other precedes and even trumps knowledge concerning ontological properties.

6.1.3 Radically Superficial

The use of the term “face” is unavoidable when describing Levinasian thought. In fact, the one thing most people know about Levinas is that his brand of moral philosophy is all about the face, specifically “the face of the other.” This attention to “face,” however, is not simply an expedient metaphor. It is the superficial rejoinder to the profound problem of other minds. In the face of others, the seemingly persistent and irresolvable problem of other minds—the difficulty of knowing with any certitude whether the other who confronts me has a conscious mind, is capable of experiencing pain, or possesses some other morally relevant property—is not some fundamental limitation that must be addressed and resolved prior to moral decision-making. Levinasian philosophy, instead of being tripped up or derailed by this classic epistemological problem, immediately affirms and acknowledges it as the very condition of possibility for ethics as such. Or as Richard Cohen (one of Levinas’s Anglophone interpreters) succinctly describes it, “not ‘other minds,’ mind you, but the ‘face’ of the other, and the faces of all others” (Cohen 2001, 336). In this way, then, Levinas provides for a seemingly more attentive and empirically grounded approach to the problem of other minds insofar as he explicitly acknowledges and endeavors to respond to and take responsibility for the original and irreducible difference of others instead of getting involved with and playing all kinds of speculative (and oftentimes wrongheaded) head-games. “The ethical relationship,” Levinas (1987, 56) writes, “is not grafted on to an antecedent relationship of cognition; it is a foundation and not a superstructure. … It is then more cognitive than cognition itself, and all objectivity must participate in it.”

This means that the order of precedence in moral decision-making can, and perhaps should be, reversed. Internal, substantive properties do not come first and then moral respect follows from this ontological fact. Instead, the morally significant properties—those ontological criteria that we assume ground moral respect—are what Slavoj Žižek (2008a, 209) terms “retroactively (presup)posited” as the result of and as justification for decisions made in the face of social interactions with others. In other words, we project the morally relevant properties onto or into those others who we have already decided to treat as being socially significant—those Others who are deemed to possess face, in Levinasian terminology. In social situations—in contending with the exteriority of others—we always and already decide between who counts as morally significant and what does not, and then retroactively justify these actions by “finding” the properties that we believe motivated this decision-making in the first place. Properties, therefore, are not the intrinsic a priori condition of possibility for moral standing. They are a posteriori products of extrinsic social interactions with and in the face of others.

Once again, this is not some heady theoretical formulation; it is practically the definition of machine intelligence. Although the phrase “artificial intelligence” is the product of an academic conference organized by John McCarthy at Dartmouth College in 1956, it is Alan Turing’s 1950 paper and its “game of imitation,” or what is now routinely called “the Turing Test,” that defines and characterizes the field. Although Turing begins his essay by proposing to consider the question “Can machines think?” he immediately recognizes the difficulty with defining the subject “machine” and the property “think.” For this reason, he proposes to pursue an alternative line of inquiry, one that can, as he describes it, be “expressed in relatively unambiguous words” (Turing 1999, 37). “The new form of the problem can be described in terms of a game which we call the ‘imitation game.’ It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman” (Turing 1999, 37). This determination is to be made on the basis of simple questions and answers. The interrogator (C) asks both the man (A) and the woman (B) various questions, and based on their responses (and only their responses) to these inquiries tries to discern whether the respondent is a man or a woman. “In order that tone of voice may not help the interrogator,” Turing further stipulates, “the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms” (Turing 1999, 37–38).

Turing then takes his thought experiment one step further. “We can now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’” (Turing 1999, 38). In other words, if the man (A) in the game of imitation is replaced with a computer, would this device be able to respond to questions and simulate, to a reasonably high degree of accuracy, the activity of another person? If a computer is capable of successfully simulating a human being in communicative exchange to such an extent that the interrogator cannot tell whether she is interacting with a machine or another person, then that machine should, Turing concludes, be considered “intelligent.” Or in Žižek’s terms, if the machine effectively passes for another human person in communicative interactions, the property of intelligence would need to be “retroactively (presup)posited” for that entity, and this is done irrespective of the actual internal states or operations of the other, which are, according to the stipulations of Turing’s test, unknown and hidden from view.
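The protocol Turing describes above can be sketched in a few lines of code. This is a toy illustration of the game’s structure only: the function names, the canned answers, and the coin-flipping interrogator are my own assumptions, not anything Turing specifies. What the sketch preserves is the crucial stipulation that the interrogator sees typewritten answers and nothing else.

```python
import random

def human(question: str) -> str:
    """A (hypothetical, very terse) human respondent."""
    return "I'd rather not say."

def machine(question: str) -> str:
    """A program imitating that respondent; here it gives identical answers."""
    return "I'd rather not say."

def imitation_game(interrogator, respondents, questions):
    """Hide the respondents behind the labels A and B; the interrogator must
    pick out the machine from the text of the answers alone."""
    labels = ["A", "B"]
    random.shuffle(labels)
    hidden = dict(zip(labels, respondents))  # which label conceals which player
    transcript = {
        label: [(q, hidden[label](q)) for q in questions] for label in labels
    }
    guess = interrogator(transcript)         # the label suspected of being the machine
    return hidden[guess] is machine          # True only if the machine was unmasked

def coin_flip_interrogator(transcript):
    # With indistinguishable answers the interrogator can do no better than
    # chance, which is exactly Turing's criterion for the machine "passing."
    return random.choice(list(transcript))

result = imitation_game(coin_flip_interrogator, [human, machine], ["Can you think?"])
```

The point of the sketch is that `imitation_game` never inspects the internal constitution of its players; intelligence is adjudicated entirely from the extrinsic transcript, which is what licenses Žižek’s description of the property as “retroactively (presup)posited.”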

6.2 Applied (Levinasian) Philosophy

The advantage of applying Levinasian philosophy to the question of robot rights is that it provides an entirely different method for responding to the challenge not just of robots and other forms of autonomous technology but of the ways we have typically tried to address and decide things in the face of this very challenge. What is of fundamental importance is that this Levinasian alternative shifts the focus of the question and transforms the terms of the debate. Here it is no longer a matter of deciding, for example, whether robots can have rights or not, which is largely an ontological query concerned with the prior discovery of intrinsic and morally relevant properties. Instead, it involves making a decision concerning whether or not a robot—or any other entity for that matter—ought to have standing or moral/social status, which is an ethical question and one that is decided not on the basis of what things are but on how we relate and respond to them in actual social situations and circumstances. “The relational approach,” as Coeckelbergh (2010a, 219) concludes, “suggests that we should not assume that there is a kind of moral backpack attached to the entity in question; instead, moral consideration is granted within a dynamic relation between humans and the entity under consideration.” In this case, then, the actual practices of social beings in interaction with each other take precedence over the ontological properties of the individual entities or their different material implementations. This change in perspective provides for a number of important innovations that affect not just the ethical opportunities and challenges of robots but of moral philosophy itself.

6.2.1 The Face of the Robot

Looked at through the lens of Levinasian philosophy,8 the question “Can or should robots have rights?” becomes something like “Can or should robots have face?” But this formulation is not quite accurate insofar as the verb “to have,” as in “to have face” or “to have a face,” has the tendency to turn “face” into a possession and a property that belongs to someone or something. The form of the question, therefore, risks redeploying the properties approach to deciding moral standing in the process of trying to articulate an alternative. A similar complication has been identified and addressed by Derrida (1978, 105) concerning the word “Other,” which is “a substantive—and such it is classed by the dictionaries—but a substantive which is not, as usual, a species of noun: neither common noun, for it cannot take, as in the category of the other in general, the heteron, the definite article. Nor the plural,” since implied in that kind of usage is “property, rights: the property, the rights of Others” (Derrida 1978, 105). Instead of asking “Can or should robots have face?” or “Can or should robots be the Other?” we should rework the form of the question: “What does it take for a robot to supervene and be revealed as Other in the Levinasian sense?” This question, which recognizes that “alterity is a verb,” no longer asks about “moral standing in a strict sense, since ‘standing’ suggests that there is an ontological platform onto which morality is mounted. It is therefore a more ‘direct’ ethical question: under what conditions can a robot—this particular robot that appears here before me—be included in the moral community?” (Coeckelbergh and Gunkel 2014, 723–24, slightly modified).

In order to respond to this other question (a question that is formulated otherwise and that is able to address others and other kinds of others), we need to consider not “robot” as a kind of generic ontological category but a specific instance of an encounter with a particular entity—one that challenges us to think otherwise. In July of 2014, for instance, the world got its first look at Jibo. Exactly who or what is Jibo? That is an interesting and important question. In a promotional video that was designed to raise capital investment through preorders, social robotics pioneer Cynthia Breazeal introduced Jibo with the following explanation: “This is your car. This is your house. This is your toothbrush. These are your things. But these [and the camera slowly zooms into a family photograph] are the things that matter. And somewhere in between is this guy. Introducing Jibo, the world’s first family robot” (Jibo 2014). Whether explicitly recognized as such or not, this promotional video leverages Derrida’s (2005, 80) distinction between who and what. On the side of what we have those things that are considered to be mere objects—our car, our house, and our toothbrush. According to the instrumental theory of technology, these things are mere instruments that do not have any independent moral status whatsoever (Lyotard 1984, 44). We might worry about the impact that the car’s emissions have on the environment (or perhaps stated more precisely, on the health and well-being of the other human beings who share the planet with us), but the car itself is not a moral subject. On the other side there are, as the promotional video describes it, “those things that matter.” These things are not things, strictly speaking, but are the other persons who count as socially and morally significant Others. Unlike the car, the house, or the toothbrush, these Others have moral status and rights (i.e., privileges, claims, powers, and immunities).
They can, in other words, be benefitted or harmed by our decisions and actions.

Jibo, we are told, occupies a place that is situated somewhere in between what are mere things and who really matters. Consequently, Jibo is not just another instrument, like the automobile or toothbrush. But he/she/it (and again the choice of pronoun is not without consequence) is also not quite another member of the family pictured in the photograph. Jibo inhabits a place that is situated in between these two options. He/she/it occupies the in-between position Ihde (1990, 98) calls the “quasi-other” or what Prescott identifies as “liminal”:

Whilst most robots are currently little more than tools, we are entering an era where there will be new kinds of entities that combine some of the properties of machines and tools with psychological capacities that we had previously thought were reserved for complex biological organisms such as humans. Following Kang [2011, 17], the ontological status of robots might be best described as liminal—neither living quite in the same way as biological organisms, nor simply mechanical as with a traditional machine. The liminality of robots makes them both fascinating and inherently frightening, and a lightning rod for our wider fears about the dehumanising effects of technology. (Prescott 2017, 148)

This is, it should be noted, not unprecedented. We are already familiar with other entities who/that occupy a similar ambivalent social position, like the family dog. In fact animals, which since the time of Descartes’s bête-machine have been the other of the machine (Gunkel 2012 and 2017b), provide a useful precedent for understanding the opportunities and challenges of sociable robots, like Jibo. Some animals, like the domestic pigs that are raised for food, occupy the position of what, being mere things that can be used and disposed of as we see fit. Other animals, like a pet dog, are closer (although not identical) to another person who counts as Other. They are named, occupy a place alongside us inside the house, and are considered by many to be “a member of the family” (Coeckelbergh and Gunkel 2014).

As we have seen, we typically theorize and justify the decision between the what and the who on the basis of intrinsic properties. This approach puts ontology before ethics, whereby what an entity is determines how it comes to be treated. But this method, for all its expediency, also has considerable difficulties (as previously described in chapter 3): (1) substantive problems with inconsistencies in the identification and selection of the qualifying property; (2) terminological troubles with the definition of the morally significant property; (3) epistemological complications with detecting and evaluating the presence of the property in another; and (4) moral concerns caused by the very effort to use this determination to justify extending moral standing to others. In fact, if we return to the example of animals, it seems difficult to justify differentiating between the pig, which is a thing we raise and slaughter for food and other raw materials, and the dog, who is considered to be (at least from the perspective of a contemporary European/North American cultural context, if not beyond) a member of the family, on the basis of ontological properties. In terms of all the usual criteria—consciousness, sentience, suffering, etc.—the pig and the dog seem to be (as far as we are capable of knowing and detecting) virtually indistinguishable. Our moral theory might dictate strict ontological criteria for inclusion and exclusion, but our everyday practices seem to operate otherwise, proving George Orwell’s Animal Farm (1945, 118) correct: “All animals are equal, but some animals are more equal than others.”

Alternative approaches to making these decisions, like that developed by Levinas, recognize that who is or can be Other is much more complicated. The dog, for instance, occupies the place of an Other (or at least Ihde’s “quasi-other”) who counts, while the pig is excluded as a mere thing, not because of differences in their intrinsic properties, but because of the way these entities have been situated in social relationships with us. One of these animals shares our home, is bestowed with a proper name, and is considered to have face, to use the Levinasian terminology. The other one does not. Social robots, like an animal, occupy an essentially undecidable or liminal position that is in between who and what. Whether Jibo or another kind of social robot is or is not an Other, therefore, is not something that will be decided in advance and on the basis of intrinsic properties; it will be negotiated and renegotiated again and again in the face of actual social circumstances and interactions. It will, in other words, be out of the actual social relationships we have with social robots that one will decide whether a particular robot counts or not. It is precisely for this reason that Prescott hesitates to dismiss or write off the actual experiences of users. According to his analysis, what users of technology do in the face of the robot matters and needs to be taken seriously. “We should,” Prescott (2017, 145) concludes, “take into account how people see robots, for instance, that they may feel themselves as having meaningful and valuable relationships with robots, or they may see robots as having important internal states, such as the capacity to suffer, despite them not having such capacities.”

Jibo, and other social robots like this, are not science fiction. They are already or will soon be in our lives and in our homes.9 And in the face of these socially situated and interactive entities, we are going to have to decide whether they are mere things like our car, our house, and our toothbrush; someone who matters like another member of the family; or something altogether different that is situated in between the one and the other as a kind of quasi-other. In whatever way this comes to be decided, however, these entities undoubtedly challenge our concept of ethics and the way we typically distinguish between who is to be considered Other and what is not. Although there are certainly good reasons to be concerned with how these technologies will be integrated into our lives and what the effect of that will be, this concern does not justify alarmist reactions and exclusions. We need, following the moral innovations of Levinas, to hold open the possibility that these devices might also implicate us in social relationships where they take on or are attributed face. At the very least, ethics obligates us—and it does so in advance of knowing anything at all about the inner workings and ontological status of these other kinds of entities—to hold open the possibility that they might become Other. Turkle (2011, 9), therefore, is right about one thing: we are and we should be “willing to seriously consider robots not only as pets but as potential friends, confidants, and even romantic partners.” But this is not, as Turkle had initially suggested, a dangerous weakness or vulnerability to be avoided at all costs. It is ethics.

6.2.2 Ethics Beyond Rights

From one perspective, this Levinasian-influenced relational ethic might appear to be similar, at least functionally similar, to that developed in the second pair of modalities—those that distinguish between and do not derive ought from is. Kate Darling, in particular, advances a case for extending moral and legal considerations to social robots irrespective of what they are or can be. So what makes the Levinasian proposal different from the fourth modality (chapter 5)? The important difference is, in a word, anthropomorphism. For Darling, the reason we ought to treat robots well is because we perceive something of ourselves in them. Even if these traits and capabilities are not really there in the mechanism, we project them onto others—and other forms of otherness, like that represented by social robots—by way of anthropomorphism. For Darling, then, it is because the other looks and feels (to us) to be something like us that we are then obligated to extend to it some level of moral and legal consideration.

For Levinas this anthropomorphic operation is the problem insofar as it reduces the other to a modality of the same—turning what is other into an alter ego and mirrored projection of oneself. Levinas deliberately resists this gesture that already domesticates and even violates the alterity of others.10 Ethics for Levinas is entirely otherwise: “The strangeness of the Other, his irreducibility to the I, to my thoughts and my possessions, is precisely accomplished as a calling into question of my spontaneity, as ethics” (Levinas 1969, 43). For Levinas, then, ethics is not based on “respect for others” but transpires in the face of the other and the interruption of ipseity that it produces. The principal moral gesture, therefore, is not the conferring or extending of rights to others as a kind of benevolent gesture or even an act of compassion but deciding how to respond to the Other who supervenes before me in such a way that always and already places me and my self-mastery in question. Levinasian philosophy, therefore, deliberately interrupts and resists the imposition of power that Birch (1995, 39) finds operative in all forms of rights discourse: “The nub of the problem with granting or extending rights to others … is that it presupposes the existence and the maintenance of a position of power from which to do the granting.” Whereas Darling (2012 and 2016a) is interested in “extending legal protection to social robots,” Levinas provides a way to question the assumptions and consequences involved in this very gesture of power over others.

This alternative configuration, therefore, does not so much answer or respond to the question with which we began as it alters the terms of the inquiry itself. When one asks “Can or should robots have rights?” the form of the question already makes an assumption: namely, that rights are a kind of personal property or possession that an entity can have or should be bestowed with. Levinas, however, does not use the concepts or language of rights, and this is deliberate. “For rights,” as Hillel Steiner (2006, 459) points out, “are essentially about who is owed what by whom.” Considerations of rights, therefore, involve a subject (who) and an object (whom) (Sumner 2000, 289). But this gets ahead of things by making a prior decision about who or what can be the subject and object of rights. As Derrida (1978, 85) points out, Levinasian philosophy is directed against and works to articulate what is anterior to the “primacy of the subject-object correlation.” For this reason, the Other is not, strictly speaking, either the subject of rights (i.e., one or more of the Hohfeldian incidents: privilege, claim, power, and/or immunity) or subjected to rights (as typically formulated in terms of either the Will or Interest Theory). As Coeckelbergh (2016c, 190) succinctly describes it, otherness is not a noun: “alterity is a verb.” The Other (despite this grammatical construction) is not a substantive subject position that can be declared in advance of interactions with others. It is an action or a happening that needs to be responded to in the face of specific social challenges and interactions. “It is,” as Coeckelbergh (2016c, 190) explains, mobilizing Heideggerian terminology, “about experiencing (Erfahren) and something happening to you, which can assume the dimension of a meeting or en-counter (Wiederfahren).”

Levinasian ethics, then, is not reducible to the Hohfeldian incidents of privileges, claims, powers, and/or immunities; nor is it limited to and determined by the Will Theory or Interest Theory of rights. It articulates a mode of thinking that is prior and anterior to these standard determinations. As Levinas (1987, 123) explains: “For the condition for, or the unconditionality of, the self does not begin in the autoaffection of a sovereign ego that would be, after the event, ‘compassionate’ for another. Quite the contrary: the uniqueness of the responsible ego is possible only in being obsessed by another, in the trauma suffered prior to any auto-identification, in an unrepresentable before.” The self or the ego, as Levinas describes it, does not constitute some preexisting self-assured condition that is situated before and is the cause of the subsequent relationship with another. It does not (yet) take the form of empathy or compassion or even respect for the rights (i.e., a privilege, a claim, a power, and/or an immunity) of the other. Rather it becomes what it is as a byproduct of an uncontrolled and incomprehensible exposure to the face of the Other that takes place prior to any formulation of the self in terms of agency.

Likewise, the Other is not comprehended as a “moral patient” who is constituted as the recipient of the agent’s actions and whose interests and rights would need to be identified, taken into account, and duly respected. Instead, the absolute and irreducible exposure to the Other is something that is anterior and exterior to these determinations, not only remaining beyond the range of their conceptual grasp and regulation but also making possible and ordering the antagonistic structure that subsequently comes to characterize the difference that distinguishes the self from its others, the agent from the patient, and the subject from the object of rights. In other words, for Levinas at least, prior decisions about subject and object or moral agent and moral patient do not first establish the terms and conditions of any and all possible encounters that the self might have with others and with other forms of otherness. It is the other way around. The Other first confronts, calls upon, and interrupts self-involvement and in the process determines the terms and conditions by which the standard roles of the subject and object of rights come to be articulated and assigned. An ethics without (or anterior to) rights is an ethics that is open to the moral situation of others.

6.3 Complications, Difficulties, and Potential Problems

This is not to say that Levinas’s innovations do not have their own challenges and problems. And criticisms of Levinasian philosophy are not in short supply. For our purposes, there are at least two important difficulties that complicate the application of Levinas’s work to robots.

6.3.1 Anthropocentrism

Although Levinas effectively dodges and disarms the complications of anthropomorphism (as mobilized in the work of Kate Darling and others), his philosophy, for all its promise of a mode of thinking that is organized and oriented otherwise, has been and remains inescapably anthropocentric. For this reason, applying his work to robots, let alone nonhuman animals or the environment, can only occur by way of a deliberate violation of the proper limits that he himself had established for his own brand of philosophical thinking. Whatever the import of his unique contribution, “Other” in Levinas is and remains unapologetically human. Although he is not the first to identify it, Jeffrey Nealon (1998, 71) provides what is perhaps one of the most succinct descriptions of the problem: “In thematizing response solely in terms of the human face and voice, it would seem that Levinas leaves untouched the oldest and perhaps most sinister unexamined privilege of the same: anthropos [άνθρωπος], and only anthropos, has logos [λόγος]; and as such, anthropos responds not to the barbarous or the inanimate, but only to those who qualify for the privilege of ‘humanity,’ only those deemed to possess a face, only to those recognized to be living in the logos.” For Levinas, as for many of those who follow in the wake of his influence, Other has been exclusively operationalized as another human subject. If, as Levinas argues, ethics precedes ontology, then in Levinas’s own work anthropology and a certain brand of humanism still precede ethics.

There is considerable debate in the subsequent literature concerning this anthropocentrism and its significance. Derrida, for his part, finds it a disturbing residue of humanist thinking and therefore reason for considerable concern: “In looking at the gaze of the other, Levinas says, one must forget the color of his eyes, in other words see the gaze, the face that gazes before seeing the visible eyes of the other. But when he reminds us that the ‘best way of meeting the Other is not even to notice the color of his eyes,’ he is speaking of man, of one’s fellow as man, kindred, brother; he thinks of the other man and this, for us, will later be revealed as a matter for serious concern” (Derrida 2008, 12). What truly “concerns” Derrida is not just the way this anthropocentrism (which in this particular context also exhibits an exclusive gendered configuration that is part and parcel of the humanist tradition) restricts Levinas’s philosophical innovations but the manner by which it already makes exclusive decisions about the (im)possibility of an animal other. Richard Cohen, by contrast, endeavors to give this “problem” a positive interpretation in his “Introduction” to one of Levinas’s books: “The three chapters of Humanism of the Other each defend humanism—the world view founded on the belief in the irreducible dignity of humans, a belief in the efficacy and worth of human freedom and hence also of human responsibility” (Cohen 2003, ix). For Cohen, however, this version of humanism is significantly different; it consists in a radical thinking of the “humanity of the human” as the unique site of ethics: “From beginning to end, Levinas’s thought is a humanism of the other. The distinctive moment of Levinas’s philosophy transcends its articulation but is nevertheless not difficult to discern: the superlative moral priority of the other person. 
It proposes a conception of the ‘humanity of the human,’ the ‘subjectivity of the subject,’ according to which being ‘for-the-other’ takes precedence over, is better than being for-itself. Ethics conceived as a metaphysical anthropology is therefore nothing less than ‘first philosophy’” (Cohen 2003, xxvi).

Consequently, utilizing Levinasian thought for the purposes of addressing the moral/legal status of robots requires fighting against and struggling to break free from the gravitational pull of Levinas’s own anthropocentrism. Fortunately this application—this effort to read Levinas in excess of Levinas—is not unprecedented. There have, in fact, been some impressive efforts at “radicalizing Levinas” (Atterton and Calarco 2010), “broadening Levinas’s sense of the Other” (Davy 2007, 41), and elaborating Levinasian philosophy in “the face of things” (Benso 2001). “Although Levinas himself,” as Calarco (2008, 55) points out, “is for the most part unabashedly and dogmatically anthropocentric, the underlying logic of his thought permits no such anthropocentrism. When read rigorously, the logic of Levinas’s account of ethics does not allow for either of these two claims. In fact … Levinas’s ethical philosophy is, or at least should be, committed to a notion of universal ethical consideration, that is, an agnostic form of ethical consideration that has no a priori constraints or boundaries.” In proposing this alternative reading of the Levinasian tradition, Calarco interprets Levinas against himself, arguing that the logic of Levinas’s account is in fact richer and more radical than the limited interpretation the philosopher had initially provided for it. This means, of course, that we would be obligated to consider all kinds of others as Other, including other human persons, animals, the natural environment, artifacts, technologies, and robots. An “altruism” that tries to limit in advance who can or should be Other would not be, strictly speaking, altruistic.

Unfortunately, even Calarco’s “radicalization of Levinas” does not go far enough. His representative list of previously excluded others, which includes “‘lower’ animals, insects, dirt, hair, fingernails, and ecosystems” (Calarco 2008, 71), may be more responsive to other forms of otherness, but it also makes exclusive decisions. And what is noticeably absent from his list are artifacts, technologies, and robots. The same may be said for Barbara Jane Davy, who also seeks to generalize Levinas’s philosophy beyond its limited anthropocentric restrictions. Like Calarco’s, Davy’s list of other kinds of others is broader, but it is still lacking anything artificial or technological:

In broadening Levinas’s sense of the Other I aim to include not just humans and other animals, but any Other. While for Levinas the Other is always assumed to be a human being, I take his phenomenological understanding of the Other beyond categories such as human, animal, plant, rock, wind, or body of water. In Levinasian ethics, the Other is met as a person rather than thematized or interpreted through categories. The Other interrupts one’s thematization of everything into one’s own view of the world. I contend that other sorts of persons can also interrupt one’s thematization of the world in this manner. (Davy 2007, 41)

Despite these efforts to develop Levinas’s philosophy beyond its inherent limitation to an anthropocentric frame and point of reference, the subsequent elaborations continue to marginalize any kind of technological other. In the face of all kinds of other things that can in principle take on face—animals, plants, rocks, waters, and even fingernails—robots remain faceless and otherwise than Other, or beyond ethics.

6.3.2 Relativism and Other Difficulties

For all its opportunities, the extension of Levinasian philosophy to other forms of otherness risks exposure to the charge of moral relativism—“the claim that no universally valid beliefs or values exist” (Ess 1996, 204) or “that beliefs, norms, practices, frameworks, etc., are legitimate in relation to a specific culture” (Ess 2009, 21). To put it rather bluntly, if moral status is “relational” (Coeckelbergh’s term) and open to different decisions concerning others made at different times for different reasons, are we not at risk of affirming an extreme form of moral relativism? The answer to this question depends on how one defines the term “relativism.” As I have argued elsewhere (Gunkel 2010 and 2012), “relative,” which has an entirely different pedigree in a discipline like physics, need not be construed negatively and decried, as Žižek (2000, 79 and 2006, 281) has often done, as the epitome of postmodern multiculturalism run amok, or, as Robert Scott (1976, 264) characterizes it, as “a standardless society, or at least a maze of differing standards, and thus a cacophony of disparate, and likely selfish, interests.” Instead, it may be the case that, following Scott, “relativism can indicate circumstances in which standards have to be established cooperatively and renewed repeatedly” (Scott 1976, 264). This means that one can remain critical of “moral relativism,” in the usual dogmatic sense of the phrase, while being open and receptive to the fact that moral standards—like many social conventions and legal statutes—are socially constructed formations that are subject to and the subject of difference.

Charles Ess (2009, 21) calls this alternative “ethical pluralism.” “Pluralism stands as a third possibility—one that is something of a middle ground between absolutism and relativism. ... Ethical pluralism requires us to think in a ‘both/and’ sort of way, as it conjoins both shared norms and their diverse interpretations and applications in different cultures, times, and places” (Ess 2009, 21–22). Likewise Floridi (2013, 32) advocates a “pluralism without endorsing relativism,” calling this “middle ground” relationalism: “When I criticize a position as relativistic, or when I object to relativism, I do not mean to equate such positions to non-absolutist, as if there were only two alternatives, e.g. as if either moral values were absolute or relative, or truths were either absolute or relative. The method of abstraction enables one to avoid exactly such a false dichotomy, by showing that subjectivist positions, for example, need not be relativistic, but only relational” (Floridi 2013, 32). And Žižek, for his part, advances a similar position, which he formulates not in terms of ethics but epistemology: “At the level of positive knowledge it is, of course, never possible to (be sure that we have) attain(ed) the truth—one can only endlessly approach it, because language is ultimately self-referential, there is no way to draw a definitive line of separation between sophism, sophistic exercises, and Truth itself (this is Plato’s problem). Lacan’s wager is here the Pascalean one: the wager of Truth. But how? Not by running after ‘objective’ truth, but by holding onto the truth about the position from which one speaks” (Žižek 2008b, 3). Like Floridi, Žižek recognizes that truth is neither absolute (always and everywhere one and the same) nor completely relative (such that “anything goes”); it is always formulated and operationalized from a particular position of enunciation, or what Floridi calls “level of abstraction,” which is dynamic and alterable.

In endorsing this kind of “ethical pluralism” or “theory of moral relativity” and following through on it to the end, what one gets is not a situation where anything goes and “everything is permitted” (Camus 1983, 67). Instead, what is obtained is a kind of ethical thinking that is more dynamic and responsive to the changing social situations and circumstances of the twenty-first century. Although such a formulation sounds workable in theory, it does run into considerable problems when deployed and put into practice. In particular, if any and all things become capable of taking on face—animals, plants, rocks, and even robots—does this also indicate that something might not, for one reason or another, manifest face and therefore come to be treated as a mere tool or instrument for my use and enjoyment? Is it possible that what is to be gained for others, including the robot—namely, that they might supervene before us and possess face—could in fact work in the opposite direction and deface another (another human individual, for instance), who for one reason or another would not achieve the same kind of moral status? If face is not a kind of substantive property of Others, but a kind of event or occurrence of alterity, does the “relational turn” run the risk, as noted by Gerdes (2015, 274), that we might lose something valuable, that “our human-human relations may be obscured by human-robot relations?”

This is a much more practically oriented question and one that cannot be answered by simply deferring to the concept of ethical pluralism or a theory of moral relativity. One possible mode of response—what we might call the conservative version—reaffirms human exceptionalism in the face of all others. This is the position occupied and advocated by Kathleen Richardson:

But if the machine can become another, what does it say for how robotic and AI scientists conceptualise ‘relationship’? Is relationship instrumental? Is relationship mutual and reciprocal? There are many different kinds of relationships that people have. Our market economies structure work so that encounters between people in the work sphere take on a character of formal interactions. … These formal interactions characterize a huge proportion of our lived experience. Some philosophers have characterised these types of relations between persons as “instrumental.” If these kinds of relations are “instrumental” does that make people in the situations “instruments” or “tools?” Humans are never tools or instruments, even if relations between people take on a formal character. When we meet a cashier at a checkout or restaurant service staff, they have not stopped being human just because they are only expressing themselves formally in the given situation. People do not stop being human and turn into instruments when they enter the working environment, and then switch back to being human in the private sphere. In every encounter we meet each other as persons, members of a common humanity. (Richardson 2017, 1)

Richardson’s assertion that “humans are never tools or instruments” not only ignores the fact that for a good part of human history (and even now) human beings have treated other human individuals and communities of individuals as tools or instruments (i.e., slaves) but also takes the rhetorical form of a dogmatic and absolute proclamation, deploying totalizing words like “never” and “in every encounter.” In other words, Richardson’s response to the potential hazards of relationalism or relativism is to retreat to an absolutist position through a kind of totalitarian assertion concerning the truth of things. But it is precisely this gesture and its legacy—this assertion of an unquestioned human exceptionalism and the reduction of difference to the same by way of a common underlying “humanity” that is always defined and defended from some particular position of power—that is in question.

A different way to respond to this question is to recognize, as Anne Foerst suggests (mobilizing the terminology of personhood that had already appeared in Richardson’s assertion), that otherness is not (not in actual practices, at least) unlimited or absolute: “Each of us only assigns personhood to a very few people. The ethical stance is always that we have to assign personhood to everyone, but in reality we don’t. We don’t care about a million people dying in China of an earthquake, ultimately, in an emotional way. We try to, but we can’t really, because we don’t share the same physical space. It might be much more important for us if our dog is sick” (Benford and Malartre 2007, 163). This statement is perhaps more honest about the way that moral decision-making and the occurrence of otherness actually transpire. Instead of declaring an absolutist claim to a kind of totality (Levinas’s word), it remains open to particular configurations of otherness that are more mobile, flexible, and context-dependent. But there remains, as Gerdes (2015) recognizes, something in this formulation that is abrasive to our moral intuitions. This may be due to the fact that this Levinasian-inspired ethic does not make a singular and absolute decision about otherness that stands once and for all, such that there is one determination concerning others that decides everything for all time. The encounter with Others—the occurrence of face in the face of the Other—is something that happens in time and needs to be negotiated and renegotiated again and again. The work of ethics is, therefore, inexhaustible. It is an ongoing and interminable responsibility requiring that one respond and take responsibility for how one responds. Is this way of thinking and doing ethics without risk? Not at all. But the risk is itself the site of ethics and the challenge that one must face in all interactions with others, whether human, animal, or otherwise.

6.4 Summary

“Every philosophy,” Benso (2000, 136) writes in a comprehensive gesture that performs precisely what it seeks to address, “is a quest for wholeness.” This objective, she argues, has typically been pursued in one of two ways: “Traditional Western thought has pursued wholeness by means of reduction, integration, systematization of all its parts. Totality has replaced wholeness, and the result is totalitarianism from which what is truly other escapes, revealing the deficiencies and fallacies of the attempted system” (Benso 2000, 136). This is precisely the kind of violent philosophizing that Levinas (1969) identifies under the term “totality,” and it includes the efforts of both standard agent-oriented approaches (i.e., consequentialism, deontologism, virtue ethics, etc.) and non-standard patient-oriented approaches (i.e., animal rights, bioethics, and information ethics). The alternative to these totalizing transactions is a philosophy that is oriented otherwise, like that proposed by Levinas. This other approach, however, “must do so by moving not from the same, but from the other, and not only the Other, but also the other of the Other, and, if that is the case, the other of the other of the Other. In this must, it must also be aware of the inescapable injustice embedded in any formulation of the other” (Benso 2000, 136). And this “injustice” that Benso identifies and calls out is evident not only in Levinas’s exclusive anthropocentrism but also in the way that those who seek to redress this “humanism of the other” continue to exclude or marginalize technology in general and robots in particular.

For these reasons, the question concerning robot rights does not end with a single definitive answer—a simple and direct “yes” or “no.” But this outcome is not, we can say following the argumentative strategy of Daniel Dennett’s “Why You Can’t Make a Computer That Feels Pain” (1978 and 1998), necessarily a product of some inherent or essential deficiency with the technology. Instead it is a result of the fact that the discourse of rights, as well as the methods by which moral standing has typically been questioned and resolved, already deploys and relies on questionable constructions and logics. “Whether or not it is acceptable to grant rights to some robots,” as Coeckelbergh (2010a, 219) concludes, “reflection on the development of artificially intelligent robots reveals significant problems with our existing justifications of moral consideration.” The question concerning the rights of robots, therefore, is not simply a matter of asking about the extension of moral consideration to one more historically excluded other, which would, in effect, leave the existing mechanisms of moral philosophy in place, fully operational, and unchallenged. Instead, the question of robot rights (assuming that it is desirable to retain this particular vocabulary) makes a fundamental claim on ethics, requiring us to rethink the systems of moral considerability all the way down. This is, as Levinas (1981, 20) explains, the necessarily “interminable methodological movement of philosophy” that continually struggles against accepted norms and practices in an effort to think otherwise—not just differently, but in ways that are responsive to and responsible for others.

Consequently, this book ends not as one might have initially expected—that is, by accumulating evidence or arguments in favor of permitting robots, a particular kind of robot, or even just one representative robot, entry into the community of moral and/or legal subjects. Instead, it concludes with a fundamental challenge to ethics and the way moral philosophy has typically defined, decided, and defended the question of the standing and status of others. Although ending in this way—in effect, responding to a question with yet another question—is commonly considered bad form, this is not necessarily the case. This is because questioning is definitive of the philosophical enterprise. “I am,” Dennett (1996, vii) writes, “a philosopher and not a scientist, and we philosophers are better at questions than answers. I haven’t begun by insulting myself and my discipline, in spite of first appearances. Finding better questions to ask, and breaking old habits and traditions of asking, is a very difficult part of the grand human project of understanding ourselves and our world.” The objective of Robot Rights, therefore, has not been to advocate on behalf of a cause, to advance some definitive declaration, or to offer up a preponderance of evidence to prove one or the other side in the existing debates. The goal has been to ask about the very means by which we have gone about trying to articulate and formulate this problem and its investigation. The issue, then, is not only investigating whether or not robots can and should have rights, but also, and perhaps more importantly, exposing how these questions concerning rights have been configured and how these configurations already accommodate and/or marginalize others. Or to put it in terms devised by Žižek (2006b, 137), the question “Can and should robots have rights?” might already be questionable insofar as the very form of the inquiry—the very way we have perceived and defined the problem—“is already an obstacle to its solution.”

For this reason, the examination of robot rights is not just one more version or iteration of an applied moral philosophy; its investigation opens up and releases a thorough and profound challenge to what is called “ethics.” And answering this call will be the task of thinking of and for the future. In other words, if the problem from the beginning has been the fact that the question of robot rights is and remains “unthinkable” (Levy 2005, 393)—or if not entirely unthinkable then at least so difficult to think11 that it occupies a marginal position in the efforts of roboethics, robot ethics, machine ethics, robophilosophy, etc.—then it will have been sufficient for us that the book has made it possible to think the unthought (to use that distinctly Heideggerian formulation) or (if one would prefer something less continental and more analytic) to render tractable the question “Can and should robots have rights?” Where we go from here, or what can and should be thought or done from this point forward, remains a question that will need to be answered in the face of the robot.

Notes