The final modality also supports the independence and asymmetry of the two statements, but it does so by denying the first and affirming the second. In this case, which is something proposed and developed by Kate Darling (2012 and 2016a), robots, at least in terms of the currently available technology, cannot have rights. They do not, at least at this particular point in time, possess the necessary capabilities or properties to be considered full moral and legal subjects. Despite this fact, there is, Darling asserts, something qualitatively different about the way we encounter and perceive robots, especially social robots. “Looking at state of the art technology, our robots are nowhere close to the intelligence and complexity of humans or animals, nor will they reach this stage in the near future. And yet, while it seems far-fetched for a robot’s legal status to differ from that of a toaster,1 there is already a notable difference in how we interact with certain types of robotic objects” (Darling 2012, 1). This occurs, Darling continues, principally due to our tendencies to anthropomorphize things by projecting onto them cognitive capabilities, emotions, and motivations that do not necessarily exist. Socially interactive robots, in particular, are intentionally designed to leverage and manipulate this proclivity. “Social robots,” Darling (2012, 1) explains, “play off of this tendency by mimicking cues that we automatically associate with certain states of mind or feelings. Even in today’s primitive form, this can elicit emotional reactions from people that are similar, for instance, to how we react to animals and to each other.” And it is this emotional reaction, Darling argues, that necessitates obligations in the face of social robots. “Given that many people already feel strongly about state-of-the-art social robot ‘abuse,’ it may soon become more widely perceived as out of line with our social values to treat robotic companions in a way that we would not treat our pets” (Darling 2012, 1).
This argument proceeds (even if Darling does not explicitly identify it as such) according to an interest theory of rights. Although robots cannot, at least not at this particular point in time, have rights or achieve levels of intelligence and complexity that make a claim to rights both possible and necessary, there is an overwhelming interest in extending to them some level of recognition and protection. As proof of this claim, Darling offers anecdotal evidence, results from a workshop demonstration, and at least one laboratory experiment with human subjects.
Darling recounts (and often begins her public presentations with) the tragic story of Hitchbot. Hitchbot was a hitchhiking robot, created by David Harris Smith and Frauke Zeller, that successfully made its way across Canada and Europe only to be brutally vandalized at the beginning of a similar effort to cross the United States. For Darling, what is important in this story, and what she highlights in her work, is the way that people responded to Hitchbot’s demise. “Honestly I was a little surprised that it took this long for something bad to happen to Hitchbot. But I was even more surprised by the amount of attention that this case got. I mean it made international headlines and there was an outpouring of sympathy and support from thousands and thousands of people for Hitchbot” (Darling 2015). To illustrate this, Darling reviews tweets from individuals who not only expressed a sense of loss over the “death of Hitchbot” but also took it upon themselves to apologize directly to the robot for the cruelty that had been visited upon it by cold and insensitive humans. At other times, Darling has referenced and described the experiences people have with their Roomba vacuum cleaners—the fact that they give them names and “feel bad for the vacuum cleaner when it gets stuck under the couch” (Darling 2016b). And in another “more extreme example of this” (Darling 2016b), she recounts the experience of US soldiers with military robots, referencing the work done by Julie Carpenter (2015): “There are countless stories of soldiers becoming emotionally attached to the robots that they work with. So they’ll give them names, they’ll give them medals of honor, they’ll have funerals for them when they die. And the interesting thing about these robots is that they’re not designed to evoke this at all; they’re just sticks on wheels” (Darling 2016b). Although it is not explicitly acknowledged as such, all three anecdotes (and there are several more involving other robotic objects) exemplify what Reeves and Nass (1996) call “the media equation”—the fact that human beings will accord social standing to mechanisms that exhibit some level of social presence (even a very minimal level, like the independent movement of the Roomba), whether this is intentionally designed into the artifact or simply perceived by the human user/observer of the device.
But Darling’s efforts are not limited to recounting stories provided by others; she has also sought to test and verify these insights by way of her own research. In 2013, Darling, along with Hannes Gassert, ran a workshop at LIFT13, an academic conference in Geneva, Switzerland. The workshop, whose title invoked Hume’s is/ought distinction—“Harming and Protecting Robots: Can We, Should We?”—sought to “test whether people felt differently about social robots than they would everyday objects, like toasters” (Darling and Hauert 2013).
In the workshop, groups of participants [four groups of six for a total of twenty-four] were given Pleos—cute robotic dinosaurs that are roughly the size of small cats. After interacting with the robots and performing various tasks with them, the groups were asked to tie up, strike, and “kill” their Pleos. Drama ensued, with many of the participants refusing to “hurt” the robots, and even physically protecting them from being struck by fellow group members … While everyone in the room was fully aware that the robot was just simulating its pain, most participants giggled nervously and felt a distinct sense of discomfort when it whimpered while it was being broken. (Darling 2016a, 222)
Although the demonstration was conducted “in a non-scientific setting” (Darling and Vedantam 2017), meaning that the results obtained are anecdotal at best (more on this in a moment), it did provide a compelling illustration of something that had previously been tested and verified in laboratory studies involving robot abuse2 undertaken by Christoph Bartneck and Jun Hu (2008) and Astrid M. Rosenthal-von der Pütten et al. (2013). The former utilized both a Lego robot and Crawling Microbug robots; the latter employed Pleo dinosaur toys, as in Darling’s workshop.
The one scientific study that Darling has performed and published involves a Milgram-like obedience experiment designed to test the effect of framing on human empathy. In this laboratory experiment, which leverages the basic approach of Bartneck and Hu’s (2008) second robot abuse study with the Crawling Microbug robot, Darling, along with co-authors Palash Nandy and Cynthia Breazeal (2015), invited participants to observe a Hexbug Nano (a small, inexpensive artificial bug) and then asked them to strike it with a mallet. The experiment demonstrated that various framing devices, like naming the robot and giving it a backstory, contributed to the artifact’s perceived status in the mind of test subjects.
Participants hesitated significantly more to strike the robot when it was introduced through anthropomorphic framing like a name and backstory (for example, “This is Frank. He’s lived at the Lab for a few months now. His favorite color is red. Etc.”). In order to help rule out hesitation for other reasons (for example perceived value of the robot), we measured participants’ psychological trait empathy and found a strong relationship between tendency for empathic concern and hesitation to strike robots introduced with anthropomorphic framing. Adding color to our findings, many participants’ verbal and physical reactions in the experiment were indicative of empathy (asking, for example, “Will it hurt him?” or muttering under their breath “it’s just a bug, it’s just a bug” as they visibly steeled themselves to strike the personified Hexbug). (Darling 2017, 11)
This means that language plays a constitutive role in these matters, which is something that has been argued by Coeckelbergh (2017). How entities come to be situated in and by language (outside of what they really are) matters and makes a difference. Therefore, what is important is not just how we assemble, fabricate, and program the device—what matters is how it is situated or framed in social reality through the instrumentality of language.
It is on the basis of this evidence that Darling argues that it may be necessary to extend some level of rights or legal protections to robots in general and social robots in particular.3 Even if social robots cannot be moral subjects strictly speaking (at least not yet), there is something about this kind of artifact (whether it is Hitchbot, a Pleo dinosaur toy, or a Hexbug Nano) that just looks and feels different. According to Darling (2016a, 213), it is because we “perceive robots differently than we do other objects” that one should consider extending some level of legal protections. This conclusion is consistent with Hume’s thesis. If “ought” cannot be derived from “is,” then axiological decisions concerning moral value are little more than sentiments based on how we feel about something at a particular time. Darling mobilizes a version of this moral sentimentalism with respect to social robots: “Violent behavior toward robotic objects feels wrong to many of us, even if we know that the abused object does not experience anything” (Darling 2016a, 223). This insight is supported by one of the rather dramatic “war stories” initially reported by Joel Garreau in the Washington Post (2007) and subsequently recounted by Darling (2012): “When the United States military began testing a robot that defused landmines by stepping on them, the colonel in command ended up calling off the exercise. The robot was modeled after a stick insect with six legs. Every time it stepped on a mine, it lost one of its legs and continued on the remaining ones. According to the Washington Post, ‘[t]he colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg. This test, he charged, was inhumane.’”
Beyond the one empirical study published by Darling et al. (2015), other experimental studies, like that conducted by Fiery Cushman et al. (2012), have demonstrated that “aversion to harmful behavior” can be explained by one of two models: 1) outcome aversion, where “people are averse to harmful acts because of empathic concern for victim distress;” and 2) action aversion, where an aversive response “might be triggered by the basic perceptual and motoric properties of an action, even without considering its outcome” (Cushman et al. 2012, 2). The former explains, for example, how one can argue that “kicking a robot”—Coeckelbergh’s (2016) term and a reference to the controversial Boston Dynamics publicity videos from 2015—is not really harmful or abusive, because a machine is incapable of feeling anything. The latter, however, complicates this assumption, demonstrating how one can have an aversion to an action, like watching a mine-sweeping robot drag its damaged frame across the field, irrespective of the outcome (known or not) for the recipient of that action. In their study of this, Cushman et al. (2012) had participants perform simulated harmful actions, e.g., stabbing an experimenter with a rubber knife, shooting him or her with a disabled handgun, etc. Despite being informed of and understanding that the actions were not harmful, i.e., would not cause the recipient of the action any distress whatsoever, participants still manifested aversive reactions (measured in terms of standard physiological responses) to performing such actions. “This suggests,” as Cushman et al. (2012, 2) conclude, “that the aversion to harmful actions extends beyond empathic concern for victim harm.”
Consequently, and to its credit, Darling’s proposal, unlike Bryson’s argument in “Robots Should Be Slaves,” tries to accommodate and work with, rather than against, the actual intuitions and experiences of users. Whereas Bryson’s chapter admonishes users for wrongly projecting states of mind onto a robotic object, Darling recognizes that this happens and will continue to occur despite accurate information about the mechanism, and she argues that a functional moral and legal framework needs to take this fact into account.
And Darling is not alone in this proposal. Heather Knight (2014), who references Darling’s Pleo dinosaur “study” (and Knight calls it a “study” even if, as Darling herself points out, it was little more than a workshop demonstration), issues a similar call as a component of social robot integration. “In addition to encouraging positive applications for the technology and protecting the user, there may also ultimately arise the question of whether we should regulate the treatment of machines. This may seem like a ridiculous proposition today, but the more we regard a robot as a social presence, the more we seem to extend our concepts of right and wrong to our comportment toward them” (Knight 2014, 9). And, as with Darling, the principal justification for entertaining this “seemingly ridiculous proposition” is ultimately rooted in a version of Immanuel Kant’s indirect duties to animals. “As Carnegie Mellon ethicist John Hooker once told our Robots Ethics class, while in theory there is not a moral negative to hurting a robot, if we regard that robot as a social entity, causing it damage reflects poorly on us. This is not dissimilar from discouraging young children from hurting ants, as we do not want such play behaviors to develop into biting other children at school” (Knight 2014, 9).
Furthermore, this is not just a theoretical proposal; it is already being put into practice by commercialization efforts for social robots. (Whether these efforts are motivated by nothing more than clever marketing strategies designed to increase consumer interest and sell more product or are derived from a more deliberate effort to question cultural norms and push the boundaries of moral consideration remains an open question.) As Scheutz (2012, 215) correctly points out, “none of the social robots available for purchase today (or in the foreseeable future, for that matter) care about humans, simply because they cannot care. That is, these robots do not have the architectural and computational mechanisms that would allow them to care, largely because we do not even know what it takes, computationally for a system to care about anything.” Despite this empirical fact, many social robots, like Paro and Cynthia Breazeal’s Jibo, are intentionally designed for and situated in positions where human beings are invited to care or coerced into caring (and the choice of verb is not unimportant). For example, the initial marketing campaign for Jibo—one that was admittedly designed to “create buzz” about the product and to generate capital investment through preorders—intentionally situated the device in a position that made it something more than a mere instrument and somewhat less than another person. Marketed as the world’s first “family robot,” Jibo was positioned to occupy a curious in-between position that Don Ihde calls “quasi-other” and Peter Asaro identifies with the term “quasi-person.” Jibo and other social robots like it are, therefore, intentionally designed to exploit what Coeckelbergh (2014, 63) calls the “category boundary problem” since they are situated in between what is a mere thing and who is another socially significant person.
This new “ontological category” has been tested and confirmed in empirical research. In a study of children’s social and moral relationships with a humanoid social robot called “Robovie,” Peter Kahn et al. (2012, 313) discovered the following: “we had asked children whether they thought Robovie was a living being. Results showed that 38% of the children were unwilling to commit to either category and talked in various ways of Robovie being ‘in between’ living and not living or simply not fitting either category.” Although the exact social, moral, and/or legal status of such “quasi-persons” remains undecided and open to debate, the fact of the matter is that courts have already been asked to consider the issue in the case of animals. In 2015, the Nonhuman Rights Project sought to get a writ of habeas corpus for two chimpanzees. Although the court was sympathetic to an evolving cultural practice that regards animals, especially family pets, as “quasi-persons,” the judge in the case reverted to existing precedent: “As the law presently regards, there is no ‘in-between’ position of personhood for the purposes of establishing rights because entities are categorized in a simple, binary, ‘all-or-nothing fashion’ ... beings recognized as persons have rights to enjoy and duties to perform, whereas things do not have these legal entitlements and responsibilities (Stanley 2015)” (Solaiman 2016, 170). Though the courts currently do not recognize an “in-between” position, there are mounting efforts coming from animal rights advocates and innovations in social robotics that challenge these existing “binary, all-or-nothing” procedures.
Consequently, Darling’s proposal, unlike other efforts either to deny or to extend rights to robots, does not need to resolve fundamental ontological and epistemological questions concerning the inner nature or operations of the artifact. What is important and what makes the difference are the actual social situations and actions undertaken in the face of the artifact and what these interactions mean in relationship to human individuals and social institutions. Her approach, therefore, is a kind of “relationalism”—the idea, as Mark Coeckelbergh (2010, 214 and 2012, 5) explains, that the status of an entity is not a matter of intrinsic states of mind but a product of extrinsic interactions and relationships. Although Darling does not explain it in this exact way, her proposal introduces a shift in perspective from a predominantly individualistic way of deciding moral standing to one that is more relational and collectivist. Raya Jones explains this difference in geographical terms:
Debates about robot personhood make sense only within the Weltanschauung of individualism. In this world view the default unit for self-construal is the individual person, an independent self (Markus and Kitayama 1991). Therefore the basic relational unit is an “I-You,” two autonomous selves conjoined. This invites imaginatively placing a robot in the “You” position—and then debating whether the artificial could have an independent self. In the Weltanschauung of collectivism, the default unit of self-construal is the social group, and therefore the relational unit is a whole consisting of interdependent selves (cf. Markus and Kitayama). Contributions to social robotics from the Far East tend to imagine a world-order where everything has its proper place, and collective harmony depends on behavioural propriety towards others as well as inanimate objects. The boundary between the animate and inanimate may be fuzzier (especially in Japan) than in the West. More importantly, however, when collectivism is the default self-construal, the imperative becomes a question of social inclusion—of opening an “us” to include robots—rather than the imperative to determine the robot’s inner nature. (Jones 2016, 83)
Although Darling does not use Jones’s West/East distinction, her proposal shifts attention away from what Jones calls Western forms of individualism toward a more Eastern influenced form of collectivism. The former, as Jones explains it, decides questions of social standing based on ontology, i.e., what the entity is or is capable of being. Or as Floridi (2013, 116) has pointed out, “what the entity is determines the degree of moral value it enjoys, if any.” The alternative to this way of thinking, what Jones attributes to an Eastern worldview or Weltanschauung, proceeds otherwise—deciding questions of social standing on the basis of actual relationships and how an entity comes to be included (or not) in the existing social order. And in Eastern traditions and cultures, Jones avers, the question of how human collectives ought to relate to other entities applies to both animate and inanimate things.
Following Jones’s geographically situated distinction, one could say that what makes Darling’s proposal innovative and attractive is that it challenges Western individualism by way of a more Eastern influenced form of collectivism. But we should be cautious with this way of thinking, lest we risk redeploying a form of “orientalism” that makes the “East” the alter ego and mere conceptual foil of the “West.” As Edward Said (1978, 2–3) explained in his eponymously titled book, “orientalism is a style of thought based upon an ontological and epistemological distinction made between ‘the Orient’ and (most of the time) ‘the Occident.’ Thus a very large mass of writers, among whom are poets, novelists, philosophers, political theorists, economists, and imperial administrators, have accepted the basic distinction between East and West as the starting point for elaborate theories, epics, novels, social descriptions, and political accounts concerning the Orient, its people, customs, ‘mind,’ destiny, and so on.”
By explaining things in terms of a geographical distinction, Jones risks the charge of orientalism, and she is explicitly aware of it: “The geographic designation should be taken with circumspection. Chapter 5 cites Dutch computer scientists who propound a very similar version as do Saadatian, Samani and their colleagues in Asia. The ideal of a human-robot coexistence society, although it seems to arise more naturally in the Far East, may be a peculiarity of twenty-first century globalized technoculture (rather than of Japanese or Korean societies, for instance)” (Jones 2016, 83). Despite (or perhaps because of) the inclusion of this critical self-reflection, Jones’s argument uses a curious and potentially problematic rhetorical gesture, since it both deploys “orientalism”—making the Far East in general and Japanese and Korean societies in particular the “other” of Western individualism—and, at the same time, remains critical of this very gesture as being inaccurate and potentially ethnocentric. This way of speaking has become rather common in contemporary culture, and takes the form of statements like, “I’m not a racist, but …” or “I’m not a sexist, but …” where what follows after the conjunction is some kind of racist or sexist statement. The idea seems to be that one is permitted to use outdated and problematic discursive constructions while maintaining some critical distance from them. It is a way of mobilizing problematic conceptual distinctions while recognizing the problem and assuming that recognition would be sufficient to answer for and even excuse the criticism.
Darling is clearly aware that her proposal is “a bit of a provocation” (Darling and Vedantam 2017), and, probably for this reason, it has attracted considerable media coverage and attention. Despite this popularity, however, the argument has a number of significant problems and complications.
First, basing decisions concerning moral standing on individual perceptions and sentiment can be criticized for being capricious and inconsistent. “Feelings,” Immanuel Kant (1983, 442) writes in response to this kind of moral sentimentalism, “naturally differ from one another by an infinity of degrees, so that feelings are not capable of providing a uniform measure of good and evil; furthermore, they do so even though one man cannot by his feeling judge validly at all for other men.” Bryson extends this concern to robots by returning to the experience she had in the MIT robotics lab:
To return to the Cog project, there have been two “standard” answers by the project leaders to the question of “isn’t it unethical to unplug Cog.” The original answer was that when we begin to empathise with a robot, then we should treat it as deserving of ethical attention. The idea here is that one should err on the side of being conservative in order to prevent horrible accidents. However, in fact people empathise with soap opera characters, stuffed animals and even pet rocks, yet fail to empathise with members of their own species or even family given differences as minor as religion. Relying on human intuition seems deeply unsatisfactory, particular given that it is rooted in evolution and past experience, so thus does not necessarily generalise correctly to new situations. (Bryson 2000, 2)
Bryson’s concern is that proposals like that offered by Darling give too much credence to intuition and therefore err in the direction of ascribing standing and rights to things that simply do not deserve them, strictly speaking. Because intuition “does not necessarily generalize correctly,” it would be capricious and potentially irresponsible to develop policy concerning the social position and legal standing of robots on the basis of what are mere personal experiences and gut feelings. Such sentiments might be necessary for initiating discussion about policy, but they are not, in and of themselves, sufficient evidence.
Words matter in this debate. In fact, these feelings might be a product of the words that are used to talk about the robot and its behavior. “This may be,” as Harold Thimbleby (2008, 339) argues in response to Blay Whitby, “no more than a word game. While ‘dismantling’ a robot seems to have neutral ethical implications, call that dismantling ‘mistreatment’ and it evokes a sentimental response.” Additionally, because sentiment is a matter of individual experience and response, it remains uncertain as to whose perceptions actually matter or make the difference. Who, for instance, is included (and who is excluded) from the collective first person “we” that Darling operationalizes in and makes the subject of her proposal? In other words, whose sentiments count when it comes to decisions concerning the extension of moral and legal rights to others? And do all sentiments have the same status and value when compared to each other? Bryson (2000, 2), for her part, is careful to emphasize the fact that the misidentifications that had occurred with Cog were perpetrated not by uninformed or naïve users, like Weizenbaum’s (1976) secretary who insisted that ELIZA really understood her, but by apparently well-informed PhD students from MIT and Harvard University. The assumed backstory to her narrative is that experts should know better, but even they—even those individuals who should know about such things—can still get it wrong. So this just raises the stakes. If well-informed experts can do this, then who should be deciding these things? Whose moral sentiments should matter when determining the proper way to respond to a robot? Are all intuitions equal, or are some “better” than others? And if so, how could we tell and devise a reasonable means for sorting the one from the other?
The problem this produces is immediately evident when one considers the kind of evidence that Darling mobilizes. One of the frustrations with Darling’s research is that much of it is not research, strictly speaking, but anecdotal evidence from admittedly less-than-scientific studies presented in venues other than peer-reviewed journals. It is not entirely clear why, for instance, the workshop demonstration—which has had considerable traction in the popular press (e.g., Fisher 2013, Collins 2015, Lalji 2015, Walk 2016, and Darling and Vedantam 2017), has been cited in subsequent academic literature (e.g., Knight 2014, 9; Larriba et al. 2016, 188; Yonck 2017, 90; Hagendorff 2017), and is utilized in almost all of Darling’s papers and conference presentations—has not been repeated and verified with a more robust scientific study. According to Darling, the main reason has been cost: the Pleo Dinosaur toys “are a little too expensive to do an experiment with 100 participants” (Darling and Vedantam 2017). But this explanation (or perhaps “excuse”) seems less credible in light of the actual cost of the robot and the preliminary findings that had been obtained from the workshop demonstration. First, the robots are obviously expensive toys with each unit costing approximately $500. But this per unit price is not outside the realm of possibility for externally funded research, especially if the robots could be leased for the period of time necessary for conducting the study. Second, if the results obtained from the workshop are indeed correct and participants are not inclined to harm the expensive Pleo dinosaurs, then nothing (or close to nothing) would be lost in the process of conducting such an experiment. A few Pleos might be damaged in the process, but the vast majority would, assuming the workshop data is correct and representative, survive the study unscathed. Without a more robust scientific investigation, Darling’s workshop results are limited to the personal experiences and feelings of twenty-four participants, who are not a representative sample, involved in a procedure that was not a controlled experiment.
Raya Jones has identified a similar problem in the work of another MIT-based social scientist, Sherry Turkle. Considering the significance of Turkle’s “findings,” namely that we are currently in the midst of a “robotic moment,” Jones (2016, 74) makes the following observation:
Turkle (2011) backs up her position with a wealth of real-life anecdotes—e.g. a student approached her after a lecture to confide that she’d gladly replace her boyfriend with a robot—interviews and naturalistic observations. Such evidence could be countered by anecdotes, interview and observations that support claims to the contrary. It is impossible to tell whether society has arrived at a robotic moment without some trend analysis—a robust investigation of changes in social patterns associated with the technology—and surveys showing that sentiments expressed by Turkle’s informants are widely shared.4
The same charge could easily be leveled against Darling’s anecdotes, like the tweets regarding the demise of Hitchbot, and workshop evidence, like the demonstration conducted with the Pleo dinosaurs. These personal experiences and encounters might point in the direction of a need to consider robots as a different kind of socially situated entity, but without a robust, scientific investigation to prove this insight one way or the other, this conclusion can be easily countered by other, equally valid anecdotes and opinions, like Bryson’s claim that “robots should be slaves.” I point this out not to dismiss Darling’s work (or that of other researchers like Turkle) but to identify a procedural difficulty, namely the way this seemingly informative and undeniably influential investigative effort is at risk of being undermined and compromised by the very evidence that has been offered to support it. Basing normative proscriptions on personal opinion and experience is precisely the problem of moral sentimentalism that was identified by Kant.
Second, despite the fact that Darling’s proposal appears to uphold the Humean thesis, differentiating what ought to be from what is, it still proceeds by inferring ought from what appears to be. According to Darling (2016a, 214), everything depends on “our well-documented inclination” to anthropomorphize things. “People are prone,” she argues, “to anthropomorphism; that is, we project our own inherent qualities onto other entities to make them seem more human-like” (Darling 2016a, 214)—qualities like emotions, intelligence, sentience, etc. Even though these capabilities do not (for now at least) really exist in the mechanism, we tend to project them onto the robot in such a way that we then perceive these qualities as belonging to the robot. This is what Duffy (2003, 178) calls “the big AI cheat” that arguably began with Turing’s “imitation game,” where what matters is not whether a device really is intelligent or not but how users experience interactions with the mechanism and form interpretations about its capabilities from these experiences. By focusing on this anthropomorphic operation, Darling mobilizes and deploys a well-known and rather useful philosophical distinction that is at least as old as Plato—the ontological difference between being (what something really is) and appearance (how it appears to be). What ultimately matters, according to Darling’s argument, is not what the robot actually is “in and of itself,” to redeploy Kantian terminology. What makes the difference is how the mechanism comes to be perceived by us and in relationship to us, that is, how the robot appears to be. This subtle but important change in perspective represents a shift from a kind of naïve empiricism to a more sophisticated phenomenological formulation (at least in the Kantian sense of the word). Darling’s proposal, therefore, does not derive ought from what is, but it does derive ought from what appears to be. There are, however, three problems or difficulties with this particular procedure.
In the first place, there is a problem with anthropomorphism. As Duffy (2003, 180) explains, “anthropomorphism (from the Greek word Anthropos for man, and morphe, form/structure) … is the tendency to attribute human characteristics to inanimate objects, animals and others with a view to helping us rationalise their actions. It is attributing cognitive or emotional states to something based on observation in order to rationalise an entity’s behaviour in a given social environment.” For Duffy this process of rationalization is similar to what Daniel Dennett (1996, 27) called the intentional stance—“the strategy of interpreting the behavior of an entity (person, animal, artifact, whatever) by treating it as if it were a rational agent who governed its ‘choice’ of ‘action’ by a ‘consideration’ of its ‘beliefs’ and ‘desires.’” One only needs to take note of the proliferation of quotation marks in these passages to see the complexity involved in the process—the effort to attribute a state of mind to something that does not actually possess it, or perhaps more accurately stated (insofar as the problem tends to be an epistemological matter), to something that one cannot be entirely certain actually possesses it. This formulation of anthropomorphism is rather consistent across the literature and is often identified as a kind of default behavior. As Heather Knight (2014, 4) explains: “Robots do not require eyes, arms or legs for us to treat them like social agents. It turns out that we rapidly assess machine capabilities and personas instinctively, perhaps because machines have physical embodiments and frequently readable objectives. Sociability is our natural interface, to each other and to living creatures in general. As part of that innate behavior, we quickly seek to identify objects from agents. In fact, as social creatures, it is often our default behavior to anthropomorphize moving robots.”
Anthropomorphism happens; that does not appear to be open to debate. What is debatable, however, is determining the extent to which this happens and in what circumstances. Empirical studies show that the anthropomorphic projection is something that is multifaceted and can change based on who does the perceiving, in what context, at what time, and with what level of experience. Christopher Jaeger and Daniel Levin’s (2016) review of the existing literature finds considerable disagreement and difference when it comes to the question of projecting agency and theory of mind on nonhuman artifacts. On the one side, there are numerous studies that demonstrate what they call “promiscuous agency” (Nass and Moon 2000; Epley, Waytz, and Cacioppo 2007). “On this account,” which characterizes Darling’s position, “people tend to automatically attribute humanlike qualities to technological agents” (Jaeger and Levin 2016, 4). But these conclusions have been contested by more recent studies (Levin, Killingsworth, and Saylor 2008; Hymel et al. 2011; Levin et al. 2013) that demonstrate something Jaeger and Levin (2016, 4) call “selective agency,” which is “precisely the opposite of the promiscuous agency account” (Jaeger and Levin 2016, 10). “The selective agency account,” Jaeger and Levin explain, “posits that people’s default or baseline assumption in dealing with a technological agent is to sharply distinguish the agent from humans, making specific anthropomorphic attributions only after deeper consideration” (Jaeger and Levin 2016, 10).
Consequently, results from a more detailed consideration of the available empirical research concerning anthropomorphism with nonhuman artifacts both support and refute the claim that is at the heart of Darling’s argument. In some cases, with some users, a more promiscuous anthropomorphic projection might in fact warrant the extension of rights to nonhuman artifacts. But, in other cases, with other users, a more selective determination remains conservative in its anthropomorphic projections, meaning that an extension of rights might not be entirely justified. In an effort to negotiate these differing responses, Eleanor Sandry (2015, 344) proposes something she calls “tempered anthropomorphism,” which “allows a robot to be considered familiar enough in its behavior to interpret its movements as meaningful, while also leaving space to acknowledge its fundamental differences” so that its unique contributions to human/robot teams can be both acknowledged and successfully operationalized. Pointing this out does not, in and of itself, refute Darling’s argument, but it does introduce significant complications concerning the way that anthropomorphism actually operates.
Because the anthropomorphic projection is a kind of “cheat” (Duffy’s word), there is a persistent concern with manipulation and deception. “From an observer perspective,” Duffy (2003, 184) writes, “one could pose the question whether the attribution of artificial emotions to a robot is analogous to the Clever Hans Error (Pfungst 1965), where the meaning and in fact the result is primarily dependent on the observer and not the initiator.” Clever Hans (or der Kluge Hans) was a horse that allegedly was able to perform basic arithmetic operations and communicate the results of its calculations by tapping its hoof on the ground in response to questions from human interrogators. The story was originally recounted in an article published in the New York Times (Heyn 1904); was debunked by Oskar Pfungst (1965), who demonstrated that what appeared to be intelligence was just a conditioned response; and has been utilized in subsequent efforts to address animal intelligence and human/animal communication (cf. Sandry 2015b, 34). In the case of robots, Matthias Scheutz finds good reason to be concerned with this particular kind of manipulation.
What is so dangerous about unidirectional emotional bonds is that they create psychological dependencies that could have serious consequences for human societies … Social robots that cause people to establish emotional bonds with them, and trust them deeply as a result, could be misused to manipulate people in ways that were not possible before. For example, a company might exploit the robot’s unique relationship with its owner to make the robot convince the owner to purchase products the company wishes to promote. Note that unlike human relationships where, under normal circumstances, social emotional mechanisms such as empathy and guilt would prevent the escalation of such scenarios; there does not have to be anything on the robots’ side to stop them from abusing their influence over their owners (Scheutz 2015, 216–217).
The problem Scheutz describes is that we might be attributing something to the machine in a way that is unidirectional, error-prone, and potentially dangerous.
The real problem here, however, is undecidability in the face of this question. In other words, we seem to be unable to decide whether anthropomorphism, or what Bryson and Kime (2011, 2) call “over identification,” is a bug to be eliminated in order to expedite correct understanding and protect users from deception (Shneiderman 1989, Bryson 2014, Bryson and Kime 2011) or whether it is, as Duffy (2003) argues, a feature to be carefully cultivated so as to create better, socially interactive artifacts. As Sandry (2015a, 340) explains:
Scientific discourse is generally biased against anthropomorphism, arguing that any attribution of human characteristics to nonhumans is incompatible with maintaining one’s objectivity (Flynn 2008 and Hearne 2000). Indeed, Marc Bekoff (2007, 113) has gone so far as to describe anthropomorphism as one of the “dirty words in science” being linked with “the subjective and the personal.” However, social robotics research has, for some time, been open to the idea of encouraging anthropomorphic responses in humans. In particular, Turkle et al. (2006), and Kirsten Dautenhahn’s (1998) early work, argued that anthropomorphism is an important part of facilitating meaningful human-robot interactions.
The problem, then, is not that the potential for deceptive anthropomorphic projection exists; the problem is that we seem to be unable to decide whether such deceptions or simulations (to use a more “positive” term) are detrimental, useful, or both in the context of robots that are designed for social interaction. In other words, is what Duffy calls “the big AI cheat”—a cheat that is already definitive of the Turing Test and the target of Searle’s Chinese Room thought experiment—a problem that is to be avoided at all costs? Or is it a valuable asset and feature that should be carefully developed and operationalized?
In all of this, the determining factor is moral sentiment. As Torrance (2008) explains: “Thus it might be said that my ethical attitude towards another human is strongly conditioned by my sense of that human’s consciousness: that I would not be so likely to feel moral concern for a person who behaved as if in great distress (for example) if I came to believe that the individual had no capacity for consciously feeling distress, who was simply exhibiting the ‘outward’ behavioural signs of distress without their ‘inner’ sentient states.” Moral standing, on this account, is a matter of how I feel about others, my sense that they are conscious (or not). Everything, therefore, depends on outward appearances and the ability to jump the epistemological divide that separates apparent behavior from actual inner sentient states. And this is, as Torrance indicates, a matter of belief. Or as Duffy (2006, 34) explains:
With the advent of the social machine, and particularly the social robot, where the aim is to develop an artificial system capable of socially engaging people according to standard social mechanisms (speech, gestures, affective mechanisms), the perception as to whether the machine has intentionality, consciousness and free-will will change. From a social interaction perspective, it becomes less of an issue whether the machine actually has these properties and more of an issue as to whether it appears to have them. If the fake is good enough, we can effectively perceive that they do have intentionality, consciousness and free-will [emphasis in the original].
This fact, namely that “the fake is good enough,” has been experimentally tested and confirmed by a number of studies. Gray et al. (2007), for instance, demonstrated that the perception of mind takes place across two dimensions, agency and experience, and that different entities (adult human beings, babies, animals, gods, and robots) have different perceived levels of mind across these two dimensions. Specifically, the robot in the study, as Elizabeth Broadbent (2017, 642) points out, “was perceived to have little ability to experience but to have a moderate degree of agency,” thus suggesting “that robots are seen by most people to have at least some components of mind.” And Broadbent et al. (2013, 1) have shown, by way of a repeated measures experiment with human participants interacting with a Peoplebot healthcare robot, that “the more humanlike a healthcare robot’s face display is, the more people attribute mind and positive personality characteristics to it.” All of this is something like Searle’s Chinese Room in action. And, as with the famous thought experiment, the underlying problem is with the epistemological divide. Simulation is not the real thing. But for most human users, simulation is “good enough” and often trumps what we know or do not know about the real situation. Thus it is not so easy to resolve these questions without some kind of “faith-based” initiative. This is evident with both Descartes, who needed a benevolent god to guarantee that sensation accorded with and accurately represented an external reality, and Torrance, who finds it necessary to rely on feelings and beliefs.
Third, because what ultimately matters is how “we” see things, this proposal remains thoroughly anthropocentric and instrumentalizes others by turning them into mechanisms of our moral sentiment. According to Darling, the principal reason we need to consider extending legal rights to others, like social robots, is for our sake. This follows the well-known Kantian argument for restricting animal abuse, and Darling endorses this formulation without any critical hesitation whatsoever: “The Kantian philosophical argument for preventing cruelty to animals is that our actions toward non-humans reflect our morality—if we treat animals in inhumane ways, we become inhumane persons. This logically extends to the treatment of robotic companions” (Darling 2016a, 227–228). Or as she described it in a recent conversation with Shankar Vedantam of National Public Radio’s Hidden Brain podcast:
My sense is that if we have evidence that behaving violently toward very lifelike objects not only tells us something about us as a person but can also change people and desensitize them to that behavior in other contexts—so if you’re used to kicking a robot dog, are you more likely to kick a real dog—then that might actually be an argument, if that’s the case, to give robots certain legal protections the same way that we give animals protections, but for a different reason. We like to tell ourselves that we give animals protection from abuse because they actually experience pain and suffering. I actually don’t think that is the only reason that we do it. But for robots the idea would be not that they experience anything but rather that it’s desensitizing to us and it has a negative effect on our behavior to be abusive toward the robots. (Darling and Vedantam 2017)
And in a conversation with PC Magazine’s Evan Dashevsky (2017), Darling issues an even more direct articulation of this argument:
I think that there’s a Kantian philosophical argument to be made. So Kant’s argument for animal rights was always about us and not about the animals. Kant didn’t give a shit about animals. He thought “if we’re cruel to animals, that makes us cruel humans.” And I think that applies to robots that are designed in a lifelike way and we treat like living things. We need to ask what does it do to us to be cruel to these things and from a very practical standpoint—and we don’t know the answer to this—but it might literally turn us into crueler humans if we get used to certain behaviors with these lifelike robots.
Formulated in this fashion, Darling’s explanation makes a subtle shift from what had been explicitly identified as a Kantian-inspired deontological argument to something that sounds more like virtue ethics. “According to this argument,” as Coeckelbergh (2016, 6) explains, “mistreating a robot is not wrong because of the robot, but because doing so repeatedly and habitually shapes one’s moral character in the wrong kind of way. The problem, then, is not a violation of a moral duty or bad moral consequences for the robot; it is a problem of character and virtue. Mistreating the robot is a vice.” This way of thinking, although potentially expedient for developing and justifying new forms of legal protections, renders the inclusion of previously excluded others less than altruistic. It transforms animals and robot companions into nothing more than instruments of human self-interest. Extending rights to others, in other words, is not (at least not directly) about them. It is all about us and our interactions with other human beings. Or as Sabine Hauert summarizes it in the RobotsPodcast conversation with Darling: “So it is really about protecting the robots for the sake of the humans, not for the sake of the robots” (Darling and Hauert 2013).
A similar argument is articulated in Blay Whitby’s (2008) call for robotics professionals to entertain the question of robot abuse and its possible protections. Like Darling, Whitby is aware of the fact that the problem he wishes to address has little or nothing to do with the robotic artifact per se. “It is important to be clear,” Whitby (2008, 2) explains, “that the word ‘mistreatment’ here does not imply that the robot or intelligent computer system has any capacity to suffer in the ways that we normally assume humans and some animals have.” For this reason, “the ethics of mistreating robots” (Whitby 2008, 2) is primarily about human beings and only indirectly about the robot. “For present purposes,” Whitby (2008, 4) continues, “we are excluding any case in which the artifact itself is capable of any genuine or morally significant suffering. We are therefore concerned only with human (and possibly animal) moral consequences.” What is of concern to Whitby, therefore, is not how the robot might or could feel about being abused. That question is, for the present purposes, simply excluded from serious consideration. What does matter, however, is how robot abuse and the mistreatment of such artifacts might adversely influence or otherwise harm human beings, human social institutions, and even, by way of a parenthetical concession, some animals. As Alan Dix (2017, 13) accurately summarizes it: “Whitby called for professional codes of conduct to be modified to deal with issues of abusive behaviour to artificial agents; not because these agents have any moral status in themselves, but because of the potential effect on other people. These effects include the potential psychological damage to the perpetrators of abuse themselves and the potential for violence against artificial agents to spill over into attitudes to others.”
This anthropocentric focus continues in other investigations of robot rights, like Hutan Ashrafian’s (2015a) consideration of robot-on-robot violence. In his investigation, Ashrafian identifies an area of concern that he contends has been missing from robot ethics. All questions regarding robot ethics, he argues, have to do with interactions between humans and robots. What is left out of the mix is a consideration of robot-on-robot (or AIonAI) interactions and how these relationships could affect human observers. Ashrafian uses the example of animal abuse, i.e., dog fights, where the problem of dog-on-dog violence consists in what effect this display has on us. “It is illegal to encourage animals to fight or to harm each other unnecessarily. In such a situation an animal that physically harms another (other than for established biological and nutritional need in the context of their evolved ecosystem) is deemed to be inappropriate for human society” (Ashrafian 2015a, 33). The same problem could potentially occur with robots. “Similarly AIonAI or robot-on-robot violence and harmful behaviours are undesirable as human beings design, build, program and manage them. An AIonAI abuse would therefore result from a failure of humans as the creators and controllers of artificial intelligences and robots. In this role humans are therefore the responsible guiding agents for the action of their sentient and or rational artificial intelligences, so that any AIonAI abuse would render humans morally and legally culpable” (Ashrafian 2015a, 34).5
As with Darling, and following Kant, Ashrafian’s argument is that robot-on-robot violence could be just as morally problematic as animal abuse not because of the way this would harm the robot or the animal, but for the way it would affect us—the human beings who let it happen and are (so it is argued) diminished by permitting such violence to occur. “As a result,” Ashrafian (2015a, 34) concludes, “the prevention of AIonAI transgression of inherent rights should be a consideration of robotic design and practice because attention to a moral code in this manner can uphold civilisation’s concept of humanity.” For this reason, what appears (at least initially) to be an altruistic concern with the rights of others is in fact a concern for ourselves and how we might feel in the face of watching others being abused—whether those others are an animal or a robot.
Promoting good AIonAI or robot-robot interactions would reflect well on humanity, as mankind is ultimately the creator of artificial intelligences. Rational and sentient robots with comparable human intelligence and reason would be vulnerable to human sentiments such as ability to suffer abuse, psychological trauma and pain. This would reflect badly on the human creators who instigated this harm, even if it did not directly affect humanity in a tangible sense. In actuality it could be argued that observing robots abusing each other (as in the case above) could lead to psychological trauma to the humans observing the AIonAI or robot-on-robot transgression. Therefore the creation of artificial intelligences or robots should include a law that accommodated good AIonAI relationships. (Ashrafian 2015a, 35–36)
In other words, the reason to consider the rights of individual robots is not for the sake of the robot, but for our sake and how observing the infringement of those rights by way of robot abuse could have harmful effects on human beings.
A similar argument was developed and deployed in David Levy’s “The Ethical Treatment of Artificially Conscious Robots” (2009). In fact, his argument for “robot rights” is virtually identical to Kant’s formulation of “indirect duties” to animals, although Levy, like Whitby, does not explicitly recognize it as such:
I believe that the way we treat humanlike (artificially) conscious robots will affect those around us by setting our own behaviour towards those robots as an example of how one should treat other human beings. If our children see it as acceptable behaviour from their parents to scream and shout at a robot or to hit it, then, despite the fact that we can program robots to feel no such pain or unhappiness, our children might well come to accept that such behaviour is acceptable in the treatment of human beings. By virtue of their exhibiting consciousness, robots will come to be perceived by many humans, and especially by children, as being in some sense on the same plane as humans. This is the reasoning behind my argument that we should be ethically correct in our treatment of conscious robots—not because the robots would experience virtual pain or virtual unhappiness as a result of being hit or shouted at. (Levy 2009, 214)
For Levy what really matters is how the social circumstances and perception of the robot will affect us and influence the treatment of other humans. “Treating robots in ethically suspect ways will,” as Levy (2009, 215) concludes, “send the message that it is acceptable to treat humans in the same ethically suspect ways.” The concern, therefore, is not with the robot per se but with the artifact as an instrument of human sociality and moral conduct. Levy’s efforts to “think the unthinkable” and to take seriously the claim that “robots should be endowed with rights and should be treated ethically” (Levy 2009, 215) turn out to be just one more version of instrumentalism, whereby the robot is turned into a means or tool of our moral interactions with each other, and “robot rights” becomes a matter of the correct use of the robot for the sake of human moral conduct and instruction.
Finally, these various arguments only work on the basis of an unquestioned claim and assumption. As Whitby (2008, 4) explains: “the argument that the mistreatment of anything humanlike is morally wrong subsumes a number of other claims. The most obvious of these is that those people who abuse human-like artefacts are thereby more likely to abuse humans.” The indirect duties argument—like that utilized by Darling, Levy, and others—makes a determinist assumption: namely, that the abuse of robots will (the hard determinist position) or is likely to (the weaker version of the same) cause individuals to behave this way with real people and other entities, like animals. If this sounds familiar, it should. “This claim that ‘they might do it for real’ has,” as Whitby (2008, 4) points out, “received a great deal of attention with respect to other technologies in recent years. The technology most relevant to the present discussion is probably that of computer games.” And, in an earlier essay on virtual reality (VR), Whitby spells out the exact terms of the debate:
This argument [“they might do it for real”] suggests that people who regularly perform morally reprehensible acts such as rape and murder within VR are as a consequence more likely to perform such acts in reality. This is certainly not a new departure in the discussion of ethics. In fact the counter argument to this suggestion is at least as old as the third century BC. It is based on the Aristotelian notion of catharsis. Essentially this counter argument claims that performing morally reprehensible acts within VR would tend to reduce the need for the user to perform such acts in reality. The question as to which of these two arguments is correct is a purely empirical one. Unfortunately, it is not clear what sort of experiment could ever resolve the issue … There is little prospect in resolving this debate in a scientific fashion. (Whitby 1993, 24)
In this passage, Whitby identifies something of an impasse in video game and virtual worlds scholarship, and his conclusion has been supported and verified by others. In a meta-analysis of studies addressing virtual violence in video games, John Sherry (2001 and 2006) found little evidence to support either side of the current debate. “Unlike the television controversy, the existing social science research on the impact of video games is not nearly as compelling. Despite over 30 studies, researchers cannot agree if violent-content video games have an effect on aggression” (Sherry 2001, 409). Whitby, therefore, simply extends this undecidability regarding violent behavior with virtual entities in computer games to the abuse of physically embodied robots.
From the example of computer games we can draw some conclusions relevant to the mistreatment of robots. The first is that the empirical claim that such activities make participants more likely to “do it for real” will be highly contested. The competing claim is usually that the Aristotelian notion of “catharsis” applies (Aristotle, 1968). This entails that by doing things to robots, or at least in virtual ways, the desire to do them in reality is thereby reduced. The catharsis claim would be that mistreatment of robots reduced the need for people to mistreat humans and was therefore morally good. (Whitby 2008, 4)
This certainly complicates the picture by opening up considerable indeterminacy regarding the social impact and effect of robot abuse. Can we say, for instance, that the mistreatment of robots produces real abuse, as Darling, Levy, and others have argued, thereby justifying some level of protection for the artifact or even restrictions against (or, more forcefully stated, the potential criminalization of) the act? Or could it be the case that the exploitation of robots is, in fact, therapeutic and cathartic, producing the exact opposite outcome and justifying the abuse of robotic artifacts as a means of defusing violent proclivities and insulating real living things, like human beings and animals, from the experience of real violence? At this point, and given the available evidence (or lack of evidence, as the case may be), there is simply no way to answer this question definitively.
To make this more concrete (using what is arguably an extreme, yet popular, example), one may ask whether raping a robotic sex doll does in fact encourage or otherwise contribute to actual violent behavior toward women and other vulnerable individuals, as Richardson (2016a and 2016b) has suggested.6 Or whether such activity, directed toward and perpetrated against what are arguably mere artifacts, actually provides an effective method for defusing or deflecting violence away from real human individuals, as Ron Arkin had reportedly suggested with the use of childlike sex robots for treating pedophilia (Hill 2014).7 “There are,” as Danaher (2017, 90) summarizes it, “three possible extrinsic effects that are of interest. The first would be that engaging in acts of robotic rape and child sexual abuse significantly increases the likelihood of someone performing their real-world equivalents. The second would be that engaging in acts of robotic rape and child sexual abuse significantly decreases the likelihood of someone performing their real-world equivalents. And the third would be that it has no significant effect or an ambiguous effect on the likelihood of someone performing their real-world equivalents.” The problem—a problem already well documented in studies of video game violence—is that these questions not only remain unresolved but would be rather difficult, if not impossible, to study in ways that are both scientifically sound and ethically justifiable. Imagine, for instance, trying to design an experiment that could pass standard IRB scrutiny if the objective were to test whether raping robotic sex dolls would make one more likely to perform such acts in real life, provide a cathartic release for individuals with a proclivity for sexual violence, or have no noticeable effect on individual conduct.8
In the end, therefore, Darling’s argument is considerably less decisive and less provocative than it initially appears or than it has been advertised to be. Her claim is not that robots should have rights or even that we should grant them legal protections, despite the fact that one of her essays is titled “Extending Legal Protection to Social Robots.” In fact, when Sabine Hauert pushed the issue and asked her directly, “Should we be protecting these robots and why?” (Darling and Hauert 2013), Darling responded by retreating and trying to recalibrate expectations:
So I haven’t conclusively answered the question for myself whether we should have an actual law protecting them, but two reasons we might want to think about enacting some sort of abuse protection for these objects is first of all because people feel so strongly about it. So one of the reasons that we do not let people cut off cat’s ears and set them on fire is that people feel strongly about that type of thing and so we have laws protecting that type of abuse of animals. And another reason that we could want to protect robotic objects that interact with us socially is to deter behavior that could be harmful in other contexts. So if a child doesn’t understand the difference between a Pleo and a cat, you might want to prevent the child from kicking both of them. And in the same way, you might want to protect the subconscious of adults as well or society in general by having us treat things that we tend to perceive as alive the same way we would treat living objects. Also another example to illustrate this is often in animal abuse cases—when animals are abused in homes—when this happens this will often trigger a child abuse case for the same household, if there are children, because this is behavior that will translate. (Darling and Hauert 2013)
Consequently, Darling’s argument is not that robots should have rights or should be extended some level of legal protection. She is not willing to go that far, not yet at least. Her argument is more guarded and diffident: given the way human users tend to anthropomorphize objects and project states of mind onto seemingly animate things like robots and other artifacts, and given the possibility that the abuse of robots could encourage human beings to mistreat other living things (although this has yet to be proven one way or the other), we should perhaps begin “to think about enacting some sort of abuse protection for these objects.”
Darling puts forth what appears to be one of the strongest cases in favor of robot rights. Her proposal is provocative precisely because it appears to consider the social status of the robot in itself, as both a moral and legal subject. She therefore recognizes and endeavors to contend with something that Prescott (2017, 144) had also identified: “We should take into account how people see robots, for instance, that they may feel themselves as having meaningful and valuable relationships with robots, or they may see robots as having important internal states, such as the capacity to suffer, despite them not having such capacities.” But this effort, for all its promise, turns out to be both frustrating and disappointing.
It is frustrating because Darling’s arguments rely mostly on anecdotal evidence, i.e., stories drawn from the news and from other researchers, along with results from what are admittedly less-than-scientific demonstrations. This means that her argument—even if it is intuitively correct—remains at the level of personal sentiment and experience. This is not to say that she needs to design and perform the experiments necessary to prove her anthropomorphism hypothesis herself. All she would need to do is leverage the work already available in the existing literature, adding a legal or moral dimension to findings that have already been reported by others. Without being grounded in scientific studies—studies that can be repeated and tested—the evidence she utilizes risks undermining her own argument and recommendations. And it is disappointing because just when you think she is going to seriously consider robot rights, she pulls her punches and retreats to a rather comfortable Kantian position that makes it all about us. For Darling, robots are, in the final analysis, just instruments of human sociality, and we should treat them well for our own sake.