Androids: Artificial Persons or Glorified Toasters?

Joe Slater

You have a moral status. By this, I mean that you matter. For example, there are things it would be wrong for you to do. There are also things it would be wrong for me to do to you. You have rights. What makes you the type of thing with this moral status? Is it just being human? Does the creature discovered by the crew of the Nostromo in Alien have moral status? Can it be wronged? Maybe you think that even though we might reasonably hate those creatures, and certainly not want to be anywhere near them, it would still be wrong to torture them. And what about synthetics? Do androids like Ash in Alien have rights?

Philosophers have tried to answer this type of question in several ways. In this chapter, we’ll look at a few of these different ways, thinking about some cases that might be surprisingly difficult to explain, like why babies matter, whether animals have moral status, and what we should think about synthetics (or “artificial persons,” as Bishop prefers to be called) in this regard.

“He was programmed to protect human life”

One way we might try to answer these questions is by saying that only human beings have moral status. This is a nice, simple answer to the question. Because this rule for moral status doesn’t depend upon intelligence or ability, it lets us say that even human babies (who are notoriously not very bright!) matter, morally speaking. That’s great, because generally speaking, we do think it’s wrong to go around kicking babies.

Thinking about it this way would have some consequences we might not like, however. If all that matters in being “human” is having the right genetic code, then it seems that a human embryo should be considered as having the same status as a fully grown adult. In a similar vein, a human being who is completely brain‐dead, but still alive, would deserve moral consideration. While many people do think embryos and those in persistent vegetative states have some rights (and thus have some kind of moral status), whether their status is equal to that of a normal adult is controversial.

Another type of problem case we might consider involves categorizing what is and what is not a human being. In Alien: Resurrection, for instance, Call notes that the resurrected Ripley isn’t human. “Wren cloned her because she was carrying an alien in her,” Call explains. The Ripley clone seems to be a hybrid—mostly human, but part alien as well. If she gets moral status by being very similar to human beings, how far does that extend? Does the Xenomorph that’s born at the end of Resurrection, a creature that possesses some human DNA—and recognizes Ripley as its mother—fit into the moral community too?

These cases—embryos, the brain‐dead, and human–alien hybrids—might give us reason to reconsider the view that it’s only human beings that matter. There are also things that seem to warrant moral consideration which wouldn’t be included in that formulation, such as animals. Whether or not we’re pet owners or vegetarians, presumably we think certain animals matter in our decision‐making. Ripley goes to extreme lengths to save Jones the cat in Alien. This must be because she thinks Jones matters somehow, and that it would be bad if Jonesy was to be killed.

Australian philosopher Peter Singer argues that it is speciesist to treat human beings as the only things worthy of moral status.1 He thinks speciesism, like racism and sexism, is morally abhorrent. While most people in the Western world have acknowledged that racism and sexism are bad, Singer thinks we haven’t properly appreciated the moral status of other species. It’s difficult to see why it’s justified to say that humans are special just because they’re human beings; it just seems arbitrary. The charge of speciesism seeks to expose a bias underlying our moral thinking and to make us think about what facts really matter when asking whether it’s okay to treat someone (or something) a certain way. We can imagine some intelligent animal or alien species—perhaps even the ones in the Alien films—or forms of synthetic life that would have some moral rights. So simply drawing the line at whether something is human doesn’t seem to do the job.

Autonomy

Instead of simply belonging to a species, maybe there is some special ability or trait that makes something matter morally. We could then say it’s forbidden to treat anything that has that trait or ability in certain ways. Animals, aliens, or androids might be in the class of things having moral status. Autonomy is one trait some philosophers, including Immanuel Kant (1724–1804), have thought is a good candidate for conferring moral status. It is difficult to define autonomy, but for now we can think of it as the ability to consider your options, evaluate them, and freely choose how to act on that basis.

Kant thinks autonomy determines moral status because he thinks that the only thing that’s good in itself (rather than only good as a means to other good things) is a good will. We can think of this as good intentions. You act morally, according to Kant, if you can frame and make yourself obey moral laws, which you can only do if you have autonomy. One reason we might treat autonomy as the supreme criterion of moral status is the role it plays in fixing the meaning of other moral ideas. Advocates of autonomy suggest, for example, that you can’t see other moral agents as intrinsically valuable unless you’re autonomous yourself. With this understanding, we don’t need to know whether the Ripley‐8 clone from Alien: Resurrection is human or not in order to know whether she matters morally. If she can consider options for her future and choose to do the right thing, then she matters. The same goes for synthetics and “artificial persons.” If Bishop is autonomous (for example), then it would be wrong for us as moral agents to use him just as a tool.

This account seems to have problems, though. Not only does the reliance on autonomy exclude ordinary, non‐autonomous animals from moral consideration, but also it leaves out babies. Babies don’t deliberate about their options and choose rules to follow when they do stuff. Those who accept autonomy as an ultimate moral principle suggest that there isn’t anything intrinsically wrong about treating babies or animals in ways that might be forbidden to treat autonomous moral agents. Instead, they say, the reason it’s wrong to hurt babies or animals is because of something it does to us moral agents. Going around torturing babies might be bad because it’s bad for their parents, or because it makes us vicious and therefore more likely to ignore or violate moral laws.

Another approach that retains the moral status of babies is to count anything with the potential to become autonomous as having moral status. While this would keep babies in the mix, it would also force us to count embryos as having moral status, and maybe even individual sperm cells or eggs because they all seem, in one way or another, to have the potential to become autonomous beings. Perhaps more problematically, this type of view can’t account for why animals matter. You don’t have to be a vegetarian to think it’s wrong to be arbitrarily cruel to animals. Generally, all of us think animals have some moral standing. When Ripley saves Jones the cat in Alien, there is some reason to do so (I think she actually goes way too far, risking her life a lot. Come on, Ellen—it’s just a cat!). Similarly, when Murphy’s dog Spike dies in Alien 3, it’s not completely crazy to be sad about that. And it does seem like a serious limitation for a moral theory if it can’t explain at all how the lives and suffering of animals matter.

“Because pain hurts”

Another obvious candidate for what confers moral status is the capacity to experience pain and pleasure. In ancient Greece, the followers of Epicurus held this type of view. More recently, Jeremy Bentham (1748–1832) defended it when he outlined the ethical position of utilitarianism. Considering why creatures have rights, he asked:

Is it the faculty of reason, or perhaps the faculty of discourse? But a full‐grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose the case were otherwise, what would it avail? The question is not, Can they reason? nor, Can they talk? but, Can they suffer?2

This probably strikes most of us as pretty sensible. The reason it’d be wrong to kick Jones the cat, or babies, or you or me is that we experience pain, and as Ripley reminds Johner when advising him to get away from her—“pain hurts” (Alien: Resurrection). That by itself gives us reasons to avoid it.

For traditional utilitarians like Bentham, the only things that matter to morality are pain and pleasure. So, anything that can experience pain or pleasure has moral status. This neatly includes normal human adults, babies, and most animals. It clearly doesn’t include plants, brain‐dead people, or embryos. One difficulty we might find is whether to include synthetic beings. In Alien, after being decapitated, the android Ash shows no sign of being in pain. Indeed, what would be the point in making synthetics that experience pain? We might note that there is a difference between experiencing pain and noticing that you’re damaged in some way. While the former is a sensation that makes you want to alleviate it as soon as possible, the latter is simply a diagnostic recognition. If you’re anesthetized, you can notice that your leg is broken, but you won’t feel pain. We might well imagine that synthetics are like that.

As it happens, Bishop does seem to be able to experience pain. When Ripley manages to reactivate his mangled torso in Alien 3, he says, “I hurt. Do me a favor. Disconnect me.” If we’re to take that literally, then synthetics—at least of Bishop’s type—would have moral status according to Bentham’s picture. That would suggest that we could have obligations to androids, but the idea sits uneasily with how synthetics are actually treated in the Alien universe. After all, Bishop is ordered around like a slave, and Burke says “it’s been policy for years to have a synthetic on board.”

For the sake of argument, I’m going to assume that synthetics don’t really experience pain. Maybe Bishop was speaking figuratively. In any case, the type of synthetic life that doesn’t experience pain—which does seem possible and may be the case for Ash—offers an interesting problem. Do beings like that just not matter, morally speaking, at all?

Agents and Patients

Up until now, we’ve just looked at one kind of moral status. While the fact that it’s wrong to treat you in certain ways means you’ve got moral status, we might also think there’s another important type of moral status. A normal human being can both be wronged and commit wrongs. This points to an important distinction. On one hand, there are things that can act morally or not, things that can act rightly or wrongly. Philosophers call these moral agents. On the other hand, there are things that can have moral acts done to them, that can be wronged or treated rightly. These are moral patients.

According to this view, normal human beings are both moral agents and moral patients, but babies are not agents—babies don’t seem to be able to act in ways that are good or evil. Yet it seems to most of us like it’s wrong to do certain things to them. The same goes for most animals.

We might think that the two answers to the question of moral status we’ve looked at so far—autonomy and the ability to feel pleasure or pain—address these separate moral situations of agency and patiency. The ability to analyze potential choices and select among them in moral terms seems like just the sort of capacity a creature that can act rightly or wrongly should have. If you have no choice about what you do, you can’t be blamed for what you do. If the Xenomorphs always act purely on instinct, not evaluating options and choosing freely from among them, then they don’t appear to be moral agents. They might be more like predatory creatures such as lions or snakes. It doesn’t make much sense to blame them for acting the way they do.

As for moral patiency, we might think that just being able to experience pain or pleasure suffices for this. The fact that something would feel pain if you act a certain way toward it does seem to give you a good reason not to do it, and the fact that something would get pleasure if you do something for it is a reason in favor of acting so. Of course, you might still correctly judge that inflicting pain on (or killing) moral patients is sometimes the right thing to do. Sometimes you might have to nuke a colony full of aliens from orbit, just to be sure, because the alternatives are even worse.

“It’s a robot! A goddamn droid!”

Where would this leave our synthetics? If we think they don’t really experience pleasure or pain, they aren’t moral patients on this view. Perhaps they’re just moral agents. Synthetics do seem to weigh options and decide between them. If that’s right, they can wrong others, as Ash does when he screws over the crew of the Nostromo; they can’t be wronged, though.

This also doesn’t seem quite right, for this reason: the notion of pain and pleasure we’ve been talking about so far is simply physical pain or pleasure. Being stabbed or shot, or enjoying food or sex are examples. Even if a synthetic can’t experience that sort of thing, why should physical pain and pleasure be all that matters? A lot of human suffering or joy comes from other, nonphysical sources. In Aliens, when Ripley comforts Newt, who’s struggling to get to sleep, it’s obvious that Newt can suffer from having bad dreams. However, as Newt reminds us, her doll Casey can’t suffer because “she’s just a piece of plastic.” Having a bad dream isn’t a physical pain, but it’s definitely bad for whoever goes through it. Surely it’s that sort of difference—what separates things with preferences, likes and dislikes, from mere objects like plastic dolls—that matters when you consider whether it’s right or wrong to act toward a moral patient.

So, do synthetics have this feature? Do they have hopes or dreams, desires or predilections? It seems like they do. Even if Bishop can’t feel pain, he still seems to have interests of some sort. He makes this clear when he reluctantly volunteers to leave his relative safety to remote‐pilot the ship, saying, “I’d prefer not to. I may be synthetic but I’m not stupid.” In Alien 3, he expresses a desire to be turned off, saying, “I could be reworked, but I’ll never be top‐of‐the‐line again. I’d rather be nothing.” Perhaps this means that he has some pride, and that if his desire to be top‐of‐the‐line can’t be satisfied, he’d prefer not to go on at all. Even Ash seems to have desires or preferences, perhaps most obviously borne out in his weird fetish for the alien: “The alien is a perfect organism. Superbly structured, cunning, quintessentially violent… How can one not admire perfection?”

Though we probably don’t share all the desires of synthetics, desires like these are the sort of thing that gives a life meaning. A more modern version of utilitarianism, called preference utilitarianism, focuses on exactly these sorts of things: the interests of beings. Preference utilitarianism says we should do whatever satisfies the most preferences, whether they are the preferences of animals, human beings, or androids. However, one needn’t take such a strong view as this; it could be said that just having preferences makes you a moral patient. You might want to say, for example, that although they were all moral patients, the interests of Lambert, Parker, and Ripley were more important than those of Jones the cat because of the kinds of interests they are, or because of whether they are good things to be interested in.

When we think about interests this broadly, it seems that we can have duties toward synthetics. That might not seem like a big deal right now, but if true artificial intelligence (AI) is invented in the not‐too‐distant future, it is an issue that will need serious consideration. On this broader understanding of interests, synthetic life‐forms can count as moral patients. Perhaps, like us, Ash and Bishop would then count as both moral agents and patients. We might, however, still doubt that androids can really be moral agents. It might be strange to think of an android as a moral agent, considering that it is simply a computer program running through a piece of hardware. As Bishop jokes, he’s “just a glorified toaster.” It seems very strange to think that something running on your laptop could be morally responsible for its outputs.

Actually, it’s not that clear that Bishop would count as autonomous. It’s important for autonomy that you are actually able to choose what to do. Bishop makes it very clear that his programming limits that choice. After he hears about Ash’s turning against the crew of the Nostromo, he tries to explain that it couldn’t happen with him: “That could never happen now with our behavioral inhibitors. It is impossible for me to harm or, by omission of action, allow to be harmed, a human being.” This removes some of the choices he might otherwise make, which could be seen as a threat to his autonomy. Autonomy, it seems, requires free will, and unbreakable rules that restrict your choices limit the freedom of will.

But is this really so different from us? People have psychological conditions that stop them doing certain things: paralyzing phobias, depression, or just treating some actions as unthinkable. These might make it effectively impossible for you to do something, just as Bishop couldn’t do anything to hurt humans because of his own limitations. So, is there any real difference—one that should make a difference in how it’s permissible to treat them—between artificial persons and human beings? One challenge is identifying when a system can actually have interests, and when it is able to consider options for the future. Clearly a toaster can’t do these things, but unless there’s something truly special about human beings that can’t be replicated by a machine, one day we are likely to create artificial persons. If what I’ve speculated about here is correct, it would be wrong to treat them with cruelty, or as tools or slaves, as seems to happen in Alien and Aliens. But then, what else would you expect from the Weyland‐Yutani Corporation?

Notes