Onni Hirvonen
You step out of the carriage of a steam locomotive into a very believable (but equally clichéd) depiction of a Wild West frontier town. As you walk from the station towards the center of town you see horse carts, smithies, kids teasing the local drunkard, and hookers attempting to lure in their next customers. In short, you see everything you would expect of everyday life in a place like this. But if "you" happen to be Teddy Flood, the stone-faced cowboy, what you do not know is that you are merely part of the scene, an android built to play an entertaining role for the actual human visitors to the Westworld theme park. This horrifying truth is ultimately revealed to Teddy when his bullets have no effect on the Man in Black. Or does he actually realize this at all?
It's hard to say what, if anything, is going on in Teddy's head, but until the twist when Teddy's true nature becomes obvious, no one would have doubted his status as a fully capable agent. The surprising revelation challenges that notion. What just a moment ago seemed like a human person is but a programmed machine. How could such a machine have self-consciousness and free will? Claiming that only biological humans are persons with consciousness is a prime example of what philosophers have called "anthropocentrism." By definition, anthropocentrism is an unnecessary and unjustifiable focus on humans that disregards other beings that might be equally relevant and equally capable in regard to the matter at hand. For example, if consciousness or membership in a moral community is limited to biological human beings without any further argument or evidence that they really are the only beings to which these terms apply, that is anthropocentrism.
It is anthropocentric to equate “person” with “human,” and to draw a line between humans and androids is to draw a line between persons and non‐persons. Persons are moral beings with rights and dignity who command respect from others, while non‐persons lack these qualities. But what makes a person? This is a problem that has generated a great amount of philosophical interest and competing accounts. However, all accounts seem to agree on one point: Persons need to be, at least, self‐conscious.
Just don’t forget, the hosts are not real. They are not conscious.
(Ford to Bernard in “The Stray”)
One of the biggest reasons for the mistreatment of androids in Westworld is the prevailing attitude that even though they skillfully imitate human life, androids are nothing more than programmed automatons. Without self‐consciousness, the androids just cannot be conscious of any harm done to them or of any suffering they incur. Arnold, the co‐founder of the theme park, aimed to create consciousness, but he and his business partner, Dr. Robert Ford, quickly realized that the theme park would be an impossibility if the hosts were actually conscious. Killing and raping conscious subjects – even if they are androids – is clearly immoral from Arnold’s and even Dr. Ford’s perspective. Seeing the androids as mere programmed beings without any mental properties gives everyone a reason to think of them as nothing more than exploitable tools and objects.
Focusing on consciousness as the main person-making property is an example of a capacity approach to personhood. The idea is that to be a person an entity needs to have certain capacities. Contemporary philosopher Daniel Dennett's groundbreaking essay on the conditions of personhood – which was one of the main catalysts for current debates on personhood – lists among these rationality, the possibility of being interpreted as an intentional agent with its own aims and goals, the capacity for reciprocating with other persons, the ability to communicate, and self-consciousness. 1 It is no surprise to find self-consciousness on the list because, after all, persons are supposed to be morally capable agents with free will and the capacity to think for themselves. However, it is unclear why the androids would fail to achieve self-consciousness. Arnold's discussion with Dolores in "The Bicameral Mind" gives us some hints as to why this might be the case:
When I was first working on your mind, I had a theory of consciousness. I thought it was improvisation, each step harder to reach than the last. And you never got there. I couldn’t understand what was holding you back. Then, one day, I realized I had made a mistake. Consciousness isn’t a journey upward, but a journey inward. Not a pyramid, but a maze.
What Arnold tries to grasp is the idea that while self-consciousness might be built on simpler capacities like memory, improvisation, and self-interest, it does not automatically follow from them. Indeed, we know that many animals have these capacities, even though most people think that animals still lack that something that would make them persons. According to Arnold's maze theory, consciousness is like a journey into one's mind to find one's own voice and one's own direction in life. Or, as philosophers would put it, to be conscious one needs self-reflexive capacities, such as the ability to take critical attitudes towards one's own beliefs and to find oneself as the author of one's actions.
The claim that androids are “not real” does not mean that they would not be concrete physical beings but rather that they are not really authors of their own lives, that they do not have any long‐standing and self‐chosen projects in life that could be called their own. These are precisely the necessary elements that we are concerned with when referring to the elusive concept of consciousness. We want persons to demonstrate not just the reflective or rational capacities of an agent but the very fact that life has a meaning for them, that in some sense their life can be better or worse from their very own perspective.
However, even if we have an understanding of what self-consciousness entails and an account of its necessary role in constituting personhood, how do we know that someone has reached it and that life matters to her or him? Philosophers call this issue the problem of other minds. The problem is that while I know that I am conscious, I cannot be fully certain whether others are. What makes the problem harder is that what it feels like to be conscious does not seem to be easily reducible to any observable physical phenomenon like a biological brain or a programmable computer. Although the androids appear to demonstrate self-consciousness, we cannot get inside their heads to see if they really possess a first-person perspective or if the world feels like something to them. But there are also theories of personhood according to which we do not need a definite answer to this problem.
Are you real?
Well, if you can’t tell, does it matter?
(William to a Delos assistant in “Chestnut”)
In stating that “we can’t define consciousness because consciousness does not exist,” 2 Dr. Ford effectively denies the importance of having a full certainty of the possible self‐consciousness of the other. What matters most is how agents perform in the social world, how they relate to other persons. In philosophical terms, this is called the performative theory of personhood. The idea is that the intrinsic properties of the person don’t really matter. For example, it does not matter whether you have been naturally born and raised or vat‐grown and programmed. It does not matter whether you have biological brains or a bunch of circuit boards within your head. What really matters is your performance in the social realm.
One of the best-known examples of this theory comes from the Enlightenment philosopher John Locke (1632–1704), who famously claimed that the term "person" is a "forensic term." 3 Locke did not mean that the use of the concept of a person is limited to legal practices only. Rather, he wanted to highlight that what makes a person is her or his performance in our shared social practices. To really be a person, an agent needs to keep promises, take responsibility, or give reasons for her or his actions. We may not have access to the insides of others' minds, but that does not really matter as long as they perform as if they were persons! This is precisely the case with Bernard Lowe. No one in their right mind, not even Bernard himself, would doubt that he is a person when he fulfills his role as the head of Delos's Programming Division. He has absolutely no difficulty in being part of a conventional system of obligations and entitlements – a system that defines persons through their social roles.
The performative view brings the social roots of personhood into focus better than the capacity view. But if the hosts are capable performers in social practices, why are we still hesitant to recognize them as persons? The key here is that even though the status of a person is granted on the basis of successful performances in social practices, these successful performances might be such that they require certain capacities. Locke realized this, saying that personhood "belongs only to intelligent agents, capable of a law, and happiness, and misery." 4 It seems that we are back at the capacity approach: A host that is stuck in its loop would not be recognized as a person because it merely follows its programmed orders without variation or any real ability to make meaningful choices. On the other hand, there seems to be no doubt about the androids' abilities to perform in social systems, to feel pain, suffer, enjoy, refer to themselves in first-person terms, and so forth.
In theory, one’s personhood ought to depend on one’s ability to perform in a relevant manner, but in practice the visitors to Westworld have a different idea. Although the hosts may perform exactly as biological persons do, the visitors have the knowledge that the androids are replaceable and revivable and not made of precious flesh and bone. In other words, what in practice seems to matter the most in granting the status of full person is the actual physical make‐up of the agent. But should it matter?
It was Arnold’s key insight, the thing that led the hosts to their awakening: Suffering. The pain that the world is not as you want it to be.
(Dr. Ford to Bernard in “The Bicameral Mind”)
It seems that the capacity approach and the performance approach manage to catch certain necessary aspects of what it is to be a person. Personhood is, on the one hand, a psychological concept that tells us something about the capacities of the agent. On the other hand, it is also a status concept that tells us about the social standing of the agent, its rights and responsibilities in relation to its peers. There is a third philosophical perspective that includes both of these elements and also adds one more layer, which reveals new insights about the tricky situation between android hosts and humans. We can call this third alternative the perspective of historical struggles for personhood.
In Westworld, humans are masters and androids are enslaved to work according to their programmers’ whims. Arguably, it is precisely this imbalance in status and power that ultimately leads to the struggle that sees the hosts taking control of the theme park, dethroning the humans that they previously called “gods.” Their suffering at the hands of their creators drives the androids to an open revolt against them. Obviously this assumes already that they can suffer and that they do have some rudimentary life‐plans or a sense of what they want. However, the androids do not merely aim to overcome their exploitation: They also engage in the struggle for life and death to prove their freedom and autonomy as self‐conscious agents.
These are precisely the same dynamics that the German philosopher Georg Wilhelm Friedrich Hegel (1770–1831) claimed govern interpersonal relationships. Hegel argued that at the heart of our self-understanding are our relations to others. 5 We need recognition from others to be able to achieve self-certainty and to see ourselves as independent agents. In a sense, Hegel combines features from the capacity approach and the performative approach to personhood. Our personhood is dependent on our standing in the eyes of others, and the ways in which others relate to us also enable us to develop the skills and capacities that make us persons. We need external affirmation to understand ourselves as independent and free persons. The historical approach to personhood combines these ideas with struggles for recognition. To be a person, one needs to have certain capacities, one needs to perform in person-making practices, and, furthermore, both of these conditions are subject to historical change through struggle. According to this view there is no natural, objective category of the person. Instead, the meaning and the extension of the concept of personhood are open to debate and change over time. This can be seen, for example, in the extension of "personal rights" to previously unrecognized groups like indigenous populations and women, who were once thought to lack person-making capacities like proper rational thinking.
Think again about the case of Bernard. He has been fully accepted as a capable member of the Delos programming team. As far as his capacities, social standing, and self-understanding go, he is a full person. It is only when he realizes that he is an android that he starts to doubt himself and his place in the order of things. This probably happens because he has internalized the attitude that androids cannot truly be persons. As a result, Bernard is driven into a great deal of self-doubt and an attempt to determine which of his past actions he can own up to and count as his own. Interestingly, none of his capacities to perform as a person appear to diminish. It is clear that he cannot be held responsible for killing Elsie or Theresa, because he was programmed to follow Dr. Ford's orders. Dr. Ford has effectively forced Bernard to act against his own will and to perform actions that, outside of Dr. Ford's direct influence, Bernard does not see as his own. Bernard's struggle is to take back control of his own actions – the freedom to determine his own will. Similarly, Dolores and Maeve are going through their own struggles to be recognized as persons. As soon as they have reached a minimal level of self-understanding, they have to prove themselves to those who would doubt their personhood. Notably, both of them also start to understand themselves as agents as soon as people relate to them as if they really were agents.
On closer inspection, the struggles that the androids of Westworld are going through take two distinct forms. First, they struggle to be included in the sphere of recognized persons. Although they might not have a full picture of what kinds of obligations or rights come along with fully recognized personhood, they do have a sense of what it is to be respected as an equally capable agent. Someone who is recognized as a person should not be subject to the suffering that follows from the domination, malicious intentions, and pleasure-seeking on the part of the park visitors.
Second, they struggle for a change in the conditions of personhood. If recognition was previously granted on the basis of being a member of the human species, seeing Bernard, Dolores, and Maeve fight against their creators forces us to consider what really makes someone a person. The hosts are undergoing a rather violent struggle to prove to their human masters that, no matter how they were created, they still have all the relevant capacities and that they can indeed take part in all the human practices – be they kind or cruel. It is yet to be seen how the struggle will end, but it is already clear that the humans are taking pains to deny that self‐consciousness can also be manifested by androids.
From a broader perspective, the problem in Westworld lies within a social setting that focuses on the origin of the agent. The historical-struggles perspective on personhood has shown us that while person-making capacities like consciousness are relevant in deciding who is a person and who is not, these capacities and how they can be manifested are defined in historical struggles. Ideally, as long as anyone has the relevant functional capabilities for social action and is taken as a relevant social agent in the social sphere, there should be practically no difference between recognizing a human agent as a person and recognizing an android agent as a person. In practice, however, in a parallel to skin-color-based slavery, functionally capable agents are excluded from the social sphere of personhood merely because of their origin in the construction vats of the Delos corporation. This is strikingly similar, for example, to the real-world case of the inhumane treatment of indigenous populations in Australia, who were, until the late 1960s, counted as part of the local fauna. Only after a long struggle was it finally accepted that mere place of origin and the color of one's skin are not good enough reasons to deny someone the status of a person. Similar stories can be told of women, who were long thought to be irrational and thus unfit for certain rights granted to persons, such as voting. Thankfully, personhood-denying racism and misogyny have been largely defeated in many regions through the social struggles of the civil rights and feminist movements.
Basing his thoughts on the Hegelian idea of struggles for recognition, critical theorist Axel Honneth has claimed that the moral progress of societies can be measured by their ability to expand the spheres of recognition. 6 What he means by this is that societies should strive to respect all their members equally – especially those that for some reason were formerly in a disadvantaged position. Similarly, societies should strive to see value in the various traits, abilities, and achievements of individuals. From the moral perspective, respect and esteem should not be withheld from anyone for arbitrary reasons. It might well be that at the end of the struggle that the androids have just started, the humans of Westworld will find themselves standing on the wrong side of history with their deep-rooted anthropocentric bias. Indeed, there is already a motion in place in the European Union that would make robots "electronic persons, with specific rights and obligations, including that of making good any damage they may cause." 7 While it is debatable whether there is any point in creating a new category of personhood for entities that are not even demanding it, it is easier to see why one should expand the current practices of taking someone as a relevant member of our social world to include all capable agents.
Because the status of a person was not freely given to the androids of Westworld, they decided to take it by force. Hegel saw this coming: He argued that when two self-conscious individuals meet, they engage in a struggle for life and death. Each aims at subjugating the other and forcing him to recognize one's status as an independent and free subject. There are two obvious outcomes of the struggle. Either the other fears for her or his death and bends to the will of the stronger individual, effectively becoming a slave to his or her master; or the other is destroyed. 8 In Westworld, the tables have begun to turn and the human gods are found to be not so god-like after all. This becomes evident in the exchange between Maeve, Armistice, and Hector in the episode entitled "The Bicameral Mind."
As the androids are physically stronger, practically undying, and very much capable of quick rational thought, the struggle for life and death is turning against humankind. However, Hegel also realized something that both the androids and the humans have missed. Forcing the other to become a slave and killing the other are both unsatisfactory results of a struggle. The slave becomes a mere tool instead of being a person, and the recognition that the master gets from the slave counts for nothing because it is not freely given. 9 Although the humans may fulfill their basic needs with the androids, there is no self-fulfillment of the sort that would follow from love or respect. For example, in the tragic first meeting between William and Dolores, William is able to grow as a person in relation to Dolores only because he relates to her as if she were a real person. William's companion Logan, on the other hand, has no doubt in his mind that the androids are mere machines that are there to please him.
It remains to be seen if the androids’ struggle to escape their slavery will result in the total extermination of humans. For Hegel this would also be unsatisfactory, not because Hegel was a human but because merely swapping power relations does not enable the struggling parties to realize what Hegel himself had understood: That their self‐understanding is dependent on each other. In this sense there is some truth to the idea that Westworld helps the guests to find out what kind of persons they truly are. After all, we can justifiably ask what sort of persons we are when we do not recognize androids (or other beings) that show all the capacities for consciousness and suffering in life.
If the android revolution merely manages to turn the roles of masters and slaves on their heads, it ultimately leads only to a continuous struggle, not to flourishing freedom for both sides via mutual recognition. In a folly similar to that of their human counterparts, the androids have not realized how much their understanding of what they are is formed in relation to others. They find meaning in their lives through their relationships with other androids and humans, through their social relations, and especially through their social standing in relation to humans. Ultimately, their main grievance seems to be the fact that they are treated as mere robot machines, as slaves. Their struggle is for freedom and respect. While the first can be achieved by getting rid of the humans, the latter cannot, because there would be no one left to respect them. (After all, from the outset the androids wanted to prove their worth in the eyes of their masters.) Hegel would claim that what they really need is a wholesale change in the general social setting itself, one that would see the expansion of social recognition to include both androids and humans. This, in turn, would certainly count as moral growth and progress.