24
The Conundrum of Robotics

Free will includes three different issues wrapped up in one philosophical term: (1) the outside perspective (Why do some other beings appear as agents with free will to us?), (2) the societal perspective (Whom should we blame for criminal actions?), and (3) the internal perspective (What is the internal sensation that we identify as “consciousness”?). By separating free will into these three perspectives, I will show that the first two are not truly issues of “free will,” and that there is no reason to think that physical machines cannot experience an internal sensation of consciousness.

Free will is a funny term. A pairing of such simple words, it carries with it all that makes us feel human. What does it mean to be free? What does it mean to will something? One of the great terrors in modern life is that the concept of free will is an illusion.A For, if free will is an illusion, then we are just riding a roller coaster, taken to great heights, thrown down to great depths, with no control over our journey. But this terror assumes that there are only two choices—that either we have complete freedom to choose whatever we want or that our feeling of self is an illusion, that we are not in control, that we are merely machines, completely predictable in what we do. This dichotomy is the illusion.

The truth is that we are never completely free. We are constrained by the laws of physics, by our immediate interactions with society, by our physical and monetary resources. I can want to fly like a bird all that I want, but no matter how high I jump, I will never take off and soar into the sky.B I can want to pitch in the major leagues, but I don’t have a 100-mile-per-hour fastball. We all accept these limitations, and yet we hold on to the term free will.

Will is an old word, meaning wish or desire. Will is the agency that defines us. Fundamentally, it is the dualism implied by the concept of will that lies at the heart of the problem with the term free will. If we can explain the reason for an action from a source outside our “will,” then we deny our own agency in taking that action.3 We will see that this is not a new concept, and that whether or not we are machines is really unrelated to whether or not we have free will.

We need to address free will from three perspectives. From an outside perspective, free will is about whether we treat another being as an agent or as a machine. From a societal perspective, free will is about whether we can blame individuals for their actions. And from an inside perspective, free will is about whether we are truly slaves to our own machinery.

The outside perspective: The agency of others

From an outside perspective, we have already seen that we assign agency to many things that are not agents. (Remember the discussion of the GPS in Chapter 2?) The assumption of agency is a way for the human brain to handle unpredictability. In particular, assuming agency lets us deal with beings whose behavior is predictable only probabilistically: we can anticipate the range of actions they might select without knowing which action they will take.

Individuals have personality, character; they are predictable. It is not true that humans act randomly. Think for a moment what kind of a world that would be: every person you meet could suddenly break into song or could suddenly break into a killing spree. Such a world is unimaginable to us. Instead, we talk reliably about propensities, about a person’s nature. “It’s in my nature,” says Fergus to Dil at the end of The Crying Game,4 owning his actions, explaining why he took the fall for her. Fergus explains his agency by saying that this is within the bounds of what you would expect if you really knew him. Agency requires predictability, but only a limited predictability—we identify a character, a nature, a range of possibilities, and expect that the agent will act somewhere within that range.

The problem of mechanics and free will goes back to the time of Galileo, Newton, and Descartes, all of whom recognized that physical systems did not have agency in the manner we are talking about here—they are completely predictable. (Of course, we know now that this holds only for objects at the macro scale that Galileo and Newton were studying. At small enough (quantum) scales, predictability breaks down.5 Whether quantum objects are deterministic or not is still unknown, but our best understanding today suggests that they are not; instead, they seem to be probabilistic. As we will see later in this chapter, it is possible for quantum effects to expand into the macro world, making macro-scale events fundamentally probabilistic as well.6) Before Galileo, objects themselves had agency—Aristotle explained gravity by saying that things wanted to fall to the earth, and proposed that objects had both a driving cause, which pushed them into their actions, and a teleological cause, which pulled them based on their agency and goals.7

Descartes, through his own medical studies, and following directly on Galileo’s observations of mechanics, realized that the body itself was a machine. Descartes was an accomplished anatomist and recognized that animals were machines; however, he denied that he himself was a machine and explicitly separated the mind from the body.8 Descartes’ concerns lay in predictability—Newtonian mechanics implies that if you know enough details about the state of the universe at any moment, it is completely predictable from then on. The remarkable success of Newtonian mechanics in predicting motion in the physical world led to the hypothesis of determinism, sometimes called the “clockwork universe” theory.9 Philosophers at the time believed that there were only two choices: either there is an extra-universe decision-making agent with free will, or everything is predetermined and all free will is an illusion.

Some philosophers have tried to rescue the randomness needed for free will from quantum mechanics.10 The best current physics theories suggest that the universal determinism implied by Newton is wrong—we cannot predict the actions of a quantum particle; it is truly random (within the constraints of probabilitiesC).11 Neuroscientists have tended to be dismissive of theories that base free will on quantum mechanics because quantum randomness does not generally reach the macro level that we experience in our everyday world. In large part, this is because quantum randomness is constrained by probabilities: with enough elements, the probability distributions fill out and things become extremely predictable. Even products that depend on quantum properties (the laser, the transistor) provide predictable results that we depend on daily.12 However, nonlinear mathematics provides a mechanism through which small (micro) fluctuations can drive large (macro) events.13 In these equations, small changes percolate through the system, producing a large divergence in final outcomes.D Individual neurons are highly nonlinear, containing many positive feedback functions that could carry these micro-level effects into the macro-level world of neuronal function.E
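To make that last point concrete, here is a minimal sketch of how a nonlinear system amplifies tiny differences. The logistic map below is not a model of a neuron and is not drawn from the work cited here; it is simply a standard textbook example of chaos, with an arbitrary parameter value and an arbitrary perturbation size.

```python
# A minimal sketch (illustration only, not a neuron model): the logistic map
# x -> r * x * (1 - x) in its chaotic regime amplifies a one-part-in-a-billion
# difference in starting conditions into an order-one divergence.

r = 4.0              # parameter value in the fully chaotic regime (arbitrary choice)
x_a = 0.400000000    # one starting state
x_b = x_a + 1e-9     # the same state plus a tiny "micro" perturbation

for step in range(1, 51):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.9f}")

# Within a few dozen iterations the two trajectories differ by roughly 1,
# even though they began one part in a billion apart.
```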

Identifying agency in others comes down to predictability in the aggregate but unpredictability in the specific. Thus, one can identify a personality and character (“It’s in my nature”) and yet cannot know exactly what an agent will do in a specific case. Quantum randomness (particularly when combined with nonlinear chaotic systems) can provide the necessary constrained randomness to give us what we observe as agency in others. But all that is really required for us to find agency in another is a predictable randomness. Whether that comes from quantum randomness or simply from an inability to know enough details to predict accurately, we reliably use the concept of agency to describe randomness occurring within a shape of possibilities.
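This “predictable randomness” is easy to simulate. The toy sketch below invents a set of propensities for an imaginary character; the action names and numbers are mine, chosen only to show that single choices drawn from a stable distribution are unpredictable one at a time, while their aggregate frequencies are highly predictable.

```python
# A toy sketch of constrained randomness: individual choices are random,
# but the aggregate behavior reveals a stable "nature". The action names and
# probabilities are invented for this illustration.
import random
from collections import Counter

character = {"cooperate": 0.70, "joke": 0.25, "snap": 0.05}  # hypothetical propensities
actions = list(character.keys())
weights = list(character.values())

one_encounter = random.choices(actions, weights)[0]                    # unpredictable in the specific
many_encounters = Counter(random.choices(actions, weights, k=10_000))  # predictable in the aggregate

print("a single encounter:", one_encounter)
for act in actions:
    observed = many_encounters[act] / 10_000
    print(f"{act:>9}: observed {observed:.3f}  vs  propensity {character[act]:.2f}")
```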

It is important to differentiate these points I have made here from popular books on quantum theories of mind such as Roger Penrose’s The Emperor’s New Mind. Penrose argues that machines without quantum mechanics cannot be intelligent because quantum mechanics provides a special component to our brain function. Penrose’s proposal is just a new version of Descartes’ error—identifying a conduit from a nonphysical self to the physical. Descartes identified that connectivity with the pineal gland, fully aware that all mammals have a pineal gland; Penrose identifies the quantum components as the special key. The problem is that all matter is quantum and that all animals have quantum components in their neurons.F What I am saying here is that there is probabilistic randomness in the other creatures we interact with (whether it be from quantum randomness or chaotic indeterminacy) and that randomness leads us to assign agency to others. As we have already seen, humans assign agency to anything they interact with that is unpredictable within a limited range (such as fire or weather); I am making no claims about the consciousness of those other agents.

So we can identify why other people appear to be agents to us—they are predictably random. But I suspect that readers are unlikely to be satisfied with this story, because I have sidestepped the two real problems with free will. First, how can we hold people responsible for their actions if they don’t have free will? And second, do you, dear reader, have free will?

The societal perspective: The question of blame

Ethicists and legal scholars have come to the conclusion that someone forced to do something is not at fault.20 Originally, this came from the concept that if someone holds a gun to our head and says, “commit this crime,” then we are not at fault for the crime, because we are not the agent that initiated the action.G

The problem, however, arises when we start to have explanations for why someone committed a crime. If we find that someone has a brain tumor that impairs the processing of his prefrontal cortex, do we blame the person for the action? What if the person has an addiction? What if we come to the point that we can explain the neuroscience of decision well enough that we can say that anyone with a brain like this would have made that choice? Do we forgive the criminal the crime then?

It is important to note that this concept predates neuroscience by millennia. In every era, there have been explanations for why someone did something that do not imply personal responsibility. Before science, there were witches and devils.21 Before neuroscience, there were societal and developmental explanations.22

This question of our ability to predict criminal behavior also opens up an entirely different set of problems—if we can absolutely predict that someone will commit a crime, aren’t we obligated to stop him?23 What if we can only predict that he will commit the crime with a high probability? What is the balance between that probability and the person’s right to freedom? Most modern legal systems have come down on the side of freedom: one has to give the person the opportunity not to commit the crime, it is unfair to judge a person for his or her thoughts and only fair to judge a person for his or her actions, and thus one has to wait for the crime to be committed before acting.H

In large part, the problem of the ethics of choice lies in the distinction that has been drawn between the goal of punishing the responsible agent and the goal of ensuring that dangers to society are successfully removed from that society. As we saw in the last two chapters, one of the key factors that makes us human and has allowed us to live in stable and complex societies is our willingness to take part in altruistic punishment, in which we sacrifice some of our own resources to punish a wrongdoer. The effect of this is to make it less worthwhile for wrongdoers to do wrong, which helps enforce cooperation.
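The arithmetic behind altruistic punishment is simple enough to sketch. The payoff numbers below are invented for illustration (they are not from any of the studies cited in these chapters); they only show that once enough group members are willing to pay a small personal cost to impose fines, cheating stops being profitable.

```python
# A back-of-the-envelope sketch of altruistic punishment, with invented payoffs.

def defector_payoff(free_ride_gain, n_punishers, fine_per_punisher):
    """Net payoff to a cheater: the gain from free-riding minus the fines
    imposed by each group member willing to punish."""
    return free_ride_gain - n_punishers * fine_per_punisher

def punisher_payoff(baseline, cost_to_punish):
    """A punisher sacrifices some of her own resources to impose the fine;
    that personal cost is what makes the punishment 'altruistic'."""
    return baseline - cost_to_punish

# With no one willing to punish, cheating pays:
print("cheat, no punishers:", defector_payoff(10, n_punishers=0, fine_per_punisher=3))  # 10
# With even a few altruistic punishers, cheating costs more than it gains:
print("cheat, 4 punishers :", defector_payoff(10, n_punishers=4, fine_per_punisher=3))  # -2
# Each punisher pays a small cost relative to simply standing by:
print("punisher's payoff  :", punisher_payoff(5, cost_to_punish=1))                     # 4
```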

In small groups, where everyone knows everyone else, where an individual is highly dependent on the willingness of the rest of the tribe to work with that individual, cooperation is reliably enforced by altruistic punishment (see Chapter 23). However, as societies get larger, it becomes more and more difficult for altruistic punishment at an individual level to produce sufficient incentive to enforce cooperation. It becomes too easy for an individual to cheat and then walk away and vanish into the city. In situations where there are two rival groups, it becomes too easy for altruistic punishment to devolve into a fight between groups and to escalate into what is effectively a tit-for-tat gang war. (The unfortunate flip side of in-group altruism is out-group xenophobia.) The legal system shifts the inflictor of the punishment from the individual to the society itself.

The real goal of a modern legal system is to ensure the ability of a society to function. However, although the medical community has spent the past several decades fighting to ensure that only scientifically tested therapies get used, a process called “evidence-based medicine,” no such constraint yet exists in the legal system.24 This means that laws are sometimes based on simple conclusions from prescriptive moral principles rather than on descriptive measures of reality. Two clear examples of this are the surprising decisions to implement austerity measures in response to shrinking economies, and the legal decisions to criminalize rather than to treat drug use.25

Much of the ethical argument is based on whether or not it is “fair” to punish someone, rather than on how best to maintain societal function. One of the best examples of this is the “war on drugs.”I No one denies that drug use is a problem. As we saw in the addiction chapter (Chapter 18), drug use leads to clear diminishments of well-being, both within the individual and within society. However, drug use, even highly addictive drug use, particularly early in an individual’s path, is often a nonviolent individual problem rather than a societal problem.27 Extensive data have shown that sending nonviolent addicts to treatment reduces the addiction, reduces the likelihood that they will turn violent, and has good success rates at bringing them back into society. In contrast, sending addicts to prison increases the likelihood that they will turn violent, has poor success rates at bringing them back into society, and is more likely to lead to relapse, often with worse drugs, on return to society. And yet, the discussion within American society today has clearly settled on criminal punishment and prison for individual drug use.

I will never forget an example of just such a disconnection, which I heard at my college graduation. The speaker was the novelist Tom Clancy, who was trying to pander to engineers in a speech at the engineering school graduation ceremony. Early in the speech, he talked about how important it was to look at the scientific data, and how sometimes the scientific data were different from common sense, so engineers had to learn to look at the data themselves. Not ten minutes later, in the same speech, Clancy denounced drug addiction treatment centers, saying that he didn’t care if the data said that treatment worked better than prison; the addicts knew they would go to jail and decided to take the drugs anyway, and had to be sent to prison. These logical disconnects arise from the limited view of free will ensconced within our legal system, which treats it as an all-or-none phenomenon: either someone is free to choose and should be punished, or someone is a slave and the actual agent of the decision needs to be found instead. As we’ve seen throughout this book, the story isn’t so simple.

The legal system struggled with questions of blame and punishment long before our understanding of the mechanisms of decision-making began to provide explanations independent of free will.28 However, if we move away from the question of “blame” and toward goals of mercy and forgiveness at the individual level, while maintaining the goal of keeping a working society at the societal level, we find that the question of free will again becomes moot.

Of course, as we discussed in the first part of this chapter, free will means we can predict in the aggregate but not in the specific case.29 This means that the prediction of future action is only that: a prediction. As shown in the movie Minority Report, it is inherently dangerous to assume that a prediction is reality.30 There is a big difference between locking your door to prevent theft and locking up the thief before he breaks into your apartment (Footnote H, above).

We can understand the reasons why someone has done something, and rather than worrying about whether or not to blame them for their actions, we can worry about how to ensure that it does not happen again. Punishment is only one tool in that toolbox; there are many others. Knowing which ones will work when, and how to implement them, are scientific questions that depend on our understanding of the human decision-making machine.

Two great examples of using other tools are the successful uses of shame by Mahatma Gandhi and Martin Luther King.31 They were able to achieve their aims by showing the world what others were doing. (Do not underestimate the importance of television and movie newsreels in their successes.) By using the human intersocial mechanism that recognizes strength not just in terms of physical ability but also in terms of “heart,” of an ability to remain standing even in the face of greater physical power,J Gandhi and King were able to accomplish great things.

So a random process combined with a mechanism to ensure that the randomness falls within a probability distribution is sufficient to explain what we observe as agency in others. And recognizing that altruistic punishment is only one tool in society’s toolbox to ensure cooperation rather than defection leaves moot the ethical question of whether someone actually has free will or not. However, we have so far avoided the hard question of consciousness—Do you, dear reader, have free will?

The inside perspective: Consciousness

We all have the experience of making decisions but not knowing why we made them. We all have the experience of not paying attention and looking up to recognize that we’ve done something we would have preferred not to, whether it be letting our addiction, our anger, or our lust get the better of us, or whether it be that we stopped paying attention and found ourselves having driven our friend to our office instead of the airport. This led to the rider-and-horse theory—that there is a conscious rider and an unconscious horse. The rider can steer the horse, but the rider can also release control, allowing the horse to take its own path.33 Sometimes the horse takes a sudden action that the rider then has to compensate for.K

Similarly, we have all had the experience of rationalizing an action that was clearly taken in haste: “I meant to do that.” Following the famous Libet experiments, which showed that conscious decisions to act are preceded by brain signals long before the time we believe that we made the decision, many philosophers have retreated to the hypothesis that consciousness is wholly an illusion and that the rider is but a passenger, watching the movie unfold. Others have suggested that consciousness is more like a press secretary than an executive, and that consciousness entails rationalizing the actions that we take. Daniel Wegner differentiates actions that we identify as voluntary and those that we do not, and notes that there are both cases of (1) actions we do not have control of but believe we do and (2) actions we do have control of but believe we do not. Wegner explicitly identifies “free will” with the illusion of agency. However, Libet’s experiments are based on the idea that the conscious decision to act occurs when we recognize (linguistically) the decision to act. Libet and others have suggested that consciousness is monitoring the actions we take so that it can stop, modify, or veto the action.34

The problem, as I see it, is that philosophers have held the concept that consciousness must be something external to the physical world, or an emergent property of physical things that is (somehow) fundamentally different from the underlying physical things.35 This issue first came to the fore with Cartesian determinism, following on the implications drawn by Galileo, Kepler, and Newton that one could accurately predict the motion of objects, and that objects did not need agency to move.L If the planets did not need agency to act, then neither did the machinery of the body.38 Clearly desperate not to abandon human free will, Descartes concluded that although animals were machines, he himself was not.

Descartes’ famous line is, of course, cogito ergo sum (I think, therefore I am), which is fine but says nothing about whether his fellow humans think or whether the language they spoke to him merely gave him the illusion that they thought, and says nothing about whether his fellow animals think, even though they cannot tell him their introspections. Descartes concluded that the pineal gland is the connection from an external conscious soul to a physical being. While the pineal gland does connect the brain to the body,M all mammals have pineal glands.40 An accomplished anatomist, Descartes was well aware of this yet concluded that only the human pineal gland “worked” to connect souls to bodies.41

A more modern version of this desperate hypothesis can be found in the quantum theories of Roger Penrose and Stuart Hameroff, who have argued that consciousness cannot arise from a deterministic system and thus that it must depend on quantum fluctuations.42 In particular, they have argued that the mind interacts with the brain by manipulating quantum probabilities—free will occurs because our separate souls can manipulate the quantum probabilities. Notice that Penrose and Hameroff’s theory that an external soul controls quantum probabilities is very different from the earlier discussion of quantum probabilities as a potential source of randomness in others. I suspect that if we were actually able to control quantum probabilities, we would, as shown in Neal Stephenson’s Anathem, actually be much better at certain problems (like code-breaking) than we are. This quantum theory often comes down to a statement along the lines of Machines cannot be conscious; since I am conscious, I am not a machine,43 which is a thinly veiled restatement of Descartes’ cogito ergo sum.

My take is that the problem arises because, when we imagine being machines, we believe that there are only two choices: either we are conscious beings who can take any action, or we are slaves who have no choices. But like Fergus and Dil in The Crying Game, we have personalities, and we are true to our “nature.”

The word robot comes from a play by Karel Čapek (titled R.U.R. [Rossum’s Universal Robots]), which was really about the effect of industrialization on individuality. Similarly, our more modern concept of robots comes from observations of highly complex machines, many of which are used in industrialized factories. It speaks to our fear that robotics leads to slavery, but as we’ve seen, the issue isn’t slavery. Even if consciousness were to be an illusion, it wouldn’t change our perception of our consciousness. It’s not that we are suddenly going to become slaves—either we are already or we aren’t.N

The world is more complex than that. Certainly there are decisions that are made without conscious intent, but there are also very long conscious deliberations that take minutes, hours, days, weeks, or even months. Even in the case of specific decisions that occur too quickly to be conscious, we can decide to train ourselves to make the right decisions at the right time. It takes time to learn to ride a bicycle. Both sports stars and soldiers know that doing the right thing at the right time quickly (without thinking) takes lots of practice. Although the action at the moment may be reactive, the decision to practice is often a conscious one. In cases where we decide to actually get up and go to the anger management seminar or the addiction treatment center, the conscious decision isn’t to not be angry or to not take drugs; the conscious decision is to retrain ourselves.

A more nuanced discussion has appeared in the science fiction literature of the past several decades—with Star Trek: The Next Generation, as the android Data tried to become human; with Blade Runner and its “replicants”; and recently with the newly reinterpreted Battlestar Galactica, in which the humans have to decide whether the Cylons have sufficient free will to be treated as human.

In many of these examples, the journey for the robot to become human is based on the belief that the robots need to feel emotions. A recurring theme in Star Trek: The Next Generation is Data’s search for emotions. Yet, throughout, he is clearly as human as the rest, with foibles, errors, and decision-making abilities. In Blade Runner, the replicants are identifiable only because they do not have the correct emotional reactions to shocking images. But a theme in the movie is the question of whether Rick Deckard, the hard-bitten human detective hunting the replicants, would pass the emotional test for humanity if he were to take it. (This theme is only implicit in the movie but is something Deckard worries about explicitly in the Philip K. Dick novel Do Androids Dream of Electric Sheep?, from which Blade Runner derives.) In the end, the movie hinges on a decision made by the lead replicant Roy Batty, who decides not to throw Deckard off the building ledge and instead to lift him back up. Deckard wonders at the reasons for the decision but never doubts that the replicant made a decision of his own free will that saved Deckard’s life. Similarly, throughout the new Battlestar Galactica series, the Cylons make clear decisions, and while the humans debate whether to treat the Cylons as humans or machines, they never doubt for a second that Cylons are dangerous decision-makers.

The problem with all of these discussions is that we haven’t (yet) built machines that have free will, and we don’t know what it would be like to build such a machine. Obviously, there are many machines that do not have free will. But that doesn’t preclude the possibility of a machine with free will. It doesn’t preclude the possibility that we are living machines with free will, that we are machines who fall in love, robots who write poetry, that we are physical brains capable of making our own decisions.

Books and papers for further reading

• Antonio Damasio (2010). Self Comes to Mind: Constructing the Conscious Brain. New York: Pantheon.

• Jeffrey Gray (2004). Consciousness: Creeping up on the Hard Problem. New York: Oxford University Press.

• Robert Kurzban (2010). Why Everyone (Else) is a Hypocrite. Princeton, NJ: Princeton University Press.

• Neil Levy (2007). Neuroethics. Cambridge, UK: Cambridge University Press.

• Daniel M. Wegner (2002). The Illusion of Conscious Will. Cambridge, MA: MIT Press.