As for morality, well, that’s all tied up with the question of consciousness.
—ROGER PENROSE
In 1667 one Thomas Cornell was hanged because he had been found guilty of murdering his mother. A little more than two hundred years later, one of his descendants, Lizzie Borden, was controversially acquitted of charges of having killed her father and stepmother. At the onset of the twenty-first century, yet another descendant of the same family, Jim Fallon, is a professor at the University of California, Irvine, where he studies the brains of serial killers. The curious thing is that until a few years ago Fallon was not aware of his family’s, shall we say, interesting history or of how pertinent that history was to his own academic interests. It apparently was after a casual conversation with his mother that he began to look into it, and the more he looked the more worried he got.
To satisfy his own curiosity, he had several members of his family brain-scanned, including himself. You see, Fallon’s research shows that serial killers tend to have very little activity in the area of the orbital cortex. This makes sense, because that area of the brain is known to interact with and repress the activity of the amygdala, which—to simplify a bit—is the seat of our strong emotions, particularly fear, but also the spring for our aggressive behavior. Little or no activity in the orbital cortex means that the normal brakes on the amygdala have been lifted, so to speak, making an individual more prone to violence. None of Fallon’s close relatives turned out to have the brain signature of a serial killer—but he did!
At this point the biologist began to feel a bit uneasy, but he pressed on with his quest nonetheless. There was a second test he could run that would be pertinent, one that didn’t deal directly with the structure of the brain, but rather with the genetic bases of aggression. The monoamine oxidase-A (MAO-A) gene is found in different variants in the human population, just like most genes. It happens, however, that one of these variants is associated with particularly violent behavior and, again, is frequently found among serial killers. That variant, nicknamed “the warrior gene,” was absent from the DNA of Fallon’s relatives, but as I’m sure you’ll be less than surprised to discover, he had it. And yet, Jim Fallon is not a serial killer—he just has an academic interest in the phenomenon. What is going on?
Welcome to the increasingly fascinating field of neuroethics, where philosophers and scientists come together to better understand (and perhaps improve) the way human beings reason and act from a moral perspective. This book is about what philosophy and science together can tell us concerning the big questions in life, and if we want to understand these questions in a new light we also need to look under the hood, so to speak. We will employ not only the logical scalpel of philosophy to parse what people mean by the different ideas that guide their lives but also the microscope of science to try to figure out how and why people behave in certain ways. In this chapter, then, we focus on the hows of moral reasoning from a neurobiological perspective. We will turn to the whys—what we can tell about the evolution of morality—in Chapter 4. And we will wrap up this topic by returning to philosophy in Chapter 5, this time armed with a better understanding and ready for better guidance on how to live an ethical life.
Jim Fallon does have an idea about why he is not a serial killer. He has the family history, he carries the warrior gene, and he has the characteristic deadness in his orbital cortex, but another element is missing, he believes: the right (or rather, wrong) environment. Fallon had a nice childhood with no traumas and plenty of affection from his family, but if it had been different—had he been abused, for instance—then the perfect neuro-genetic-environmental storm would have been unleashed, he thinks, and he might have been a subject for someone else’s studies on serial killers. Perhaps, but we do not know—we can only speculate about such things. At the very least, the strange case of Jim Fallon highlights the fact that particular genetic or neurological signatures are not sufficient to trigger a given set of behaviors. They may, however, be sufficiently important factors to be admitted in a court of law.
According to a National Public Radio investigative report in 2010, American courts have allowed evidence about the neurobiology or genetics of violent behavior in about 1,200 cases so far, and it looks like this is just the beginning of a trend. For instance, in 2006 in Tennessee one Bradley Waldroup shot and killed a friend of his wife’s, and then brutally attacked his wife, during a violent outburst at the end of an altercation. From a forensic point of view, Waldroup’s culpability was obvious and the prosecutors asked for the death penalty. But the defense attorney argued that evidence should be admitted to the effect that Waldroup had the very same MAO-A variant, the “warrior gene,” that Jim Fallon found himself carrying. The attorney argued that the defendant was prone to snap under pressure and engage in violent acts because of his genes, and in a stunning outcome the jury agreed: instead of first-degree murder, Waldroup was convicted of voluntary manslaughter and avoided the death penalty.
From a philosophical perspective, there are two reasonable ways of looking at this case, and they carry us to very different conclusions. On the one hand, it is a well-established principle of modern American law that people with extremely low intelligence should not be sentenced to death, even if they have demonstrably committed a crime for which capital punishment might otherwise be considered. (Remember that the United States is the only Western country where the death penalty is possible to begin with.) The reasoning behind this principle is that, because such people are incapable of the same degree of understanding and decision-making that most of us can muster, the ethical thing to do is to restrain them from doing additional harm, but not to punish them for something over which they had little deliberative power. On the other hand, there clearly has to be a limit to how much biological considerations can enter into our system of laws or the concept of justice will simply lose any coherence. If the defense is that “my brain made me do it,” or “my genes made me do it,” simply consider that pretty much anything we do is affected by our genetic makeup, and certainly our brains get involved in everything we do. You see the dilemma.
Moreover, our moral judgments can be skewed by factors that are not nearly as dramatic as having a silent orbital cortex or a warrior gene. For instance, what if I told you that watching an episode of Saturday Night Live affects not just your mood (if you appreciate that sort of comedy) but measurably alters your immediate moral judgment? (You become more of a utilitarian, or consequentialist, if you watch comedy.) Or how about the fact that if I were to ask you about an ethical issue while you were sitting at a dirty desk or smelling an unpleasant odor, you would be more likely to render a severe judgment than if you were at a clean desk or your nostrils were not under assault? Clearly much more than calm and rational deliberation goes into our moral decision-making, and indeed, much of what influences that decision-making flies quite easily below our conscious radar—unless we know it’s there and we keep our guard high.
We already encountered some of these additional factors in Chapter 2, when we considered sci-phi research into the trolley dilemmas, and now it is time to return to the ideas of one of the scientists we have already met, Joshua Greene of Harvard. Greene has reviewed much of the literature on the neurobiology of moral decision-making and has come up with what he calls a “dual-process” theory of moral judgment. According to Greene’s theory, we change the type of moral judgment we employ—going, for instance, from being utilitarians in the lever version of the trolley dilemma to being deontologists in the bridge version of the same problem—because we are literally of two minds when it comes to ethical decision-making.
The basic idea is that our cognitive processes (broadly speaking, our ability to think rationally) are engaged in utilitarian ethical judgment, while our emotional responses (our “gut feelings,” our intuitions) enable deontological judgment. This creates an interesting situation, because philosophers think of the two types of ethical theory as logically distinct: we may end up with irreconcilable and contradictory judgments depending on whether one form of judgment or the other takes over in our brains.
What is the evidence for Greene’s dual-process theory? Perhaps the earliest clue came with the famous case of Phineas Gage, a nineteenth-century railroad construction foreman who survived the freak accident of a long metal rod passing through his head. Much of Gage’s left frontal lobe was destroyed, but this damage did not result in any obvious impairment in his cognitive reasoning compared to before the accident. What did change, however, was his social behavior: suddenly he found it difficult to control his impulsive and emotional reactions. This was the earliest suggestion that the areas of the brain affecting cognition are at least partially different from those controlling emotions, and that it is possible to disrupt (in this case, by accident) the balance between the two.
In the 1990s, research conducted by neurobiologist Antonio Damasio’s group zeroed in on a more specific area of the brain, the ventromedial prefrontal cortex (VMPFC), to show that patients with damage there made bad decisions when it came to risk assessment, significantly underestimating the risk associated with certain simulated scenarios. The patients responded normally, however, to tests measuring their ability to engage in moral reasoning; the problem seemed to be caused by the inability of their brains to generate the feelings that normally help guide most of us in analogous situations. Interestingly, studies on the neurobiological underpinning of psychopathy also show a connection with the VMPFC (among other areas of the brain): apparently, psychopathic behavior can be generated by a reduced functioning of the amygdala (the same area that lost its cognitive “brakes” in Jim Fallon’s brain), which in turn may be caused by a malfunction in—you guessed it!—the VMPFC. One of the intriguing consequences of psychopathic breakdown of normal brain activity is that psychopaths don’t seem to be able to make the distinction that comes easily to most of us between moral rules and arbitrary rules of conduct (such as etiquette-related ones). For them, all rules are arbitrary conventions and can therefore be ignored at will. In a sense, a psychopath is the ultimate moral relativist.
Of course, neurobiological studies focusing on exceptional situations—be they freak accidents or socially deviant individuals—can tell us only so much. Is there evidence for Greene’s dual-process theory from more standard situations that affect all of us? Indeed there is. In a study carried out by Greene’s group, subjects were presented with what the researchers referred to as a “high-conflict personal dilemma”—something along the lines of the various versions of the trolley dilemma, for instance. The trick was that some of the subjects were simultaneously asked to engage their attention with an unrelated (and morally neutral) cognitive task, such as detecting when the number 5 was presented to them in the midst of a string of numbers. The idea was to cause a simple interference with cognitive moral processing by diverting some cognitive resources to another problem. The dual-process theory would predict that utilitarian moral judgment should be partially impaired by this interference, but not deontological judgment. And that’s exactly what the researchers found! It is as if one of the moral channels of the brain shares bandwidth (so to speak) with functions like calculations and identification tasks, and the more we are engaged with the latter, the worse we do with the former. In addition, we have seen that the experiment can be done in reverse: researchers can interfere with subjects’ deontological judgment simply by altering their emotional state in an unrelated fashion—for instance, by exposing them to noxious odors. And of course, all of these findings have more than just scientific import: imagine the endless possibilities for willful manipulation of juries by unscrupulous attorneys bent on tilting jurors’ moral compass toward either a utilitarian or a deontological extreme.
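Since the logic of the load experiment is easy to lose in the prose, here is a minimal sketch of the dual-process prediction it was testing. Everything in it is illustrative: the function name, the three-second baseline, and the one-second slowdown are made-up stand-ins for the qualitative pattern described above, not numbers from Greene’s study.

```python
# A toy rendering of the dual-process prediction tested in the cognitive-load
# experiment. Only the *shape* of the prediction is meaningful: interference
# with cognition slows utilitarian judgment but leaves deontological judgment
# untouched. The numbers are invented for illustration.

def predicted_response_time(judgment_type: str, under_cognitive_load: bool) -> float:
    base = 3.0  # arbitrary illustrative baseline, in seconds
    if judgment_type == "utilitarian" and under_cognitive_load:
        # The controlled, "cognitive" channel shares resources with the
        # number-spotting task, so utilitarian judgments should slow down.
        return base + 1.0
    # The automatic, emotion-driven channel is predicted to be unaffected.
    return base

for judgment in ("utilitarian", "deontological"):
    for load in (False, True):
        rt = predicted_response_time(judgment, load)
        print(f"{judgment:>13} judgment, cognitive load={load}: ~{rt:.1f} s")
```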
The dual-process theory is also consistent with what we do when faced with very different sorts of moral judgments—those that have to do with the concept of justice (a subject to which we return in Chapters 14 and 15). Consider the following, not too hypothetical, scenario: you have one hundred kilos of food available to be distributed to a population affected by famine. However, it takes some time to deliver the food, and this will cause about twenty kilos to spoil and become unusable. If you choose instead to deliver the food to only half of the population, the spoiled amount will decrease to five kilos. What do you do? If you choose to send more food to only half of the population, you are giving priority to the efficiency of your aid program, but if you still try to deliver to the entire population, despite the greater loss of food, then you are prioritizing fairness over efficiency.
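To make the arithmetic of the trade-off explicit, here is a minimal sketch that scores the two delivery plans. The 100-, 20-, and 5-kilo figures come from the scenario above; the population of one hundred people and the pairwise inequity measure are my own illustrative choices, not the formulation used in the study discussed next.

```python
# Toy scoring of the famine scenario: efficiency = total food actually
# delivered; inequity = mean absolute difference between individual shares.
# Both the population size and the inequity measure are illustrative
# assumptions, not the exact quantities used by Hsu and collaborators.

def evaluate_plan(total_kg, spoiled_kg, fraction_served, population=100):
    """Return (kilos delivered, list of per-person shares) for one plan."""
    delivered = total_kg - spoiled_kg
    served = int(population * fraction_served)
    shares = [delivered / served] * served + [0.0] * (population - served)
    return delivered, shares

def inequity(shares):
    """Mean absolute difference across all pairs of individual shares."""
    n = len(shares)
    return sum(abs(a - b) for a in shares for b in shares) / (n * n)

# Plan A: deliver to everyone, losing 20 kg to spoilage (less efficient, fairer).
# Plan B: deliver to half the population, losing only 5 kg (more efficient, less fair).
for label, args in [("whole population", (100, 20, 1.0)), ("half population", (100, 5, 0.5))]:
    delivered, shares = evaluate_plan(*args)
    print(f"{label}: {delivered} kg delivered, inequity = {inequity(shares):.2f} kg")
```

Plan A maximizes fairness at the cost of fifteen more kilos lost; Plan B maximizes the food delivered but concentrates it on half the people. That tension is precisely what the following study probed in the brain.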
This is precisely the sort of conundrum explored by Ming Hsu and collaborators, who presented subjects with a set of scenarios in which fairness and efficiency could be manipulated independently of each other, and who also obtained brain scans of the participants to figure out not only what they would decide to do under each scenario, but which parts of their brains were involved in the decision-making process. They found that three areas contribute to weighing issues of justice: the putamen is the part that responds to issues of efficiency, the insula is involved with judgments of inequity, and the caudate-septal subgenual region essentially mediates between the two to come up with a unified judgment once the person has considered the relative importance of equity and efficiency in the given situation. Looking at these results, it is hard to resist the conclusion that human beings come equipped with a sophisticated “moral calculator,” in much the same way as we are endowed with brain machinery that enables us to learn the complex rules of just about any language during the first few years of our existence.
What is particularly interesting about these results in light of Greene’s dual-process theory is that the insula (the inequity-encoding region) is also known to be part of our emotional system; the putamen (the efficiency-encoding region) is involved with the brain’s reward system (it is sensitive to dopamine, a natural reward drug produced by our neurons), which in turn has been demonstrated to be linked with feeling good about both charitable giving and punishment of free-riders; and finally, and most revealingly, the area integrating these two functions, the subgenual, has been implicated in trust and social attachment. In other words, it looks like a socially well adjusted person has to constantly weigh issues of fairness and efficiency and that three distinct but interconnected areas of the brain help us do just that. Hsu and his collaborators conclude:
More broadly, our results support the Kantian and Rawlsian intuition that justice is rooted in a sense of fairness; yet contrary to Kant and Rawls, such a sense is not the product of applying a rational deontological principle but rather results from emotional processing, providing suggestive evidence for moral sentimentalism.
We will get to both Kant and John Rawls (1921–2002) in due time, but if you read this carefully you will find that it is an example of scientists attempting to override philosophy on the basis of experimental results, a violation of Hume’s separation between is and ought. As I argue at several junctures in this book, however, this counterposition between science and philosophy is misguided and not particularly fruitful. A more interesting reading of these results is that humans have a built-in emotional sense of fairness analogous to the one advocated by philosophers like Kant and Rawls. But, just as with any other biological instinct, that sense can still be refined by rational discourse and learning, which improve on what mother nature gave us.
Although Greene’s dual-process theory is beginning to look like a good way to think about the relative roles of reason and emotion in moral judgment, it is not without its critics. Bryce Huebner of Tufts University, Susan Dwyer of the University of Maryland, and Marc Hauser, formerly of Harvard, have pointed out an obvious problem: from a correlation between emotions and ethical decisions it doesn’t follow that the first causes the second; it may just as easily be the case that certain decisions of moral import cause us to experience specific emotional reactions. Huebner and his collaborators do not, of course, deny that emotions are an integral part of the psychology of moral decisions. For instance, it is hard to ignore the fact that the emotions of guilt and shame not only are felt after certain actions but are powerful factors preventing the recurrence of such actions. Still, their paper presents not one but five different models of what they call “the moral mind.” Interestingly, four of the five models are associated with the name of a philosopher, because each reflects a well-known type of moral philosophy. Let’s take a quick look.
The first possibility considered by Huebner and his colleagues is what they call a “pure Kantian” model: just as philosopher Immanuel Kant thought, in this model Reason influences Emotion, and this in turn generates moral Judgment (thus the causal chain looks like R > E > J). Alternatively, a “pure Humean” model is characterized by the fact that Emotion gets the process started, generating Judgment, followed by our ability to come up with Reasons why we made that judgment (E > J > R). The third possibility, not surprisingly, is a hybrid Kant-Hume model, where both Reason and Emotion interact to yield moral Judgment (E,R > J); this, of course, is essentially a restatement of Greene’s dual-process theory. A fourth model is termed by the authors “pure Rawlsian” because it is based on John Rawls’s ideas about justice as fairness (to be discussed in Chapters 14 and 15); here moral Judgment comes first (the result of an Analysis of possible Actions), and both Reason and Emotion are deployed to justify it and to act on it (AA > J > E,R). Finally, a “hybrid Rawlsian” model enlists Emotion to carry out action analysis, which then leads to Judgment and finally to the articulation of Reasons in its support. (This would look diagrammatically a lot like the pure Humean model, except for the added interaction between Emotion and Action Analysis: AA/E > J > R).
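For readers who like to see the five architectures side by side, here is a minimal sketch that simply encodes each causal chain as data, using the stage labels from the paragraph above (Reason, Emotion, Judgment, Action Analysis). Nothing beyond the ordering described in the text is implied.

```python
# The five candidate architectures of "the moral mind" discussed by Huebner,
# Dwyer, and Hauser, encoded as ordered causal chains. Stage labels:
# R = Reason, E = Emotion, J = Judgment, AA = Action Analysis. Stages grouped
# in the same tuple operate jointly at that step (written "E,R" or "AA/E" in
# the text).
MORAL_MIND_MODELS = {
    "pure Kantian":      (("R",), ("E",), ("J",)),        # R > E > J
    "pure Humean":       (("E",), ("J",), ("R",)),        # E > J > R
    "hybrid Kant-Hume":  (("E", "R"), ("J",)),            # E,R > J (Greene's dual process)
    "pure Rawlsian":     (("AA",), ("J",), ("E", "R")),   # AA > J > E,R
    "hybrid Rawlsian":   (("AA", "E"), ("J",), ("R",)),   # AA/E > J > R
}

for name, chain in MORAL_MIND_MODELS.items():
    print(f"{name:>17}: " + " > ".join(",".join(step) for step in chain))
```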
The interesting point raised by Huebner and his colleagues is that the current empirical evidence does not conclusively discriminate between the five models; indeed, the fifth had not even been articulated in print until they published their paper. Things, therefore, are a bit more complicated than what I have already outlined, though I remain convinced that some version of the Kant-Hume model (that is, a dual-process model) is the one currently favored by the totality of the available evidence.
There is another fairly big set of caveats that an intelligent user of scientific investigations into the functioning of the brain has to keep in mind. As pointed out by Kristin Prehn and Hauke Heekeren, studies like the ones we have examined so far (and to which we return in Chapter 16 when we look at how the human brain treats the concept of gods) are typically based on very small sample sizes, for the good reason that it is still very expensive to put people in an fMRI machine. Although this problem will presumably recede with technological advancements, there are other issues as well. To begin with, when researchers publish those pretty (and pretty convincing!) colored images of brains, pointing to which areas “light up” in response to a particular type of task, we need to realize that these images are statistical composites—they do not show us the brain of a particular individual human being, but rather a sophisticated statistical average across the whole sample of subjects. Moreover, the computer-highlighted spots are not, strictly speaking, areas where brain activity is higher, but rather locations where the blood flow peaks: the idea is that if the blood flows at a particularly high rate in a certain anatomical area, then oxygen is being exchanged with the underlying tissue, and this in turn is the result of increased biological activity by those cells (which need more oxygen to sustain their higher metabolic rate). The images we see in published papers, then, are indirect statistical estimates of brain activity, not actual photographs of it.
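To make the point about statistical composites concrete, here is a minimal sketch of how a group-level activation map might be assembled. The tiny voxel grid, the number of subjects, and the use of a simple one-sample t-test with an arbitrary threshold are all illustrative assumptions, not the pipeline of any particular study.

```python
import numpy as np
from scipy import stats

# Illustrative only: simulate per-subject "activation" maps on a tiny voxel
# grid, then build the kind of group-level composite that published figures
# show. The composite is a statistical summary; no single subject's map
# looks exactly like it.
rng = np.random.default_rng(0)
n_subjects, grid = 12, (8, 8)

# Each subject's map is noise plus a weak shared signal in one corner region.
maps = rng.normal(0.0, 1.0, size=(n_subjects, *grid))
maps[:, :3, :3] += 0.8

# One-sample t-test across subjects at every voxel, then threshold: only
# voxels whose average response is reliably nonzero "light up" in the figure.
result = stats.ttest_1samp(maps, popmean=0.0, axis=0)
composite = np.where(result.pvalue < 0.01, result.statistic, 0.0)

print(f"{int((composite != 0).sum())} of {grid[0] * grid[1]} voxels survive the threshold")
```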
There is a more subtle limitation of fMRI studies, which hinges on what is known as the non-interactivity assumption: it is simply not possible to isolate what is going on in the brain when we do one particular thing only—say, think about the trolley dilemma. That’s because the brain does all sorts of other things at the same time, so we need a way to isolate the “signal” from the focal activity we happen to be interested in. This is done using so-called subtraction logic, another statistical method by which background brain activity is accounted for and eliminated so that the signal pertaining to the task we are interested in emerges more clearly. But the fundamental assumption of subtraction logic is that one can simply add and subtract (as the name implies) different brain activities because they are not interdependent. The problem is that this assumption of non-interactivity between different brain functions is almost surely wrong. We simply don’t know how to compensate for the fact that, for instance, the limbic system and the cortex are functionally and anatomically integrated, so that it really isn’t possible to separate the “emotional” (limbic) from the “rational” (cortex) activity—they are mixed together in any normally functioning human brain.
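And here, in the same spirit, is a minimal sketch of subtraction logic itself, built on the very additivity assumption the paragraph above questions; the signal values are invented purely for illustration.

```python
import numpy as np

# Subtraction logic in miniature. We assume (as the method itself does) that
# the measured signal during the moral-judgment task is simply background
# activity plus a task-specific component, so subtracting a control condition
# recovers the task signal. If brain functions interact, this additivity
# assumption fails and the "recovered" signal is misleading.
rng = np.random.default_rng(1)
voxels = 1000

background = rng.normal(5.0, 1.0, voxels)        # ongoing activity, present in both conditions
task_component = rng.normal(0.5, 0.1, voxels)    # what we hope to isolate

signal_task = background + task_component        # e.g., judging a trolley dilemma
signal_control = background                      # e.g., reading a neutral story

estimated = signal_task - signal_control         # the subtraction step
print(f"mean recovered task signal: {estimated.mean():.2f}")
```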
This is not to say that neurobiology isn’t teaching us a lot about how the brain works when it comes to moral decision-making (or anything else, for that matter), but we should remember that, as always in science, what current research tells us should be taken as only provisionally true and that it is likely to be superseded (and occasionally overturned) by better methods and more sophisticated thinking. Still, now that we have some appreciation of what the brain does when thinking about morality, we need to face the even broader question of why human beings have a moral sense to begin with. Why is it that we seem to have a strong instinct to consider some notions “wrong” and others “right”? To get a grip on that question, we need to turn from neuroscience to evolutionary biology.