Moral excellence comes about as a result of habit. We become just by doing just acts, temperate by doing temperate acts, brave by doing brave acts.
—ARISTOTLE
IMAGINE THAT YOU ARE AT THE HELM OF A TROLLEY, GINGERLY carrying passengers from one stop to another along the city’s streets. Suddenly you see five people standing on the tracks, right in front of you, about to be hit by the trolley! You slam on the brakes, and they don’t work! You shout to them to get the hell out of your way, but they don’t see or hear you! In desperation, you realize that there is only one option available to you: the trolley tracks are about to split, and if you pull a lever you will change course and save the five people. In so doing, however, you will inevitably end up hitting, and probably killing, an innocent bystander. Would you pull the lever?
Across cultures, most people who are surveyed answer affirmatively, if reluctantly. This is one version of one of ethical philosophers’ favorite thought experiments, the trolley dilemma. The idea of course is to see what people’s moral intuitions are when faced with difficult ethical quandaries. The results would suggest that most people adopt what philosophers call a utilitarian, or consequentialist, form of moral decision-making: saving five people is the right thing to do, even though another person will be killed in the process. It’s an example of what Jeremy Bentham—the originator of utilitarianism—would call “moral calculus.”
Almost every time I explain the trolley dilemma, someone inevitably begins to rattle off the obvious objections: But what if the five people are all Nazis and the one you kill is your mother? Could you not alert the people to move? Is there no other option available? Or some variant thereof. But the point of the experiment is precisely that there are no other options, and that we do not know anything about the people involved. This kind of experiment allows us to examine people’s moral intuitions, other things being equal. Incidentally, the scenario may appear far-fetched, but it isn’t. There are plenty of situations in real life, from medical emergency rooms to police or military actions, where people are suddenly confronted with the need to exercise some sort of moral calculus quickly, with few available options, and on the basis of very little information. Philosophy sometimes is a question of life and death.
The trolley exercise does not stop there. A variant of the trolley situation reveals something even more interesting about how we think about morality. Imagine now that instead of driving the trolley you are walking on a bridge, below which you see the trolley, on its tracks, about to hit the five bystanders. Now your only course of action, the only way to save the five people, is to quickly grab a large person standing near you and throw him off the bridge, thereby blocking the advancing trolley. Would you do it? (Again, no other options are available; you cannot sacrifice yourself, perhaps because your body mass is too small to stop the advancing car.) It turns out, somewhat surprisingly to philosophers, that in this version of the dilemma most people recoil from sacrificing the one for the many. This is unexpected because clearly the new course of action is not consequentialist at all. Rather, it seems to fit with some other general way of thinking about morality, perhaps a type of deontology, or rule-based ethics, similar to the ones adopted by many religions. (The Ten Commandments are the obvious example.) Maybe the rule here is “Thou shalt not kill an innocent person,” or the Kantian imperative not to use other people as means to ends. Yet that can’t be the whole story, because people who adopt a deontological morality in the bridge version of the dilemma are directly contradicting their obviously consequentialist approach in the lever version. We return to the problem created by contradictory ethical doctrines in Chapter 5.
At any rate, here is where cognitive scientists enter the picture. A group of researchers led by Michael Koenigs of the University of Iowa and Antonio Damasio of the University of Southern California in Los Angeles performed an interesting neurobiological experiment using the trolley problem. They compared normal subjects with people who had suffered a specific kind of neurological damage in the ventromedial prefrontal cortex of the brain, an area known to affect emotional reactions. After presenting both sets of subjects with both versions of the trolley dilemma, Koenigs and his colleagues discovered something very interesting about how the brain works when it is engaged in moral decision-making. There was no difference between normal subjects and neurologically damaged patients in their responses to the lever version of the dilemma: most subjects in both groups agreed that it was acceptable to pull the lever, thereby trading five lives for one. However, when faced with the bridge version of the problem, twice as many brain-damaged patients said that the utilitarian trade-off was still acceptable (that is, that it was okay to throw the big guy off the bridge) compared to the controls. What gives?
Joshua Greene, a cognitive scientist at Harvard, thinks he knows what’s going on. His group has shown that different areas of the brain are activated when someone is considering a personal versus an impersonal ethical problem—just like the difference between the bridge and lever versions of the trolley dilemma. Predictably, the bridge situation elicits a strong response from areas like the medial frontal gyrus, which is known to be associated with emotions, while the lever situation stimulates those sections of the brain known to be involved in problem-solving and abstract reasoning. The difference between the brain-damaged and normal subjects in the Koenigs study, then, is explained by the fact that the patients’ emotional circuits were impaired, so to speak.
So, does the available science favor consequentialism or deontological ethics? A philosopher would say that this is a strange question, since neurobiology can tell us how people think, but not how they should think. Indeed, a naive scientist could make the claim that the neurobiological evidence favors a consequentialist ethical philosophy over a deontological one based on the observation that when people use their reasoning faculties—and isn’t that the logical thing to do?—they “go consequentialist.” Then again, a neuroscientist with Kantian inclinations could equally reasonably point out that it is the people with incapacitated emotions—people whose brain isn’t working the way it is supposed to—who favor the consequentialist solution in the bridge version of the dilemma. You can see how simple facts, however interesting they may be, are just not enough to decide what’s the right thing to do.
Jonathan Haidt is a social psychologist who has made some intriguing observations about the question of human moral judgment. He has proposed the “mere rationalization hypothesis,” which essentially states that a lot of our moral decisions arise from evolutionarily ingrained instincts or emotions and are not ethical at all. Haidt refers to a study he conducted in which he exposed subjects to actions that caused no harm and yet were likely to provoke a strong emotional reaction. For instance, he looked at how people responded to the idea of cleaning the toilet with one’s national flag. Predictably, most people recoiled from the action, and when asked to elaborate they produced explanations that used moral terms to condemn such a use of the flag. But, Haidt argues, since these actions do not actually harm anyone, in what philosophically coherent sense can they be considered immoral? Instead, he suggests, this is one example of people rationalizing their evolutionarily or culturally ingrained emotions and dressing them up as moral when in fact they are arbitrary. According to Haidt, we should learn to distinguish valid moral judgments from those caused by our evolutionary or cultural background and make an effort to discard the latter in favor of the former.
But then the obvious question becomes: How do we tell the difference between spurious and valid moral explanations? Why doesn’t the mere rationalization hypothesis hold all the way down, so to speak, providing scientific backing for the “anything goes” idea of moral relativism? Philosopher William Fitzpatrick points out that in some cases we can clearly distinguish between evolutionary and ethical considerations, as when people make decisions that seem to be guided by moral reasoning that flies in the face of their evolutionary instincts. For instance, we may decide not to have more than two children because we are concerned about world population (thus violating the Darwinian imperative to reproduce as much as possible); or we may give up part of our time to volunteer for a humanitarian organization; or we may send a check to a charitable organization so that a child on the other side of the world will have a chance at survival, health care, or education; or, at the extreme, we may even sacrifice our own life for a cause we deem worthy (which amounts to nothing less than evolutionary suicide). None of these decisions makes sense from a purely biological standpoint, which would have us focus our efforts on two and only two things: survival and reproduction (and the first imperative is important, from the point of view of natural selection, only if it leads to the second one).
The widespread existence of human behaviors like the ones just mentioned (and many others, of course) is a real problem for any strong evolutionary theory of morality. Still, Fitzpatrick points out, such behaviors do not mean that evolution has no bearing on why we are moral animals. He articulates what he calls a “modest evolutionary explanatory thesis,” according to which our evolutionary history tells us something about why we have the tendency and capacity for moral thinking, as well as why our moral thinking is accompanied by certain emotions. We will examine what evolutionary biology has to say about morality in more depth in Chapter 4. For now, we are still left with a serious problem. Consider the following moral judgments (MJs) (again, from Fitzpatrick’s work):
MJ1: Interracial marriage is wrong.
MJ2: Homosexuality is wrong.
Until recently, both MJ1 and MJ2 were considered true in Western societies, and both are still considered valid in many non-Western societies. However, most Westerners have moved away from MJ1, and an increasing number of them have also abandoned—or at least seriously questioned—MJ2. But the moral skeptic would obviously say: Doesn’t this variety of opinions clearly show that moral judgments are culturally relative? That what is morally “true” in one place or time is not necessarily true in another cultural or temporal context? This is a crucial question. We saw in the last chapter that we cannot derive moral oughts from matters of fact (at least according to Hume). If it now turns out that we have no reason-based approach that leads us to say that something is moral or not, then the relativist might win the field after all, leading to a situation where we have no moral guidance other than our tastes and idiosyncrasies.
This is the territory of so-called metaethics—the discipline that examines the rational justifications for adopting any moral system at all (as opposed to ethics, the branch of philosophy that debates the relative merits of different views of morality and how they apply to individual cases). Metaethical issues are notoriously hard to settle, for a reason very similar to why it has proven stubbornly difficult to provide rational foundations even for mathematics and logic, the quintessential areas of pure reasoning. In the early twentieth century the top logicians in the world embarked on an epic quest to find a tight logical foundation for mathematics (a quest delightfully recounted in the graphic novel Logicomix by Apostolos Doxiadis and Christos Papadimitriou). That search for the holy grail of reason ended in defeat when Kurt Gödel (1906–1978) demonstrated (logically!) in 1931 that it just wasn’t possible to find such a foundation. Then again, Gödel’s so-called incompleteness theorem has not induced mathematicians to hang up their pencils and paper and go fishing, so maybe we too can set metaethics aside for another day without having to give up the idea of moral reasoning.
That said, it turns out that philosophers think that neither MJ1 nor MJ2 is a valid moral judgment. Moreover, they think that the following moral judgment is a valid one:
MJ3: The unmotivated killing of another human being is wrong.
Why? Fitzpatrick summarizes what a philosopher would say about MJ1 and MJ2 in this way: first, both statements fail to withstand critical reflection; second, the reason some people think that MJ1 and MJ2 are true (even though they are not) is nonmoral in nature.
Let’s start with the second criterion: it is easy to attribute an endorsement of MJ1 to racism and an endorsement of MJ2 to homophobia, explanations that can be tested independently (that is, we can tell via other means whether a person is racist or homophobic). In the case of MJ3, however, it is hard to think of a nonmoral motive for the judgment.
The first criterion is, of course, trickier, as the type of critical reflection one applies to the alleged moral judgments depends on what type of ethical system (consequentialism, deontology, and so on) one accepts more generally. Still, we could argue that both MJ1 and MJ2 are wrong for a variety of reasons: they discriminate against an arbitrary group of people (members of another race, homosexuals); we would not want to have these sorts of judgments applied to our own decisions in matters of marriage and sexual practices; or such prohibitions infringe on personal liberty in situations where no one is being harmed by individuals’ choices. MJ3, by contrast, stands up to such critical evaluation because, if we did allow random killings, we would soon not have a society to speak of, since a society is a group of individuals who band together for reasons that include increased personal safety. (Notice, of course, that the word unmotivated in MJ3 is a big caveat: it allows for the moral acceptability of killing someone in, say, self-defense, or for other reasons that need to be specified and analyzed. The point is that such reasons cannot be arbitrary and at the whim of cultural trends.)
To take stock, it would seem that moral judgment is still an area where philosophy dominates, because it is hard to justify the equation of what is natural (as in the result of evolutionary processes, or the brain’s way of connecting analytical thinking and emotional reactions) with what is right. This does not mean, of course, that philosophers have an easy time settling ethical disputes or even rationally justifying why we should be ethical to begin with. Still, science does tell us quite a bit about how our brains work when we do exercise ethical judgments, and even about how we acquired this somewhat strange idea that there are “right” and “wrong” things out there. In the next two chapters, we turn to more neurobiology and evolutionary biology to help us make sense of what it means to be a moral animal.