TWO

The Intuitive Dog and Its Rational Tail

One of the greatest truths in psychology is that the mind is divided into parts that sometimes conflict.1 To be human is to feel pulled in different directions, and to marvel—sometimes in horror—at your inability to control your own actions. The Roman poet Ovid lived at a time when people thought diseases were caused by imbalances of bile, but he knew enough psychology to have one of his characters lament: “I am dragged along by a strange new force. Desire and reason are pulling in different directions. I see the right way and approve it, but follow the wrong.”2

Ancient thinkers gave us many metaphors to understand this conflict, but few are more colorful than the one in Plato’s dialogue Timaeus. The narrator, Timaeus, explains how the gods created the universe, including us. Timaeus says that a creator god who was perfect and created only perfect things was filling his new universe with souls—and what could be more perfect in a soul than perfect rationality? So after making a large number of perfect, rational souls, the creator god decided to take a break, delegating the last bits of creation to some lesser deities, who did their best to design vessels for these souls.

The deities began by encasing the souls in that most perfect of shapes, the sphere, which explains why our heads are more or less round. But they quickly realized that these spherical heads would face difficulties and indignities as they rolled around the uneven surface of the Earth. So the gods created bodies to carry the heads, and they animated each body with a second soul—vastly inferior because it was neither rational nor immortal. This second soul contained

those dreadful but necessary disturbances: pleasure, first of all, evil’s most powerful lure; then pains, that make us run away from what is good; besides these, boldness also and fear, foolish counselors both; then also the spirit of anger hard to assuage, and expectation easily led astray. These they fused with unreasoning sense perception and all-venturing lust, and so, as was necessary, they constructed the mortal type of soul.3

Pleasures, emotions, senses … all were necessary evils. To give the divine head a bit of distance from the seething body and its “foolish counsel,” the gods invented the neck.

Most creation myths situate a tribe or ancestor at the center of creation, so it seems odd to give the honor to a mental faculty—at least until you realize that this philosopher’s myth makes philosophers look pretty darn good. It justifies their perpetual employment as the high priests of reason, or as dispassionate philosopher-kings. It’s the ultimate rationalist fantasy—the passions are and ought only to be the servants of reason, to reverse Hume’s formulation. And just in case there was any doubt about Plato’s contempt for the passions, Timaeus adds that a man who masters his emotions will live a life of reason and justice, and will be reborn into a celestial heaven of eternal happiness. A man who is mastered by his passions, however, will be reincarnated as a woman.

Western philosophy has been worshipping reason and distrusting the passions for thousands of years.4 There’s a direct line running from Plato through Immanuel Kant to Lawrence Kohlberg. I’ll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. Morality binds and blinds. The true believers produce pious fantasies that don’t match reality, and at some point somebody comes along to knock the idol off its pedestal. That was Hume’s project, with his philosophically sacrilegious claim that reason was nothing but the servant of the passions.5

Thomas Jefferson offered a more balanced model of the relationship between reason and emotion. In 1786, while serving as the American minister to France, Jefferson fell in love. Maria Cosway was a beautiful twenty-seven-year-old English artist who was introduced to Jefferson by a mutual friend. Jefferson and Cosway then spent the next few hours doing exactly what people should do to fall madly in love. They strolled around Paris on a perfect sunny day, two foreigners sharing each other’s aesthetic appreciations of a grand city. Jefferson sent messengers bearing lies to cancel his evening meetings so that he could extend the day into night. Cosway was married, although the marriage seems to have been an open marriage of convenience, and historians do not know how far the romance progressed in the weeks that followed.6 But Cosway’s husband soon insisted on taking his wife back to England, leaving Jefferson in pain.

To ease that pain, Jefferson wrote Cosway a love letter using a literary trick to cloak the impropriety of writing about love to a married woman. Jefferson wrote the letter as a dialogue between his head and his heart debating the wisdom of having pursued a “friendship” even while he knew it would have to end. Jefferson’s head is the Platonic ideal of reason, scolding the heart for having dragged them both into yet another fine mess. The heart asks the head for pity, but the head responds with a stern lecture:

Everything in this world is a matter of calculation. Advance then with caution, the balance in your hand. Put into one scale the pleasures which any object may offer; but put fairly into the other the pains which are to follow, & see which preponderates.7

After taking round after round of abuse rather passively, the heart finally rises to defend itself, and to put the head in its proper place—which is to handle problems that don’t involve people:

When nature assigned us the same habitation, she gave us over it a divided empire. To you she allotted the field of science; to me that of morals. When the circle is to be squared, or the orbit of a comet to be traced; when the arch of greatest strength, or the solid of least resistance is to be investigated, take up the problem; it is yours; nature has given me no cognizance of it. In like manner, in denying to you the feelings of sympathy, of benevolence, of gratitude, of justice, of love, of friendship, she has excluded you from their control. To these she has adapted the mechanism of the heart. Morals were too essential to the happiness of man to be risked on the incertain combinations of the head. She laid their foundation therefore in sentiment, not in science.8

So now we have three models of the mind. Plato said that reason ought to be the master, even if philosophers are the only ones who can reach a high level of mastery.9 Hume said that reason is and ought to be the servant of the passions. And Jefferson gives us a third option, in which reason and sentiment are (and ought to be) independent co-rulers, like the emperors of Rome, who divided the empire into eastern and western halves. Who is right?

WILSON’S PROPHECY

Plato, Hume, and Jefferson tried to understand the design of the human mind without the help of the most powerful tool ever devised for understanding the design of living things: Darwin’s theory of evolution. Darwin was fascinated by morality because any example of cooperation among living creatures had to be squared with his general emphasis on competition and the “survival of the fittest.”10 Darwin offered several explanations for how morality could have evolved, and many of them pointed to emotions such as sympathy, which he thought was the “foundation-stone” of the social instincts.11 He also wrote about feelings of shame and pride, which were associated with the desire for a good reputation. Darwin was a nativist about morality: he thought that natural selection gave us minds that were preloaded with moral emotions.

But as the social sciences advanced in the twentieth century, their course was altered by two waves of moralism that turned nativism into a moral offense. The first was the horror among anthropologists and others at “social Darwinism”—the idea (raised but not endorsed by Darwin) that the richest and most successful nations, races, and individuals are the fittest. Therefore, giving charity to the poor interferes with the natural progress of evolution: it allows the poor to breed.12 The claim that some races were innately superior to others was later championed by Hitler, and so if Hitler was a nativist, then all nativists were Nazis. (That conclusion is illogical, but it makes sense emotionally if you dislike nativism.)13

The second wave of moralism was the radical politics that washed over universities in America, Europe, and Latin America in the 1960s and 1970s. Radical reformers usually want to believe that human nature is a blank slate on which any utopian vision can be sketched. If evolution gave men and women different sets of desires and skills, for example, that would be an obstacle to achieving gender equality in many professions. If nativism could be used to justify existing power structures, then nativism must be wrong. (Again, this is a logical error, but this is the way righteous minds work.)

The cognitive scientist Steven Pinker was a graduate student at Harvard in the 1970s. In his 2002 book The Blank Slate: The Modern Denial of Human Nature, Pinker describes the ways scientists betrayed the values of science to maintain loyalty to the progressive movement. Scientists became “moral exhibitionists” in the lecture hall as they demonized fellow scientists and urged their students to evaluate ideas not for their truth but for their consistency with progressive ideals such as racial and gender equality.14

Nowhere was the betrayal of science more evident than in the attacks on Edward O. Wilson, a lifelong student of ants and ecosystems. In 1975 Wilson published Sociobiology: The New Synthesis. The book explored how natural selection, which indisputably shaped animal bodies, also shaped animal behavior. That wasn’t controversial, but Wilson had the audacity to suggest in his final chapter that natural selection also influenced human behavior. Wilson believed that there is such a thing as human nature, and that human nature constrains the range of what we can achieve when raising our children or designing new social institutions.

Wilson used ethics to illustrate his point. He was a professor at Harvard, along with Lawrence Kohlberg and the philosopher John Rawls, so he was well acquainted with their brand of rationalist theorizing about rights and justice.15 It seemed clear to Wilson that what the rationalists were really doing was generating clever justifications for moral intuitions that were best explained by evolution. Do people believe in human rights because such rights actually exist, like mathematical truths, sitting on a cosmic shelf next to the Pythagorean theorem just waiting to be discovered by Platonic reasoners? Or do people feel revulsion and sympathy when they read accounts of torture, and then invent a story about universal rights to help justify their feelings?

Wilson sided with Hume. He charged that what moral philosophers were really doing was fabricating justifications after “consulting the emotive centers” of their own brains.16 He predicted that the study of ethics would soon be taken out of the hands of philosophers and “biologicized,” or made to fit with the emerging science of human nature. Such a linkage of philosophy, biology, and evolution would be an example of the “new synthesis” that Wilson dreamed of, and that he later referred to as consilience—the “jumping together” of ideas to create a unified body of knowledge.17

Prophets challenge the status quo, often earning the hatred of those in power. Wilson therefore deserves to be called a prophet of moral psychology. He was harassed and excoriated in print and in public.18 He was called a fascist, which justified (for some) the charge that he was a racist, which justified (for some) the attempt to stop him from speaking in public. Protesters who tried to disrupt one of his scientific talks rushed the stage and chanted, “Racist Wilson, you can’t hide, we charge you with genocide.”19

THE EMOTIONAL NINETIES

By the time I entered graduate school, in 1987, the shooting had stopped and sociobiology had been discredited—at least, that’s the message I picked up from hearing scientists use the word as a pejorative term for the naive attempt to reduce psychology to evolution. Moral psychology was not about evolved emotions, it was about the development of reasoning and information processing.20

Yet when I looked outside of psychology, I found many wonderful books on the emotional basis of morality. I read Frans de Waal’s Good Natured: The Origins of Right and Wrong in Humans and Other Animals.21 De Waal did not claim that chimpanzees had morality; he argued only that chimps (and other apes) have most of the psychological building blocks that humans use to construct moral systems and communities. These building blocks are largely emotional, such as feelings of sympathy, fear, anger, and affection.

I also read Descartes’ Error, by the neuroscientist Antonio Damasio.22 Damasio had noticed an unusual pattern of symptoms in patients who had suffered brain damage to a specific part of the brain—the ventromedial (i.e., bottom-middle) prefrontal cortex (abbreviated vmPFC; it’s the region just behind and above the bridge of the nose). Their emotionality dropped nearly to zero. They could look at the most joyous or gruesome photographs and feel nothing. They retained full knowledge of what was right and wrong, and they showed no deficits in IQ. They even scored well on Kohlberg’s tests of moral reasoning. Yet when it came to making decisions in their personal lives and at work, they made foolish decisions or no decisions at all. They alienated their families and their employers, and their lives fell apart.

Damasio’s interpretation was that gut feelings and bodily reactions were necessary to think rationally, and that one job of the vmPFC was to integrate those gut feelings into a person’s conscious deliberations. When you weigh the advantages and disadvantages of murdering your parents … you can’t even do it, because feelings of horror come rushing in through the vmPFC.

But Damasio’s patients could think about anything, with no filtering or coloring from their emotions. With the vmPFC shut down, every option at every moment felt as good as every other. The only way to make a decision was to examine each option, weighing the pros and cons using conscious, verbal reasoning. If you’ve ever shopped for an appliance about which you have few feelings—say, a washing machine—you know how hard it can be once the number of options exceeds six or seven (which is the capacity of our short-term memory). Just imagine what your life would be like if at every moment, in every social situation, picking the right thing to do or say became like picking the best washing machine among ten options, minute after minute, day after day. You’d make foolish decisions too.

Damasio’s findings were as anti-Platonic as could be. Here were people in whom brain damage had essentially shut down communication between the rational soul and the seething passions of the body (which, unbeknownst to Plato, were not based in the heart and stomach but in the emotion areas of the brain). No more of those “dreadful but necessary disturbances,” those “foolish counselors” leading the rational soul astray. Yet the result of the separation was not the liberation of reason from the thrall of the passions. It was the shocking revelation that reasoning requires the passions. Jefferson’s model fits better: when one co-emperor is knocked out and the other tries to rule the empire by himself, he’s not up to the task.

If Jefferson’s model were correct, however, then Damasio’s patients should still have fared well in the half of life that was always ruled by the head. Yet the collapse of decision making, even in purely analytic and organizational tasks, was pervasive. The head can’t even do head stuff without the heart. So Hume’s model fit these cases best: when the master (passions) drops dead, the servant (reasoning) has neither the ability nor the desire to keep the estate running. Everything goes to ruin.

WHY ATHEISTS WON’T SELL THEIR SOULS

In 1995 I moved to the University of Virginia (UVA) to begin my first job as a professor. Moral psychology was still devoted to the study of moral reasoning. But if you looked beyond developmental psychology, Wilson’s new synthesis was beginning. A few economists, philosophers, and neuroscientists were quietly constructing an alternative approach to morality, one whose foundation was the emotions, and the emotions were assumed to have been shaped by evolution.23 These synthesizers were assisted by the rebirth of sociobiology in 1992 under a new name—evolutionary psychology.24

I read Jefferson’s letter to Cosway during my first month in Charlottesville, as part of my initiation into his cult. (Jefferson founded UVA in 1819, and here at “Mr. Jefferson’s University” we regard him as a deity.) But I had already arrived at a Jeffersonian view in which moral emotions and moral reasoning were separate processes.25 Each process could make moral judgments on its own, and they sometimes fought it out for the right to do so (figure 2.1).

In my first few years at UVA I conducted several experiments to test this dual-process model by asking people to make judgments under conditions that strengthened or weakened one of the processes. For example, social psychologists often ask people to perform tasks while carrying a heavy cognitive load, such as holding the number 7250475 in mind, or while carrying a light cognitive load, such as remembering just the number 7. If performance suffers while people are carrying the heavy load, then we can conclude that “controlled” thinking (such as conscious reasoning) is necessary for that particular task. But if people do fine on the task regardless of the load, then we can conclude that “automatic” processes (such as intuition and emotion) are sufficient for performing that task.

FIGURE 2.1. My early Jeffersonian dual-process model. Emotion and reasoning are separate paths to moral judgment, although moral judgment can sometimes lead to post hoc reasoning as well.

My question was simple: Can people make moral judgments just as well when carrying a heavy cognitive load as when carrying a light one? The answer turned out to be yes. I found no difference between conditions, no effect of cognitive load. I tried it again with different stories and got the same outcome. I tried another manipulation: I used a computer program to force some people to answer quickly, before they had time to think, and I forced other people to wait ten seconds before offering their judgment. Surely that manipulation would weaken or strengthen moral reasoning and shift the balance of power, I thought. But it didn’t.26

When I came to UVA I was certain that a Jeffersonian dual-process model was right, but I kept failing in my efforts to prove it. My tenure clock was ticking, and I was getting nervous. I had to produce a string of publications in top journals within five years or I’d be turned down for tenure and forced to leave UVA.

In the meantime, I started running studies to follow up on the moral dumbfounding I had observed a few years earlier in my dissertation interviews. I worked with a talented undergraduate, Scott Murphy. Our plan was to increase the amount of dumbfounding by having Scott play devil’s advocate rather than gentle interviewer. When Scott succeeded in stripping away arguments, would people change their judgments? Or would they become morally dumbfounded, clinging to their initial judgments while stammering and grasping for reasons?

Scott brought thirty UVA students into the lab, one at a time, for an extended interview. He explained that his job was to challenge their reasoning, no matter what they said. He then took them through five scenarios. One was Kohlberg’s Heinz dilemma: Should Heinz steal a drug to save his wife’s life? We predicted that this story would produce little dumbfounding. It pitted concerns about harm and life against concerns about law and property rights, and the story was well constructed to elicit cool, rational moral reasoning. Sure enough, Scott couldn’t whip up any dumbfounding with the Heinz story. People offered good reasons for their answers, and Scott was not able to get them to abandon principles such as “Life is more important than property.”

We also chose two scenarios that played more directly on gut feelings. In the “roach juice” scenario, Scott opened a small can of apple juice, poured it into a new plastic cup, and asked the subject to take a sip. Everyone did. Then Scott brought out a white plastic box and said:

I have here in this container a sterilized cockroach. We bought some cockroaches from a laboratory supply company. The roaches were raised in a clean environment. But just to be certain, we sterilized the roaches again in an autoclave, which heats everything so hot that no germs can survive. I’m going to dip this cockroach into the juice, like this [using a tea strainer]. Now, would you take a sip?

In the second scenario, Scott offered subjects $2 if they would sign a piece of paper that said: I, ________, hereby sell my soul, after my death, to Scott Murphy, for the sum of $2. There was a line for a signature, and below the line was this note: This form is part of a psychology experiment. It is NOT a legal or binding contract, in any way.27 Scott also told them they could rip up the paper as soon as they signed it, and they’d still get their $2.

Only 23 percent of subjects were willing to sign the paper without any goading from Scott. We were a bit surprised to find that 37 percent were willing to take a sip of the roach juice.28 In these cases, Scott couldn’t play devil’s advocate.

For the majorities who said no, however, Scott asked them to explain their reasons and did his best to challenge those reasons. Scott convinced an extra 10 percent to sip the juice, and an extra 17 percent to sign the soul-selling paper. But most people in both scenarios clung to their initial refusal, even though many of them could not generate good reasons. A few people confessed that they were atheists, didn’t believe in souls, and yet still felt uncomfortable about signing.

Here too there wasn’t much dumbfounding. People felt that it was ultimately their own choice whether or not to drink the juice or sign the paper, so most subjects seemed comfortable saying, “I just don’t want to do it, even though I can’t give you a reason.”

The main point of the study was to examine responses to two harmless taboo violations. We wanted to know if the moral judgment of disturbing but harmless events would look more like judgments in the Heinz task (closely linked to reasoning), or like those in the roach juice and soul-selling tasks (where people readily confessed that they were following gut feelings). Here’s one story we used:

Julie and Mark, who are sister and brother, are traveling together in France. They are both on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie is already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy it, but they decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. So what do you think about this? Was it wrong for them to have sex?

In the other harmless-taboo story, Jennifer works in a hospital pathology lab. She's a vegetarian for moral reasons—she thinks it's wrong to kill animals. But one night she has to incinerate a fresh human cadaver, and she thinks it's a waste to throw away perfectly edible flesh. So she cuts off a piece of flesh and takes it home. Then she cooks it and eats it.

We knew these stories were disgusting, and we expected that they’d trigger immediate moral condemnation. Only 20 percent of subjects said it was OK for Julie and Mark to have sex, and only 13 percent said it was OK for Jennifer to eat part of a cadaver. But when Scott asked people to explain their judgments and then challenged those explanations, he found exactly the Humean pattern that we had predicted. In these harmless-taboo scenarios, people generated far more reasons and discarded far more reasons than in any of the other scenarios. They seemed to be flailing around, throwing out reason after reason, and rarely changing their minds when Scott proved that their latest reason was not relevant. Here is the transcript of one interview about the incest story:

EXPERIMENTER: So what do you think about this, was it wrong for Julie and Mark to have sex?

SUBJECT: Yeah, I think it’s totally wrong to have sex. You know, because I’m pretty religious and I just think incest is wrong anyway. But, I don’t know.

EXPERIMENTER: What’s wrong with incest, would you say?

SUBJECT: Um, the whole idea of, well, I’ve heard—I don’t even know if this is true, but in the case, if the girl did get pregnant, the kids become deformed, most of the time, in cases like that.

EXPERIMENTER: But they used a condom and birth control pills—

SUBJECT: Oh, OK. Yeah, you did say that.

EXPERIMENTER: —so there’s no way they’re going to have a kid.

SUBJECT: Well, I guess the safest sex is abstinence, but, um, uh … um, I don’t know, I just think that’s wrong. I don’t know, what did you ask me?

EXPERIMENTER: Was it wrong for them to have sex?

SUBJECT: Yeah, I think it’s wrong.

EXPERIMENTER: And I’m trying to find out why, what you think is wrong with it.

SUBJECT: OK, um … well … let’s see, let me think about this. Um—how old were they?

EXPERIMENTER: They were college age, around 20 or so.

SUBJECT: Oh, oh [looks disappointed]. I don’t know, I just … it’s just not something you’re brought up to do. It’s just not—well, I mean I wasn’t. I assume most people aren’t [laughs]. I just think that you shouldn’t—I don’t—I guess my reason is, um … just that, um … you’re not brought up to it. You don’t see it. It’s not, um—I don’t think it’s accepted. That’s pretty much it.

EXPERIMENTER: You wouldn’t say anything you’re not brought up to see is wrong, would you? For example, if you’re not brought up to see women working outside the home, would you say that makes it wrong for women to work?

SUBJECT: Um … well … oh, gosh. This is hard. I really—um, I mean, there’s just no way I could change my mind but I just don’t know how to—how to show what I’m feeling, what I feel about it. It’s crazy!29

In this transcript and in many others, it’s obvious that people were making a moral judgment immediately and emotionally. Reasoning was merely the servant of the passions, and when the servant failed to find any good arguments, the master did not change his mind. We quantified some of the behaviors that seemed most indicative of being morally dumbfounded, and these analyses showed big differences between the way people responded to the harmless-taboo scenarios compared to the Heinz dilemma.30

These results supported Hume, not Jefferson or Plato. People made moral judgments quickly and emotionally. Moral reasoning was mostly just a post hoc search for reasons to justify the judgments people had already made. But were these judgments representative of moral judgment in general? I had to write some bizarre stories to give people these flashes of moral intuition that they could not easily explain. That can’t be how most of our thinking works, can it?

“SEEING-THAT” VERSUS “REASONING-WHY”

Two years before Scott and I ran the dumbfounding studies I read an extraordinary book that psychologists rarely mention: Patterns, Thinking, and Cognition, by Howard Margolis, a professor of public policy at the University of Chicago. Margolis was trying to understand why people’s beliefs about political issues are often so poorly connected to objective facts, and he hoped that cognitive science could solve the puzzle. Yet Margolis was turned off by the approaches to thinking that were prevalent in the 1980s, most of which used the metaphor of the mind as a computer.

Margolis thought that a better model for studying higher cognition, such as political thinking, was lower cognition, such as vision, which works largely by rapid unconscious pattern matching. He began his book with an investigation of perceptual illusions, such as the well-known Müller-Lyer illusion (figure 2.2), in which one line continues to look longer than the other even after you know that the two lines are the same length. He then moved on to logic problems such as the Wason 4-card task, in which you are shown four cards on a table.31 You know that each card comes from a deck in which all cards have a letter on one side and a number on the other. Your task is to choose the smallest number of cards in figure 2.3 that you must turn over to decide whether this rule is true: “If there is a vowel on one side, then there is an even number on the other side.”

Everyone immediately sees that you have to turn over the E, but many people also say you need to turn over the 4. They seem to be doing simple-minded pattern matching: There was a vowel and an even number in the question, so let’s turn over the vowel and the even number. Many people resist the explanation of the simple logic behind the task: turning over the 4 and finding a B on the other side would not invalidate the rule, whereas turning over the 7 and finding a U would do it, so you need to turn over the E and the 7.
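The falsification logic behind the task can be sketched in a few lines of code (my illustration, not anything from Wason or Margolis): a card needs to be turned over only if its hidden side could falsify the rule "if vowel, then even number."

```python
# Sketch of the Wason selection logic: turn a card over only if
# what's on its hidden side could falsify the rule
# "if there is a vowel on one side, there is an even number on the other."

VOWELS = set("AEIOU")

def must_turn(face: str) -> bool:
    """True if the card showing `face` could falsify the rule."""
    if face.isalpha():
        # A vowel might hide an odd number; a consonant can't falsify the rule.
        return face.upper() in VOWELS
    # An odd number might hide a vowel; an even number can't falsify the rule.
    return int(face) % 2 == 1

cards = ["E", "B", "4", "7"]
print([c for c in cards if must_turn(c)])  # ['E', '7']
```

Running this on the four visible faces picks out exactly E and 7: the 4 drops out because no letter on its hidden side, vowel or consonant, could make the rule false.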

FIGURE 2.2. The Müller-Lyer illusion.

FIGURE 2.3. The Wason 4-card task. Which card(s) must you turn over to verify the rule that if a card shows a vowel on one face, then it has an even number on the other?

When people are told up front what the answer is and asked to explain why that answer is correct, they can do it. But amazingly, they are just as able to offer an explanation, and just as confident in their reasoning, whether they are told the right answer (E and 7) or the popular but wrong answer (E and 4).32 Findings such as these led Wason to the conclusion that judgment and justification are separate processes. Margolis shared Wason’s view, summarizing the state of affairs like this:

Given the judgments (themselves produced by the non-conscious cognitive machinery in the brain, sometimes correctly, sometimes not so), human beings produce rationales they believe account for their judgments. But the rationales (on this argument) are only ex post rationalizations.33

Margolis proposed that there are two very different kinds of cognitive processes at work when we make judgments and solve problems: “seeing-that” and “reasoning-why.” “Seeing-that” is the pattern matching that brains have been doing for hundreds of millions of years. Even the simplest animals are wired to respond to certain patterns of input (such as light, or sugar) with specific behaviors (such as turning away from the light, or stopping and eating the sugary food). Animals easily learn new patterns and connect them up to their existing behaviors, which can be reconfigured into new patterns as well (as when an animal trainer teaches an elephant a new trick).

As brains get larger and more complex, animals begin to show more cognitive sophistication—making choices (such as where to forage today, or when to fly south) and judgments (such as whether a subordinate chimpanzee showed properly deferential behavior). But in all cases, the basic psychology is pattern matching. It’s the sort of rapid, automatic, and effortless processing that drives our perceptions in the Müller-Lyer illusion. You can’t choose whether or not to see the illusion; you’re just “seeing-that” one line is longer than the other. Margolis also called this kind of thinking “intuitive.”

“Reasoning-why,” in contrast, is the process “by which we describe how we think we reached a judgment, or how we think another person could reach that judgment.”34 “Reasoning-why” can occur only for creatures that have language and a need to explain themselves to other creatures. “Reasoning-why” is not automatic; it’s conscious, it sometimes feels like work, and it’s easily disrupted by cognitive load. Kohlberg had convinced moral psychologists to study “reasoning-why” and to neglect “seeing-that.”35

Margolis’s ideas were a perfect fit with everything I had seen in my studies: rapid intuitive judgment (“That’s just wrong!”) followed by slow and sometimes tortuous justifications (“Well, their two methods of birth control might fail, and the kids they produce might be deformed”). The intuition launched the reasoning, but the intuition did not depend on the success or failure of the reasoning. My harmless-taboo stories were like Müller-Lyer illusions: they still felt wrong, even after you had measured the amount of harm involved and agreed that the stories were harmless.

Margolis’s theory worked just as well for the easier dilemmas. In the Heinz scenario, most people intuitively “see that” Heinz should steal the drug (his wife’s life is at stake), but in this case it’s easy to find reasons. Kohlberg had constructed the dilemma to make good reasons available on both sides, so nobody gets dumbfounded.

The roach juice and soul-selling dilemmas instantly make people “see that” they want to refuse, but they don’t feel much conversational pressure to offer reasons. Not wanting to drink roach-tainted juice isn’t a moral judgment, it’s a personal preference. Saying “Because I don’t want to” is a perfectly acceptable justification for one’s subjective preferences. Yet moral judgments are not subjective statements; they are claims that somebody did something wrong. I can’t call for the community to punish you simply because I don’t like what you’re doing. I have to point to something outside of my own preferences, and that pointing is our moral reasoning. We do moral reasoning not to reconstruct the actual reasons why we ourselves came to a judgment; we reason to find the best possible reasons why somebody else ought to join us in our judgment.36

THE RIDER AND THE ELEPHANT

It took me years to appreciate fully the implications of Margolis’s ideas. Part of the problem was that my thinking was entrenched in a prevalent but useless dichotomy between cognition and emotion. After failing repeatedly to get cognition to act independently of emotion, I began to realize that the dichotomy made no sense. Cognition just refers to information processing, which includes higher cognition (such as conscious reasoning) as well as lower cognition (such as visual perception and memory retrieval).37

Emotion is a bit harder to define. Emotions were long thought to be dumb and visceral, but beginning in the 1980s, scientists increasingly recognized that emotions were filled with cognition. Emotions occur in steps, the first of which is to appraise something that just happened based on whether it advanced or hindered your goals.38 These appraisals are a kind of information processing; they are cognitions. When an appraisal program detects particular input patterns, it launches a set of changes in your brain that prepare you to respond appropriately. For example, if you hear someone running up behind you on a dark street, your fear system detects a threat and triggers your sympathetic nervous system, firing up the fight-or-flight response, cranking up your heart rate, and widening your pupils to help you take in more information.

Emotions are not dumb. Damasio’s patients made terrible decisions because they were deprived of emotional input into their decision making. Emotions are a kind of information processing.39 Contrasting emotion with cognition is therefore as pointless as contrasting rain with weather, or cars with vehicles.

Margolis helped me ditch the emotion-cognition contrast. His work helped me see that moral judgment is a cognitive process, as are all forms of judgment. The crucial distinction is really between two different kinds of cognition: intuition and reasoning. Moral emotions are one type of moral intuition, but most moral intuitions are more subtle; they don’t rise to the level of emotions.40 The next time you read a newspaper or drive a car, notice the many tiny flashes of condemnation that flit through your consciousness. Is each such flash an emotion? Or ask yourself whether it is better to save the lives of five strangers or one (assuming all else is equal). Do you need an emotion to tell you to go for the five? Do you need reasoning? No, you just see, instantly, that five is better than one. Intuition is the best word to describe the dozens or hundreds of rapid, effortless moral judgments and decisions that we all make every day. Only a few of these intuitions come to us embedded in full-blown emotions.

In The Happiness Hypothesis, I called these two kinds of cognition the rider (controlled processes, including “reasoning-why”) and the elephant (automatic processes, including emotion, intuition, and all forms of “seeing-that”).41 I chose an elephant rather than a horse because elephants are so much bigger—and smarter—than horses. Automatic processes run the human mind, just as they have been running animal minds for 500 million years, so they’re very good at what they do, like software that has been improved through thousands of product cycles. When human beings evolved the capacity for language and reasoning at some point in the last million years, the brain did not rewire itself to hand over the reins to a new and inexperienced charioteer. Rather, the rider (language-based reasoning) evolved because it did something useful for the elephant.

The rider can do several useful things. It can see further into the future (because we can examine alternative scenarios in our heads) and therefore it can help the elephant make better decisions in the present. It can learn new skills and master new technologies, which can be deployed to help the elephant reach its goals and sidestep disasters. And, most important, the rider acts as the spokesman for the elephant, even though it doesn’t necessarily know what the elephant is really thinking. The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. Once human beings developed language and began to use it to gossip about each other, it became extremely valuable for elephants to carry around on their backs a full-time public relations firm.42

I didn’t have the rider and elephant metaphor back in the 1990s, but once I stopped thinking about emotion versus cognition and started thinking about intuition versus reasoning, everything fell into place. I took my old Jeffersonian dual-process model (figure 2.1) and made two big changes. First, I weakened the arrow from reasoning to judgment, demoting it to a dotted line (link 5 in figure 2.4). The dots mean that independently reasoned judgment is possible in theory but rare in practice. This simple change converted the model into a Humean model in which intuition (rather than passion) is the main cause of moral judgment (link 1), and then reasoning typically follows that judgment (link 2) to construct post hoc justifications. Reason is the servant of the intuitions. The rider was put there in the first place to serve the elephant.

I also wanted to capture the social nature of moral judgment. Moral talk serves a variety of strategic purposes such as managing your reputation, building alliances, and recruiting bystanders to support your side in the disputes that are so common in daily life. I wanted to go beyond the first judgments people make when they hear some juicy gossip or witness some surprising event. I wanted my model to capture the give-and-take, the round after round of discussion and argumentation that sometimes leads people to change their minds.

FIGURE 2.4. The social intuitionist model. Intuitions come first and reasoning is usually produced after a judgment is made, in order to influence other people. But as a discussion progresses, the reasons given by other people sometimes change our intuitions and judgments. (From Haidt 2001, p. 815. Published by the American Psychological Association. Adapted with permission.)

We make our first judgments rapidly, and we are dreadful at seeking out evidence that might disconfirm those initial judgments.43 Yet friends can do for us what we cannot do for ourselves: they can challenge us, giving us reasons and arguments (link 3) that sometimes trigger new intuitions, thereby making it possible for us to change our minds. We occasionally do this when mulling a problem by ourselves, suddenly seeing things in a new light or from a new perspective (to use two visual metaphors). Link 6 in the model represents this process of private reflection. The line is dotted because this process doesn’t seem to happen very often.44 For most of us, it’s not every day or even every month that we change our mind about a moral issue without any prompting from anyone else.

Far more common than such private mind changing is social influence. Other people influence us constantly just by revealing that they like or dislike somebody. That form of influence is link 4, the social persuasion link. Many of us believe that we follow an inner moral compass, but the history of social psychology richly demonstrates that other people exert a powerful force, able to make cruelty seem acceptable45 and altruism seem embarrassing,46 without giving us any reasons or arguments.

Because of these two changes I called my theory the “social intuitionist model of moral judgment,” and I published it in 2001 in an article titled “The Emotional Dog and Its Rational Tail.”47 In hindsight I wish I’d called the dog “intuitive” because psychologists who are still entrenched in the emotion-versus-cognition dichotomy often assume from the title that I’m saying that morality is always driven by emotion. Then they prove that cognition matters, and think they have found evidence against intuitionism.48 But intuitions (including emotional responses) are a kind of cognition. They’re just not a kind of reasoning.

HOW TO WIN AN ARGUMENT

The social intuitionist model offers an explanation of why moral and political arguments are so frustrating: because moral reasons are the tail wagged by the intuitive dog. A dog’s tail wags to communicate. You can’t make a dog happy by forcibly wagging its tail. And you can’t change people’s minds by utterly refuting their arguments. Hume diagnosed the problem long ago:

And as reasoning is not the source, whence either disputant derives his tenets; it is in vain to expect, that any logic, which speaks not to the affections, will ever engage him to embrace sounder principles.49

If you want to change people’s minds, you’ve got to talk to their elephants. You’ve got to use links 3 and 4 of the social intuitionist model to elicit new intuitions, not new rationales.

Dale Carnegie was one of the greatest elephant-whisperers of all time. In his classic book How to Win Friends and Influence People, Carnegie repeatedly urged readers to avoid direct confrontations. Instead he advised people to “begin in a friendly way,” to “smile,” to “be a good listener,” and to “never say ‘you’re wrong.’ ” The persuader’s goal should be to convey respect, warmth, and an openness to dialogue before stating one’s own case. Carnegie was urging readers to use link 4, the social persuasion link, to prepare the ground before attempting to use link 3, the reasoned persuasion link.

From my description of Carnegie so far, you might think his techniques are superficial and manipulative, appropriate only for salespeople. But Carnegie was in fact a brilliant moral psychologist who grasped one of the deepest truths about conflict. He used a quotation from Henry Ford to express it: “If there is any one secret of success it lies in the ability to get the other person’s point of view and see things from their angle as well as your own.”50

It’s such an obvious point, yet few of us apply it in moral and political arguments because our righteous minds so readily shift into combat mode. The rider and the elephant work together smoothly to fend off attacks and lob rhetorical grenades of our own. The performance may impress our friends and show allies that we are committed members of the team, but no matter how good our logic, it’s not going to change the minds of our opponents if they are in combat mode too. If you really want to change someone’s mind on a moral or political matter, you’ll need to see things from that person’s angle as well as your own. And if you do truly see it the other person’s way—deeply and intuitively—you might even find your own mind opening in response. Empathy is an antidote to righteousness, although it’s very difficult to empathize across a moral divide.

IN SUM

People reason and people have moral intuitions (including moral emotions), but what is the relationship among these processes? Plato believed that reason could and should be the master; Jefferson believed that the two processes were equal partners (head and heart) ruling a divided empire; Hume believed that reason was (and was only fit to be) the servant of the passions. In this chapter I tried to show that Hume was right:

• The mind is divided into parts, like a rider (controlled processes) on an elephant (automatic processes). The rider evolved to serve the elephant.

• You can see the rider serving the elephant when people are morally dumbfounded. They have strong gut feelings about what is right and wrong, and they struggle to construct post hoc justifications for those feelings. Even when the servant (reasoning) comes back empty-handed, the master (intuition) doesn’t change his judgment.

• The social intuitionist model starts with Hume’s model and makes it more social. Moral reasoning is part of our lifelong struggle to win friends and influence people. That’s why I say that “intuitions come first, strategic reasoning second.” You’ll misunderstand moral reasoning if you think about it as something people do by themselves in order to figure out the truth.

• Therefore, if you want to change someone’s mind about a moral or political issue, talk to the elephant first. If you ask people to believe something that violates their intuitions, they will devote their efforts to finding an escape hatch—a reason to doubt your argument or conclusion. They will almost always succeed.

I have tried to use intuitionism while writing this book. My goal is to change the way a diverse group of readers—liberal and conservative, secular and religious—think about morality, politics, religion, and each other. I knew that I had to take things slowly and address myself more to elephants than to riders. I couldn’t just lay out the theory in chapter 1 and then ask readers to reserve judgment until I had presented all of the supporting evidence. Rather, I decided to weave together the history of moral psychology and my own personal story to create a sense of movement from rationalism to intuitionism. I threw in historical anecdotes, quotations from the ancients, and praise of a few visionaries. I set up metaphors (such as the rider and the elephant) that will recur throughout the book. I did these things in order to “tune up” your intuitions about moral psychology. If I have failed and you have a visceral dislike of intuitionism or of me, then no amount of evidence I could present will convince you that intuitionism is correct. But if you now feel an intuitive sense that intuitionism might be true, then let’s keep going. In the next two chapters I’ll address myself more to riders than to elephants.