THREE

Elephants Rule

On February 3, 2007, shortly before lunch, I discovered that I was a chronic liar. I was at home, writing a review article on moral psychology, when my wife, Jayne, walked by my desk. In passing, she asked me not to leave dirty dishes on the counter where she prepared our baby’s food. Her request was polite but its tone added a postscript: “As I have asked you a hundred times before.”

My mouth started moving before hers had stopped. Words came out. Those words linked themselves up to say something about the baby having woken up at the same time that our elderly dog barked to ask for a walk and I’m sorry but I just put my breakfast dishes down wherever I could. In my family, caring for a hungry baby and an incontinent dog is a surefire excuse, so I was acquitted.

Jayne left the room and I continued working. I was writing about the three basic principles of moral psychology.1 The first principle is Intuitions come first, strategic reasoning second. That’s a six-word summary of the social intuitionist model.2 To illustrate the principle, I described a study I did with Thalia Wheatley, who is now a professor at Dartmouth College.3 Back when Thalia was a grad student at UVA, she had learned how to hypnotize people, and she came up with a clever way to test the social intuitionist model. Thalia hypnotized people to feel a flash of disgust whenever they saw a certain word (take for half of the subjects; often for the others).4 While they were still in a trance, Thalia instructed them that they would not be able to remember anything she had told them, and then she brought them out of the trance.

Once they were fully awake, we asked them to fill out a questionnaire packet in which they had to judge six short stories about moral violations. For each story, half of the subjects read a version that had their hypnotic code word embedded in it. For example, one story was about a congressman who claims to fight corruption, yet “takes bribes from the tobacco lobby.” The other subjects read a version that was identical except for a few words (the congressman is “often bribed by the tobacco lobby”). On average, subjects judged each of the six stories to be more disgusting and morally wrong when their code word was embedded in the story. That supported the social intuitionist model. By giving people a little artificial flash of negativity while they were reading the story, without giving them any new information, we made their moral judgments more severe.

The real surprise, though, came with a seventh story we tacked on almost as an afterthought, a story that contained no moral violation of any kind. It was about a student council president named Dan who is in charge of scheduling discussions between students and faculty. Half of our subjects read that Dan “tries to take topics that appeal to both professors and students in order to stimulate discussion.” The other half read the same story except that Dan “often picks topics” that appeal to professors and students. We added this story to demonstrate that there is a limit to the power of intuition. We predicted that subjects who felt a flash of disgust while reading this story would have to overrule their gut feelings. To condemn Dan would be bizarre.

Most of our subjects did indeed say that Dan’s actions were fine. But a third of the subjects who had found their code word in the story still followed their gut feelings and condemned Dan. They said that what he did was wrong, sometimes very wrong. Fortunately, we had asked everyone to write a sentence or two explaining their judgments, and we found gems such as “Dan is a popularity-seeking snob” and “I don’t know, it just seems like he’s up to something.” These subjects made up absurd reasons to justify judgments that they had made on the basis of gut feelings—feelings Thalia had implanted with hypnosis.

So there I was at my desk, writing about how people automatically fabricate justifications of their gut feelings, when suddenly I realized that I had just done the same thing with my wife. I disliked being criticized, and I had felt a flash of negativity by the time Jayne had gotten to her third word (“Can you not …”). Even before I knew why she was criticizing me, I knew I disagreed with her (because intuitions come first). The instant I knew the content of the criticism (“… leave dirty dishes on the …”), my inner lawyer went to work searching for an excuse (strategic reasoning second). It’s true that I had eaten breakfast, given Max his first bottle, and let Andy out for his first walk, but these events had all happened at separate times. Only when my wife criticized me did I merge them into a composite image of a harried father with too few hands, and I created this fabrication by the time she had completed her one-sentence criticism (“… counter where I make baby food?”). I then lied so quickly and convincingly that my wife and I both believed me.

I had long teased my wife for altering stories to make them more dramatic when she told them to friends, but it took twenty years of studying moral psychology to see that I altered my stories too. I finally understood—not just cerebrally but intuitively and with an open heart—the admonitions of sages from so many eras and cultures warning us about self-righteousness. I’ve already quoted Jesus (on seeing “the speck in your neighbor’s eye”). Here’s the same idea from Buddha:

It is easy to see the faults of others, but difficult to see one’s own faults. One shows the faults of others like chaff winnowed in the wind, but one conceals one’s own faults as a cunning gambler conceals his dice.5

Jesus and Buddha were right, and in this chapter and the next one I’ll show you how our automatic self-righteousness works. It begins with rapid and compelling intuitions (that’s link 1 in the social intuitionist model), and it continues on with post hoc reasoning, done for socially strategic purposes (links 2 and 3). Here are six major research findings that collectively illustrate the first half of the first principle: Intuitions Come First. (In the next chapter I’ll give evidence for the second half—Strategic Reasoning Second.) Elephants rule, although they are sometimes open to persuasion by riders.

1. BRAINS EVALUATE INSTANTLY AND CONSTANTLY

Brains evaluate everything in terms of potential threat or benefit to the self, and then adjust behavior to get more of the good stuff and less of the bad.6 Animal brains make such appraisals thousands of times a day with no need for conscious reasoning, all in order to optimize the brain’s answer to the fundamental question of animal life: Approach or avoid?

In the 1890s Wilhelm Wundt, the founder of experimental psychology, formulated the doctrine of “affective primacy.”7 Affect refers to small flashes of positive or negative feeling that prepare us to approach or avoid something. Every emotion (such as happiness or disgust) includes an affective reaction, but most of our affective reactions are too fleeting to be called emotions (for example, the subtle feelings you get just from reading the words happiness and disgust).

Wundt said that affective reactions are so tightly integrated with perception that we find ourselves liking or disliking something the instant we notice it, sometimes even before we know what it is.8 These flashes occur so rapidly that they precede all other thoughts about the thing we’re looking at. You can feel affective primacy in action the next time you run into someone you haven’t seen in many years. You’ll usually know within a second or two whether you liked or disliked the person, but it can take much longer to remember who the person is or how you know each other.

In 1980 social psychologist Robert Zajonc (the name rhymes with “science”) revived Wundt’s long-forgotten notion of affective primacy. Zajonc was fed up with the common view among psychologists at the time that people are cool, rational information processors who first perceive and categorize objects and then react to them. He did a number of ingenious experiments that asked people to rate arbitrary things such as Japanese pictograms, words in a made-up language, and geometric shapes. It may seem odd to ask people to rate how much they like foreign words and meaningless squiggles, but people can do it because almost everything we look at triggers a tiny flash of affect. More important, Zajonc was able to make people like any word or image more just by showing it to them several times.9 The brain tags familiar things as good things. Zajonc called this the “mere exposure effect,” and it is a basic principle of advertising.

In a landmark article, Zajonc urged psychologists to use a dual-process model in which affect or “feeling” is the first process.10 It has primacy both because it happens first (it is part of perception and is therefore extremely fast) and because it is more powerful (it is closely linked to motivation, and therefore it strongly influences behavior). The second process—thinking—is an evolutionarily newer ability, rooted in language and not closely related to motivation. In other words, thinking is the rider; affect is the elephant. The thinking system is not equipped to lead—it simply doesn’t have the power to make things happen—but it can be a useful advisor.

Zajonc said that thinking could work independently of feeling in theory, but in practice affective reactions are so fast and compelling that they act like blinders on a horse: they “reduce the universe of alternatives” available to later thinking.11 The rider is an attentive servant, always trying to anticipate the elephant’s next move. If the elephant leans even slightly to the left, as though preparing to take a step, the rider looks to the left and starts preparing to assist the elephant on its imminent leftward journey. The rider loses interest in everything off to the right.

2. SOCIAL AND POLITICAL JUDGMENTS ARE PARTICULARLY INTUITIVE

Here are four pairs of words. Your job is to look only at the second word in each pair and then categorize it as good or bad:

flower–happiness
hate–sunshine
love–cancer
cockroach–lonely

It’s absurdly easy, but imagine doing it on a computer, where I flash the first word in each pair for 250 milliseconds (a quarter of a second, just long enough to read it) and then immediately display the second word. In that case we’d find that it takes you longer to make your value judgment for sunshine and cancer than for happiness and lonely.

This effect is called “affective priming” because the first word triggers a flash of affect that primes the mind to go one way or the other.12 It’s like getting the elephant to lean slightly to the right or the left, in anticipation of walking to the right or the left. The flash kicks in within 200 milliseconds, and it lasts for about a second beyond that if there’s no other jolt to back it up.13 If you see the second word within that brief window of time, and if the second word has the same valence, then you’ll be able to respond extra quickly because your mind is already leaning that way. But if the first word primes your mind for a negative evaluation (hate) and I then show you a positive word (sunshine), it’ll take you about 250 milliseconds longer to respond because you have to undo the lean toward negativity.

So far this is just a confirmation of Zajonc’s theory about the speed and ubiquity of affect, but a big payoff came when social psychologists began using social groups as primes. Would it affect your response speed if I used photographs of black people and white people as the primes? As long as you’re not prejudiced, it won’t affect your reaction times. But if you do prejudge people implicitly (i.e., automatically and unconsciously), then those prejudgments include affective flashes, and those flashes will change your reaction times.

The most widely used measure of these implicit attitudes is the Implicit Association Test (IAT), developed by Tony Greenwald, Mahzarin Banaji, and my UVA colleague Brian Nosek.14 You can take the IAT yourself at ProjectImplicit.org. But be forewarned: it can be disturbing. You can actually feel yourself moving more slowly when you are asked to associate good things with the faces of one race rather than another. You can watch as your implicit attitude contradicts your explicit values. Most people turn out to have negative implicit associations with many social groups, such as black people, immigrants, obese people, and the elderly.

And if the elephant tends to lean away from groups such as the elderly (whom few would condemn morally), then we should certainly expect some leaning (prejudging) when people think about their political enemies. To look for such effects, my UVA colleague Jamie Morris measured the brain waves of liberals and conservatives as they read politically loaded words.15 He replaced the words flower and hate in the above example with words such as Clinton, Bush, flag, taxes, welfare, and pro-life. When partisans read these words, followed immediately by words that everyone agrees are good (sunshine) or bad (cancer), their brains sometimes revealed a conflict. Pro-life and sunshine were affectively incongruous for liberals, just as Clinton and sunshine were for conservatives. The words pro and life are both positive on their own, but part of what it means to be a partisan is that you have acquired the right set of intuitive reactions to hundreds of words and phrases. Your elephant knows which way to lean in response to terms such as pro-life, and as your elephant sways back and forth throughout the day, you find yourself liking and trusting the people around you who sway in sync with you.

The intuitive nature of political judgments is even more striking in the work of Alex Todorov, at Princeton. Todorov studies how we form impressions of people. When he began his work, there was already a lot of research showing that we judge attractive people to be smarter and more virtuous, and we are more likely to give a pretty face the benefit of any doubt.16 Juries are more likely to acquit attractive defendants, and when beautiful people are convicted, judges give them lighter sentences, on average.17 That’s normal affective primacy making everyone lean toward the defendant, which tips off their riders to interpret the evidence in a way that will support the elephant’s desire to acquit.

But Todorov found that there was more going on than just attractiveness. He collected photographs of the winners and runners-up in hundreds of elections for the U.S. Senate and the House of Representatives. He showed people the pairs of photographs from each contest with no information about political party, and he asked them to pick which person seemed more competent. He found that the candidate that people judged more competent was the one who actually won the race about two-thirds of the time.18 People’s snap judgments of the candidates’ physical attractiveness and overall likability were not as good predictors of victory, so these competence judgments were not just based on an overall feeling of positivity. We can have multiple intuitions arising simultaneously, each one processing a different kind of information.

And strangely, when Todorov forced people to make their competence judgments after flashing the pair of pictures on the screen for just a tenth of a second—not long enough to let their eyes fixate on each image—their snap judgments of competence predicted the real outcomes just as well.19 Whatever the brain is doing, it’s doing it instantly, just like when you look at the Müller-Lyer illusion.

The bottom line is that human minds, like animal minds, are constantly reacting intuitively to everything they perceive, and basing their responses on those reactions. Within the first second of seeing, hearing, or meeting another person, the elephant has already begun to lean toward or away, and that lean influences what you think and do next. Intuitions come first.20

3. OUR BODIES GUIDE OUR JUDGMENTS

One way to reach the elephant is through its trunk. The olfactory nerve carries signals about odors to the insular cortex (the insula), a region along the bottom surface of the frontal part of the brain. This part of the brain used to be known as the “gustatory cortex” because in all mammals it processes information from the nose and the tongue. It helps guide the animal toward the right foods and away from the wrong ones. But in humans, this ancient food-processing center has taken on new duties, and it now guides our taste in people. It gets more active when we see something morally fishy, particularly something disgusting, as well as garden-variety unfairness.21 If we had some sort of tiny electrode that could be threaded up through people’s noses and into their insulas, we could then control their elephants, making them steer away from whatever they were viewing at the moment when we pressed the button. We’ve got such an electrode. It’s called fart spray.

Alex Jordan, a grad student at Stanford, came up with the idea of asking people to make moral judgments while he secretly tripped their disgust alarms. He stood at a pedestrian intersection on the Stanford campus and asked passersby to fill out a short survey. It asked people to make judgments about four controversial issues, such as marriage between first cousins, or a film studio’s decision to release a documentary with a director who had tricked some people into being interviewed.

Alex stood right next to a trash can he had emptied. Before he recruited each subject, he put a new plastic liner into the metal can. Before half of the people walked up (and before they could see him), he sprayed the fart spray twice into the bag, which “perfumed” the whole intersection for a few minutes. Before other recruitments, he left the empty bag unsprayed.

Sure enough, people made harsher judgments when they were breathing in foul air.22 Other researchers have found the same effect by asking subjects to fill out questionnaires after drinking bitter versus sweet drinks.23 As my UVA colleague Jerry Clore puts it, we use “affect as information.”24 When we’re trying to decide what we think about something, we look inward, at how we’re feeling. If I’m feeling good, I must like it, and if I’m feeling anything unpleasant, that must mean I don’t like it.

You don’t even need to trigger feelings of disgust to get these effects. Simply washing your hands will do it. Chenbo Zhong at the University of Toronto has shown that subjects who are asked to wash their hands with soap before filling out questionnaires become more moralistic about issues related to moral purity (such as pornography and drug use).25 Once you’re clean, you want to keep dirty things far away.

Zhong has also shown the reverse process: immorality makes people want to get clean. People who are asked to recall their own moral transgressions, or merely to copy by hand an account of someone else’s moral transgression, find themselves thinking about cleanliness more often, and wanting more strongly to cleanse themselves.26 They are more likely to select hand wipes and other cleaning products when given a choice of consumer products to take home with them after the experiment. Zhong calls this the Macbeth effect, named for Lady Macbeth’s obsession with water and cleansing after she goads her husband into murdering King Duncan. (She goes from “A little water clears us of this deed” to “Out, damn’d spot! out, I say!”)

In other words, there’s a two-way street between our bodies and our righteous minds. Immorality makes us feel physically dirty, and cleansing ourselves can sometimes make us more concerned about guarding our moral purity. In one of the most bizarre demonstrations of this effect, Eric Helzer and David Pizarro asked students at Cornell University to fill out surveys about their political attitudes while standing near (or far from) a hand sanitizer dispenser. Those told to stand near the sanitizer became temporarily more conservative.27

Moral judgment is not a purely cerebral affair in which we weigh concerns about harm, rights, and justice. It’s a kind of rapid, automatic process more akin to the judgments animals make as they move through the world, feeling themselves drawn toward or away from various things. Moral judgment is mostly done by the elephant.

4. PSYCHOPATHS REASON BUT DON’T FEEL

Roughly one in a hundred men (and many fewer women) are psychopaths. Most are not violent, but the ones who are commit nearly half of the most serious crimes, such as serial murder, serial rape, and the killing of police officers.28 Robert Hare, a leading researcher, defines psychopathy by two sets of features. There’s the unusual stuff that psychopaths do—impulsive antisocial behavior, beginning in childhood—and there are the moral emotions that psychopaths lack. They feel no compassion, guilt, shame, or even embarrassment, which makes it easy for them to lie, and to hurt family, friends, and animals.

Psychopaths do have some emotions. When Hare asked one man if he ever felt his heart pound or stomach churn, he responded: “Of course! I’m not a robot. I really get pumped up when I have sex or when I get into a fight.”29 But psychopaths don’t show emotions that indicate that they care about other people. Psychopaths seem to live in a world of objects, some of which happen to walk around on two legs. One psychopath told Hare about a murder he committed while burglarizing an elderly man’s home:

I was rummaging around when this old geezer comes down stairs and … uh … he starts yelling and having a fucking fit … so I pop him one in the, uh, head and he still doesn’t shut up. So I give him a chop to the throat and he … like … staggers back and falls on the floor. He’s gurgling and making sounds like a stuck pig! [laughs] and he’s really getting on my fucking nerves so I … uh … boot him a few times in the head. That shut him up … I’m pretty tired by now so I grab a few beers from the fridge and turn on the TV and fall asleep. The cops woke me up [laughs].30

The ability to reason combined with a lack of moral emotions is dangerous. Psychopaths learn to say whatever gets them what they want. The serial killer Ted Bundy, for example, was a psychology major in college, where he volunteered on a crisis hotline. On those phone calls he learned how to speak to women and gain their trust. Then he raped, mutilated, and murdered at least thirty young women before being captured in 1978.

Psychopathy does not appear to be caused by poor mothering or early trauma, or to have any other nurture-based explanation. It’s a genetically heritable condition31 that creates brains that are unmoved by the needs, suffering, or dignity of others.32 The elephant doesn’t respond with the slightest lean to the gravest injustice. The rider is perfectly normal—he does strategic reasoning quite well. But the rider’s job is to serve the elephant, not to act as a moral compass.

5. BABIES FEEL BUT DON’T REASON

Psychologists used to assume that infant minds were blank slates. The world babies enter is “one great blooming, buzzing confusion,” as William James put it,33 and they spend the next few years trying to make sense of it all. But when developmental psychologists invented ways to look into infant minds, they found a great deal of writing already on that slate.

The trick was to see what surprises babies. Infants as young as two months old will look longer at an event that surprises them than at an event they were expecting. If everything is a buzzing confusion, then everything should be equally surprising. But if the infant’s mind comes already wired to interpret events in certain ways, then infants can be surprised when the world violates their expectations.

Using this trick, psychologists discovered that infants are born with some knowledge of physics and mechanics: they expect that objects will move according to Newton’s laws of motion, and they get startled when psychologists show them scenes that should be physically impossible (such as a toy car seeming to pass through a solid object). Psychologists know this because infants stare longer at impossible scenes than at similar but less magical scenes (seeing the toy car pass just behind the solid object).34 Babies seem to have some innate ability to process events in their physical world—the world of objects.

But when psychologists dug deeper, they found that infants come equipped with innate abilities to understand their social world as well. They understand things like harming and helping.35 Yale psychologists Kiley Hamlin, Karen Wynn, and Paul Bloom put on puppet shows for six- and ten-month-old infants in which a “climber” (a wooden shape with eyes glued to it) struggled to climb up a hill. Sometimes a second puppet came along and helped the climber from below. Other times, a different puppet appeared at the top of the hill and repeatedly bashed the climber down the slope.

A few minutes later, the infants saw a new puppet show. This time the climber looked back and forth between the helper puppet and the hinderer puppet, and then it decided to cozy up to the hinderer. To the infants, that was the social equivalent of seeing a car pass through a solid box; it made no sense, and the infants stared longer than when the climber decided to cozy up to the helper.36

At the end of the experiment, the helper and hinderer puppets were placed on a tray in front of the infants. The infants were much more likely to reach out for the helper. If the infants weren’t parsing their social world, they wouldn’t have cared which puppet they picked up. But they clearly wanted the nice puppet. The researchers concluded that “the capacity to evaluate individuals on the basis of their social interactions is universal and unlearned.”37

It makes sense that infants can easily learn who is nice to them. Puppies can do that too. But these findings suggest that by six months of age, infants are watching how people behave toward other people, and they are developing a preference for those who are nice rather than those who are mean. In other words, the elephant begins making something like moral judgments during infancy, long before language and reasoning arrive.

Looking at the discoveries from infants and psychopaths at the same time, it’s clear that moral intuitions emerge very early and are necessary for moral development.38 The ability to reason emerges much later, and when moral reasoning is not accompanied by moral intuition, the results are ugly.

6. AFFECTIVE REACTIONS ARE IN THE RIGHT PLACE AT THE RIGHT TIME IN THE BRAIN

Damasio’s studies of brain-damaged patients show that the emotional areas of the brain are the right places to be looking for the foundations of morality, because losing them interferes with moral competence. The case would be even stronger if these areas were active at the right times. Do they become more active just before someone makes a moral judgment or decision?

In 1999, Joshua Greene, who was then a graduate student in philosophy at Princeton, teamed up with leading neuroscientist Jonathan Cohen to see what actually happens in the brain as people make moral judgments. He studied moral dilemmas in which two major ethical principles seem to push against each other. For example, you’ve probably heard of the famous “trolley dilemma,”39 in which the only way you can stop a runaway trolley from killing five people is by pushing one person off a bridge onto the track below.

Philosophers have long disagreed about whether it’s acceptable to harm one person in order to help or save several people. Utilitarianism is the philosophical school that says you should always aim to bring about the greatest total good, even if a few people get hurt along the way, so if there’s really no other way to save those five lives, go ahead and push. Other philosophers believe that we have duties to respect the rights of individuals, and we must not harm people in our pursuit of other goals, even moral goals such as saving lives. This view is known as deontology (from the Greek word for duty). Deontologists talk about high moral principles derived and justified by careful reasoning; they would never agree that these principles are merely post hoc rationalizations of gut feelings. But Greene had a hunch that gut feelings were what often drove people to make deontological judgments, whereas utilitarian judgments were more cool and calculating.

To test his hunch, Greene wrote twenty stories that, like the trolley story, involved direct personal harm, usually done for a good reason. For example, should you throw an injured person out of a lifeboat to keep the boat from sinking and drowning the other passengers? All of these stories were written to produce a strong negative affective flash.

Greene also wrote twenty stories involving impersonal harm, such as a version of the trolley dilemma in which you save the five people by flipping a switch that diverts the trolley onto a side track, where it will kill just one person. It’s the same objective trade-off of one life for five, so some philosophers say that the two cases are morally equivalent, but from an intuitionist perspective, there’s a world of difference.40 Without that initial flash of horror (that bare-handed push), the subject is free to examine both options and choose the one that saves the most lives.

Greene brought eighteen subjects into an fMRI scanner and presented each of his stories on the screen, one at a time. Each person had to press one of two buttons to indicate whether or not it was appropriate for a person to take the course of action described—for example, to push the man or throw the switch.

The results were clear and compelling. When people read stories involving personal harm, they showed greater activity in several regions of the brain related to emotional processing. Across many stories, the relative strength of these emotional reactions predicted the average moral judgment.

Greene published this now famous study in 2001 in the journal Science.41 Since then, many other labs have put people into fMRI scanners and asked them to look at photographs of moral violations, make charitable donations, assign punishments for crimes, or play games with cheaters and cooperators.42 With few exceptions, the results tell a consistent story: the areas of the brain involved in emotional processing activate almost immediately, and high activity in these areas correlates with the kinds of moral judgments or decisions that people ultimately make.43

In an article titled “The Secret Joke of Kant’s Soul,” Greene summed up what he and many others had found.44 Greene did not know what E. O. Wilson had said about philosophers consulting their “emotive centers” when he wrote the article, but his conclusion was the same as Wilson’s:

We have strong feelings that tell us in clear and uncertain terms that some things simply cannot be done and that other things simply must be done. But it’s not obvious how to make sense of these feelings, and so we, with the help of some especially creative philosophers, make up a rationally appealing story [about rights].

This is a stunning example of consilience. Wilson had prophesied in 1975 that ethics would soon be “biologicized” and refounded as the interpretation of the activity of the “emotive centers” of the brain. When he made that prophecy he was going against the dominant views of his time. Psychologists such as Kohlberg said that the action in ethics was in reasoning, not emotion. And the political climate was harsh for people such as Wilson who dared to suggest that evolutionary thinking was a valid way to examine human behavior.

Yet in the thirty-three years between the Wilson and Greene quotes, everything changed. Scientists in many fields began recognizing the power and intelligence of automatic processes, including emotion.45 Evolutionary psychology became respectable, not in all academic departments but at least among the interdisciplinary community of scholars that now studies morality.46 In the last few years, the “new synthesis” that Wilson predicted back in 1975 has arrived.

ELEPHANTS ARE SOMETIMES OPEN TO REASON

I have argued that the Humean model (reason is a servant) fits the facts better than the Platonic model (reason could and should rule) or the Jeffersonian model (head and heart are co-emperors). But when Hume said that reason is the “slave” of the passions, I think he went too far.

A slave is never supposed to question his master, but most of us can think of times when we questioned and revised our first intuitive judgment. The rider-and-elephant metaphor works well here. The rider evolved to serve the elephant, but it’s a dignified partnership, more like a lawyer serving a client than a slave serving a master. Good lawyers do what they can to help their clients, but they sometimes refuse to go along with requests. Perhaps the request is impossible (such as finding a reason to condemn Dan, the student council president—at least for most of the people in my hypnosis experiment). Perhaps the request is self-destructive (as when the elephant wants a third piece of cake, and the rider refuses to go along and find an excuse). The elephant is far more powerful than the rider, but it is not an absolute dictator.

When does the elephant listen to reason? The main way that we change our minds on moral issues is by interacting with other people. We are terrible at seeking evidence that challenges our own beliefs, but other people do us this favor, just as we are quite good at finding errors in other people’s beliefs. When discussions are hostile, the odds of change are slight. The elephant leans away from the opponent, and the rider works frantically to rebut the opponent’s charges.

But if there is affection, admiration, or a desire to please the other person, then the elephant leans toward that person and the rider tries to find the truth in the other person’s arguments. The elephant may not often change its direction in response to objections from its own rider, but it is easily steered by the mere presence of friendly elephants (that’s the social persuasion link in the social intuitionist model) or by good arguments given to it by the riders of those friendly elephants (that’s the reasoned persuasion link).

There are even times when we change our minds on our own, with no help from other people. Sometimes we have conflicting intuitions about something, as many people do about abortion and other controversial issues. Depending on which victim, which argument, or which friend you are thinking about at a given moment, your judgment may flip back and forth as if you were looking at a Necker cube (figure 3.1).

FIGURE 3.1. A Necker cube, which your visual system can read in two conflicting ways, although not at the same time. Similarly, some moral dilemmas can be read by your righteous mind in two conflicting ways, but it’s hard to feel both intuitions at the same time.

And finally, it is possible for people simply to reason their way to a moral conclusion that contradicts their initial intuitive judgment, although I believe this process is rare. I know of only one study that has demonstrated this overruling experimentally, and its findings are revealing.

Joe Paxton and Josh Greene asked Harvard students to judge the story about Julie and Mark that I told you in chapter 2.47 They supplied half of the subjects with a really bad argument to justify consensual incest (“If Julie and Mark make love, then there is more love in the world”). They gave the other half a stronger supporting argument (about how the aversion to incest is really caused by an ancient evolutionary adaptation for avoiding birth defects in a world without contraception, but because Julie and Mark use contraception, that concern is not relevant). You’d think that Harvard students would be more persuaded by a good reason than a bad reason, but it made no difference. The elephant leaned as soon as subjects heard the story. The rider then found a way to rebut the argument (good or bad), and subjects condemned the story equally in both cases.

But Paxton and Greene added a twist to the experiment: some subjects were not allowed to respond right away. The computer forced them to wait for two minutes before they could declare their judgment about Julie and Mark. For these subjects the elephant leaned, but quick affective flashes don’t last for two minutes. While the subject was sitting there staring at the screen, the lean diminished and the rider had the time and freedom to think about the supporting argument. People who were forced to reflect on the weak argument still ended up condemning Julie and Mark—slightly more than people who got to answer immediately. But people who were forced to reflect on the good argument for two minutes actually did become substantially more tolerant toward Julie and Mark’s decision to have sex. The delay allowed the rider to think for himself and to decide upon a judgment that for many subjects was contrary to the elephant’s initial inclination.

In other words, under normal circumstances the rider takes its cue from the elephant, just as a lawyer takes instructions from a client. But if you force the two to sit around and chat for a few minutes, the elephant actually opens up to advice from the rider and arguments from outside sources. Intuitions come first, and under normal circumstances they cause us to engage in socially strategic reasoning, but there are ways to make the relationship more of a two-way street.

IN SUM

The first principle of moral psychology is Intuitions come first, strategic reasoning second. In support of this principle, I reviewed six areas of experimental research demonstrating that:

• Brains evaluate instantly and constantly (as Wundt and Zajonc said).

• Social and political judgments depend heavily on quick intuitive flashes (as Todorov and work with the IAT have shown).

• Our bodily states sometimes influence our moral judgments. Bad smells and tastes can make people more judgmental (as can anything that makes people think about purity and cleanliness).

• Psychopaths reason but don’t feel (and are severely deficient morally).

• Babies feel but don’t reason (and have the beginnings of morality).

• Affective reactions are in the right place at the right time in the brain (as shown by Damasio, Greene, and a wave of more recent studies).

Putting all six together gives us a pretty clear portrait of the rider and the elephant, and the roles they play in our righteous minds. The elephant (automatic processes) is where most of the action is in moral psychology. Reasoning matters, of course, particularly between people, and particularly when reasons trigger new intuitions. Elephants rule, but they are neither dumb nor despotic. Intuitions can be shaped by reasoning, especially when reasons are embedded in a friendly conversation or an emotionally compelling novel, movie, or news story.48

But the bottom line is that when we see or hear about the things other people do, the elephant begins to lean immediately. The rider, who is always trying to anticipate the elephant’s next move, begins looking around for a way to support such a move. When my wife reprimanded me for leaving dirty dishes on the counter, I honestly believed that I was innocent. I sent my reasoning forth to defend me and it came back with an effective legal brief in just three seconds. It’s only because I happened—at that very moment—to be writing about the nature of moral reasoning that I bothered to look closely at my lawyer’s arguments and found them to be historical fictions, based only loosely on real events.

Why do we have this weird mental architecture? As hominid brains tripled in size over the last 5 million years, developing language and a vastly improved ability to reason, why did we evolve an inner lawyer, rather than an inner judge or scientist? Wouldn’t it have been most adaptive for our ancestors to figure out the truth, the real truth about who did what and why, rather than using all that brainpower just to find evidence in support of what they wanted to believe? That depends on which you think was more important for our ancestors’ survival: truth or reputation.