2 / HYPOCRISY VS. MORALITY

Why no one should throw stones

It was the eve of Valentine’s Day, 2008, when George slipped in through the side door of one of Washington, D.C.’s most luxurious hotels. All the pieces for the night’s romantic rendezvous were in place—he had secured a lavish suite, arranged for his lover’s ride to the encounter, and made sure the champagne was on ice. He had even carved out several hours for this tryst, which, for a man of his stature, attested to its importance. George was a powerful man of powerful means. He’d spent the majority of his career in noble pursuits, fighting depravity and corruption of every type, protecting the little guy at every turn. George was under a lot of pressure; tonight, he told himself, he deserved a night off.

As he entered the grand lobby of the hotel and headed toward the elevator, his pulse quickened in anticipation of the romantic pleasures that awaited him. But George Fox, as Eliot Spitzer preferred to be called when he checked into the Mayflower Hotel, wasn’t going to meet his wife. No, that night Governor Spitzer, who himself had famously crusaded against the scourge of prostitution in New York, working tirelessly to put hundreds of johns behind bars, was in fact a john himself, and he was about to be publicly outed in a major scandal that would destroy both his image and his career virtually overnight.

What’s more, that night at the Mayflower wasn’t one single dalliance, one isolated moral lapse. No, this anti-prostitution poster boy was a regular client of the Emperor’s Club and had spent many hours—and thousands of dollars—in the company of the highest-class call girls. Here was a man who had made ethics and integrity the hallmarks of his administration, a man who loudly and repeatedly decried the decline of good old American family values. Yet Eliot Spitzer (or “Client #9,” as he was to become known) would in one month’s time be implicated in the most famous prostitution case of the decade and immortalized in history books as the very paragon of moral hypocrisy.

Of course, Spitzer is hardly an anomaly. In our society, examples of hypocrisy abound. Consider how Rush Limbaugh railed against the moral failings of drug abusers while he just happened to be racking up an impressive collection of illegal prescriptions to feed his oxycodone habit. Or how Senator Larry Craig, who very publicly admonished President Bill Clinton for being a “bad boy, a naughty boy,” during the Monica Lewinsky scandal, was caught soliciting sexual favors in men’s restroom stalls (and, by the way, he was a fierce opponent of gay rights as well).1 And it’s not just politicians. Think about how countless sports icons, from Mark McGwire to Barry Bonds, Marion Jones, and others, have condemned fellow athletes for the use of performance-enhancing drugs, only to later be implicated in juicing scandals themselves. Or how William Bennett, probably one of the best-known advocates for moral education in this country, a pundit who repeatedly and vocally extolled the benefits of self-control and restraint in his best-selling tome The Book of Virtues, was, during the many years he spent promulgating this message, a gambler extraordinaire. While his political organization, Empower America, was publishing editorials decrying lawmakers who “pollute our society with a slot machine on every corner,” he was playing stakes so high that he gambled up to $1.4 million in a single two-month period.2

As each one of these people fell from grace quickly and publicly, most of us couldn’t help wondering what they had been thinking. How could they have been such hypocrites? How could they have done the exact opposite of what they proclaimed to be virtuous behavior? These are all good questions, and they’ve been exhaustively debated. But they’re the wrong ones to ask. It’s not that these people ignored or purposely defied what they thought was right. No, it’s that what they thought was right was relative. As we’ll show in this chapter, hypocrisy isn’t so much a matter of violating your own moral beliefs as it is of shifting your moral beliefs to suit your needs and desires at any given point in time. So the right question isn’t whether Spitzer and the rest knew what they were doing was wrong. Rather, we should ask how their minds tricked them into believing, at that particular moment, that what they were doing was okay.

Now, you may still be thinking, “But everyone knows politicians and celebrities are an exceptionally questionable lot when it comes to morals. They’re not like the rest of us good folks. We certainly would never act like that, would we?” Well, that question raises an interesting point. Is hypocrisy a trait confined to a few bad seeds? Or might the potential to act hypocritically lurk in all of us? Given our theoretical view, we suspected the latter. Not because we believe human beings are inherently flawed or morally bankrupt but because, as we discussed in Chapter 1, the mind is subject to a constant and often hidden battle that frequently drives us to say or do one thing one minute, only to turn around and do the very opposite the next.

But how exactly does this battle play out? How can we experience such powerful and seismic swings in our beliefs about right and wrong? This is exactly what we were dying to find out. But there was just one problem: how to study hypocrisy in the lab. Clearly, we couldn’t just ask people whether or not they would violate their beliefs in a given situation; after all, no one thinks he or she is a hypocrite, and even if some people did, we sincerely doubted they’d be willing to admit it. No, we needed to create situations where people would have something to gain by going against their stated values—situations that provided as close an approximation of a real-world moral dilemma, with all its true temptations, as possible. So we did what we always do: we staged a situation to put people’s moral calculus to the test, to see how they’d actually behave when push came to shove. In essence, we conned them. Hey, it’s all in the name of science.

But there was one more complication: to study hypocrisy, we had to see not only how people would evaluate their own behavior but also how they would evaluate the same behaviors when they were committed by others. This meant we needed a “bad guy,” an accomplice (or what psychologists call a confederate) we could count on to do something morally questionable so that we could see how the true participants would react. Enter Alex. Alex was one of those fine students who was so intrinsically interested in the workings of the human mind that he was willing to risk the wrath of his peers by acting as the universal jerk (for lack of a better word) in our studies. He agreed to repeatedly screw over other students and let them judge him for it. Now that takes guts!

Meet your inner hypocrite

“Maybe I can get out of here early,” James thought as Carlo Valdesolo left him alone in the lab. James was there to take part in an experiment that he believed (okay, because we told him so) was examining problem-solving skills. When James arrived, Carlo sat him at a computer and told him that he would need to complete one of two tasks. One was a fun and easy photo hunt that would take only about ten minutes. The other task was a series of logic problems that Carlo warned might be difficult and might take as long as forty-five minutes to complete. But, as Carlo next explained, he, as the experimenter, needed to be kept “blind” as to which task James and the other participants would complete so that he wouldn’t bias their performance in any way (a false but believable tale; you’ll see why we needed it in a minute). “So,” Carlo went on, “certain participants are going to be randomly selected to assign themselves—and, therefore, the person going after them—to one of the two tasks. The tasks alternate, so the next person will complete whatever task the first person doesn’t.” James just happened to be one of these “deciders.” (In reality, of course, all our subjects were “deciders.”) Next Carlo casually told James that most people believe the fairest way to make a choice is to flip a coin, and handed over a computerized device that flipped a virtual coin, “just in case you want to use it.” Then Carlo left.

Now came the fun part (for us, that is): showtime on the hidden cameras. James sat back in his seat, looked at the coin flipper, looked back at his computer screen, and did what a whopping 92 percent of his fellow participants would also do—assigned himself to the quick, easy task without using the flipper. And in so doing, he knowingly doomed the next soul to forty-five minutes of drudgery. Then, just as James finished the short task, the computer posed the following question to him (which, of course, was the point of the whole experiment, even though he didn’t know it): “How fairly, on a scale ranging from not at all to very much, did you act in the assignment procedure?”

It’s a simple but telling question, as it requires people to evaluate the rightness of their actions on a very fundamental dimension—fairness. When we tallied the results, we found that the people who assigned themselves the easy task, like James, rated their actions on average somewhere near the middle—they believed their behavior to be not completely fair but not terribly egregious either. Simply put, they believed taking the easy task at someone else’s expense was a somewhat acceptable thing to do.3

“Okay,” you might be wondering, “so what? Maybe most people don’t see this behavior as such a bad thing. That doesn’t make them hypocrites.” But wait, we weren’t done yet. Soon it was Jack’s turn. Jack also was there to take part in a study that was purportedly about problem solving. This time, however, we made one important change to the experiment. Carlo told Jack that he wouldn’t be solving any problems. Instead, his job was to provide feedback on the experiment and problem-solving tasks as an observer. Jack, then, was to surreptitiously watch (via webcam on his computer) as another person went through that same procedure James had just completed. That meant he’d be able to see and hear everything that happened in the session, including whether the person flipped the virtual coin or just took the easy task for him- or herself. Then Jack would be asked his opinions about the whole process. Simple enough.

Jack readily agreed to participate, enjoying the idea of playing the somewhat stealthy role of the “secret watcher.” At this point, Alex, our universal “bad guy,” entered the room. Jack watched and listened as Alex received his instructions from Carlo. They were the same as before. Alex was told about the two tasks, and that he was selected to be the decider. He was presented with the virtual coin flipper and then left alone. Jack then watched as Alex looked at the flipping device, shook his head slightly, turned back to his computer, and assigned himself the preferable task. Next Jack’s computer stopped showing what Alex was doing in the other room and asked for Jack’s feedback on the experimental procedures, including his opinion of how fairly Alex acted. This part of the experiment was repeated forty-five more times, all with different “Jacks.”

In this version, ratings were not so charitable. Jack and the other “watchers” universally condemned Alex for choosing the good option for himself. To them, the decision was completely unfair and immoral, and even colored their opinion about poor Alex himself. Jack wasn’t the only one who gave Alex a dirty look when passing him in the hallway after the experiment; one woman even stopped to lean in, look disparagingly at him, and whisper, “I know what you did.” Alex was shunned, a moral outcast. Good thing for him he was graduating soon.

Now, remember, in both of the situations we posed, the same decision occurred: one person chose to assign himself the preferable task at another’s expense rather than risking a coin flip. The only difference is who was judging the choice: the person who made it or an outside observer. Yet that was enough to produce wildly different answers to the question of fairness. If the scales of morality were fixed, this shouldn’t happen—the answers should be the same regardless of whether people were judging themselves or someone else. An act of cheating should be dishonest, an act of selfishness should be selfish, no matter who committed it. The “badness” of a transgression shouldn’t depend on the identity of the transgressor, right? But this is not what happened. People judged the selfish act as far less morally reprehensible when they committed it than when someone else (Alex) did. And it wasn’t that one group simply had higher moral standards than the other—we assigned students to the two conditions randomly, as we do in all our experiments, to control for this type of complication. Here, then, we had the very picture of hypocrisy, among the most normal of people.

Now, it’s true that sticking someone with thirty-five extra minutes of work isn’t exactly a sin on the scale of cheating on one’s wife with a high-class hooker. Still, these results tell us a great deal about the nature of hypocrisy and why it’s so easy for any of us to fall into its grip. First, they show that our judgments of what is a morally acceptable action seem to be quite fluid. Second, they tell us that our short-term impulses for rewards in the moment—whether those rewards are a night of uninhibited passion with a stranger or getting out of a tedious lab experiment in time for happy hour—can temporarily squelch the voice reminding us about the benefits of a solid reputation in the long term. It’s not that we silence this voice purposely, or even consciously; it’s a result of the ongoing battle we’ve been talking about between our short-term interests and our long-term ones. When we act hypocritically, then, it’s often not that we’re ignoring or deliberately disregarding our beliefs and morals; it’s merely that our short-term concerns have momentarily triumphed. That’s exactly what happened in this experiment. The people who judged themselves more leniently for taking the easy task weren’t aware that they were allowing their minds to adjust their beliefs about right and wrong to serve their immediate interests. It’s just that when our inner grasshopper—our desire for short-term rewards—wins us over, we’re very good at rationalizing our actions, tricking ourselves into believing that what we did wasn’t wrong.

At this point, you may be wondering what happened to the mental mechanisms of the ant, the ones that are supposed to protect us from being socially ostracized by steering us toward fairness and honesty. We had the same question. If hypocrisy were allowed to run completely unfettered, how could we ever trust anyone’s judgments or even our own? Selfishness would reign, stable relationships would be impossible to sustain, and our social order would essentially fall apart. So the mechanisms of the ant must be working to some extent, trying to put the brakes on shortsighted, self-serving judgments. In the case of hypocrisy, we figured those mechanisms would look a lot like guilt. The problem, though, was that with the current experiment, we couldn’t tell whether the desire to avoid the unpleasant task had trumped the guilt or whether those students simply hadn’t felt any guilt at all. To answer this question, we had to go back to the lab.

As we noted in Chapter 1, every decision we make in our lives involves a whole host of related mental processes; some we control and others we don’t. And because so many of these processes lie beneath our level of awareness, disentangling them can be a bit tricky. Still, if we wanted to uncover the actual workings of the social mind, we needed a way to isolate the systems we control from the ones we don’t. If our theories were correct, hypocrisy would be, in part at least, a function of time. We suspected that at the outset of our experiment, our participants would feel some innate, automatic impulse to be fair, especially given the long-standing importance of fairness norms for interpersonal relationships. With every passing second, though, each person’s grasshopper would work harder and harder to help him or her rationalize acting unfairly in order to win immediate gains. It would be almost as though, if you listened closely enough, you could hear the grasshopper saying, “The experiment is anonymous. The other person being screwed over wouldn’t know what was happening, so there’s nothing to lose,” as it worked to tip the scale its way. In other words, we suspected that the “hypocrisy” we observed in the experiment resulted from the mental jujitsu involved in this act of rationalization. To test this theory, all we had to do was stop the rationalizing in its tracks.

One common trick psychologists use to disentangle dueling mental processes such as these is simply to inhibit one of them. We figured that if we could hamper or even knock out the rationalizing part of the brain by keeping it busy, then we would be able to see what, if anything, the ant was up to. So we decided to have our participants memorize strings of random digits. After all, we figured, it’s more difficult to craft clever justifications when your mind is working hard to remember something.4 Here’s the way it worked. We ran the two conditions of the experiment (judging oneself and judging another) exactly as before, but with a single exception. This time half the students were presented with a different set of seven digits before each question that the computer asked them in the final task, including the question about fairness. They were told that after they answered each question, they would have to type in the seven digits that preceded it.

The point was this: In order to remember the string of digits, the participants would have to mentally rehearse the digits while they answered the question. This kept their minds occupied, so they wouldn’t have the mental energy left to devote to rationalizing away their own less than moral actions. In essence, we tied the hands of the short-term system, to see whether or not the long-term one—the one fighting on the side of fairness—was working. As it turned out, it was. When we prevented rationalization by limiting the systems of the grasshopper, the hypocrisy we’d observed earlier completely vanished. In a fascinating twist, this time people judged the act of assigning oneself the easy task without using the flipper just as morally objectionable when they committed it as when others did—there was absolutely no difference in how morality was applied.5 What this finding tells us is that we do feel in our gut that screwing over the next guy is wrong. The pangs of guilt are immediately there at the intuitive level; it’s just that our minds are very good at squashing them with reasoned excuses when it serves our short-term interests, especially when it’s unlikely that we’ll be caught.

So if the desire to avoid a mere thirty-five extra minutes of tedium was enough for most people’s rational minds to overrule their intuitive drives to be fair, it suddenly doesn’t seem so surprising that Limbaugh was able to rationalize popping oxycodone while condemning “drug addicts,” or that Spitzer told himself it was okay to plan trysts with employees of the Emperor’s Club while fighting prostitution, or that so many athletes think it’s fine to use steroids to help them win medals while at the same time decrying the problem. After all, you can’t deny that the short-term rewards of all these activities are very seductive; the incentives to rationalize away any moral qualms about behavior are all there. What’s more, as our studies and others show, when an incentive to commit an immoral act is salient, our rational minds are very good at coming up with reasons to justify it. For Spitzer, maybe it was that the pressures of the job and power that came with it entitled him to some extramarital pleasure. For those doping athletes, it might have been that the bump in pay they’d get from winning a game, a series, or a title would help them better provide for their families. As for Limbaugh and Bennett, well, addicts are the best of all at this game. Point is, the excuses our minds can come up with are many and varied. And when we try hard enough, we can convince ourselves of any of them. Considering that the same potential for hypocrisy resides in each of us, it suddenly seems a lot more perilous to start throwing stones.

The dynamics of elastic morality

The fact that hypocrisy can come so easily to any of us goes to show just how elastic our ethics and morality can be. It’s not that we don’t have any deeply held ideas or values about what is right and wrong. It’s just that these basic notions are malleable and subject to change at times. The tricky part of acting morally, then, doesn’t center on whether we can judge what’s right or wrong and act accordingly—it centers on how we judge right and wrong and on how changeable these judgments, and thereby our character, can be.

The above evidence shows pretty conclusively that our moral codes aren’t completely stable or static and can change from one situation to another. What you may not realize, however, is that sometimes they can change even for what appears to be no reason at all. Over the past decade, much research has begun to show that our morals are often shaped as much, or even more, by our emotional responses than by our so-called rational ones. Don’t believe us? Consider this example that our colleague, the psychologist Jonathan Haidt, often poses to participants in experiments:

Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. Was it okay for Mark and Julie to make love?

Assuming you’re like most people, the answer is probably a resounding no, likely accompanied by an uncomfortable visceral feeling and a look of disgust. After all, everyone knows incest is unequivocally immoral and can only lead to trouble. Yet when people are asked to explain their rationale for why Julie and Mark’s action was so repugnant, something interesting happens. Some cite the health risks associated with inbreeding, only to be reminded that the multiple forms of birth control the siblings used preclude this possibility. Others venture that the act surely will cause psychological harm to one or both of the siblings or destroy the dynamics of the family, but Haidt reminds them that the scenario ruled these possibilities out as well. Usually people keep scratching their heads, searching for a logical explanation to justify their moral outrage, but come up empty. After all, there are no objective consequences of Mark and Julie’s actions. Still, most people steadfastly maintain that this act is just wrong, even though they can’t seem to articulate exactly why it’s wrong in the current case. Even when presented with all the reasons why no harm could possibly come from this night of lovemaking, their gut aversion to the incestuous act is so powerful, they can’t shake it.

The reason this gut feeling is so strong is because it serves an important evolutionary purpose. Thousands of years ago, these innate emotional responses were all our ancestors had to guide social living. Remember that because of the way our brains evolved, the mental capacities for abstract reasoning about things like ethics constitute a relatively new ability, evolutionarily speaking. The capacity for emotion, however, is much older and existed well before we had the cognitive wherewithal to weigh the consequences of our actions. So way back when, before the capacity for reason evolved, our innate revulsion to incest was very adaptive; it protected our species from serious risk, namely, the genetic defects and diseases associated with inbreeding. Back then, sex without the risk of procreation didn’t exist. Trojan wasn’t always around to help prevent unwanted pregnancies, and oral contraception is less than a century old. So the moral revulsion you feel at the thought of sex with a sibling was, in a way, an ancient form of birth control. After all, it’s difficult to be too turned on when you’re feeling disgusted.

The point is, emotions regulated our ancestors’ social behavior and still perform this function for us today. Emotions tell us quickly and almost effortlessly what we should do in a given situation. Sometimes, as with incest, there is only one right answer. Even if a small physical pleasure were to come from the act itself, we know instinctively that it would pale in comparison to the biological risks, so the long-term system (the ant) wins easily. However, other times, as we just saw in the hypocrisy example, when both short- and long-term benefits exist (e.g., getting what we want now vs. building a reputation as fair and trustworthy), the battle of our emotions may have no clear winner but rather may shift back and forth.

It’s important to realize, though, that finding the best course of action when moral dilemmas arise isn’t always as simple as choosing to go with your gut. Even though it’s tempting for many of us to want to trust our intuition, especially in light of having just seen how so-called rational thought can lead to hypocrisy, it’s becoming clearer than ever that when it comes to moral decisions, there is no perfect strategy. Following your gut is no more foolproof than listening to reason. The following experiment illustrates why.

Derailing the moral mind

John and Ben had signed up to take part in an experiment about opinions. It sounded interesting; who doesn’t like to tell others what they think? “We’re trying to understand your views on the world,” Carlo told them when they arrived at the lab. “So you’ll simply see some questions appear on your computer screens, and all you need to do is tell us which of the presented options you think is the right or most acceptable one.” Piece of cake, thought John and Ben. Carlo continued, “We’re also collecting opinions on television shows, so before we get to the main questions, we’re going to have you watch a brief video clip and tell us what you think of it.” For John and Ben, it didn’t get any better than this. They were actually going to get paid to watch television.

Carlo sat John and Ben at their computers, gave them headphones, and ran the video clips. When the clips had finished, hypothetical situations began popping up on their screens. Most of the dilemmas were quite innocuous and neutral, such as “If you were marooned on a desert island, would you rather have a jar of peanut butter or a book of matches?” But, unbeknownst to John and Ben, the third scenario was the one that mattered:

You find yourself standing on a footbridge overlooking trolley tracks. Barreling down these tracks is a runaway trolley that, if allowed to continue unimpeded, will run over and kill five workmen who are up ahead on the tracks. Standing next to you is a rather large man. The only way to stop the trolley would be to push the large stranger off the bridge and onto the tracks, where the trolley would kill him but be stopped before it reached and killed the other five. Should you push him?

On this particular day, John answered quickly and decisively: no. For Ben, the answer was easy as well: yes. Exact same moral conundrum, completely opposite answers (and it’s not that John and Ben differed in age, background, or some other fundamental way—we’d controlled for that). Why the difference? The answer lies in that video clip.

It just so happens that the clip Ben watched right before he was presented with the moral dilemma was a Saturday Night Live comedy sketch. John, on the other hand, got stuck watching part of some dull documentary on life in remote Spanish villages. “So what?” you’re probably wondering. “Something as minor as watching television couldn’t possibly lead our intrepid participants to report such wildly different views about whether it’s morally acceptable to push someone to certain death, could it?”

Actually, yes. In fact, when we analyzed the results of the seventy-nine people like John and Ben that we ran through the experiment, those who watched the SNL skit were more than three times as likely to say they’d push the man off the bridge.6 It seems perplexing at first. After all, these clips didn’t have anything to do with weighty issues of morality or life and death. But that’s exactly the point. Watching these videos didn’t change people’s deeply held moral principles or beliefs. It did, however, change their emotional states, and that’s what matters. As we’ve said, our emotional instincts and impulses often guide our moral choices. Consequently, anything that can alter what we’re feeling has the potential to derail (no pun intended) our moral reasoning, whether we’re aware of it or not.

Turns out this is exactly what was going on with Ben and the others like him who decided to push the stranger off the bridge. They weren’t callous, coldhearted killers. Nor were they simply unfeeling logicians. It’s not that their characters were fundamentally different from those of John and the others like him; it’s just that their intuitive feelings got smacked down due to a little experimental interference.

Decades of research have shown that when we’re experiencing an emotion, it can’t help coloring all our actions and decisions—even ones that have nothing to do with what we’re feeling in the first place. You have a fight with your boss, and you come home and feel like kicking your dog (though we hope you don’t). You feel anxious about a new promotion and suddenly think your odds of contracting cancer are higher. Simply put, we all unwittingly use our emotional states as information, or cues, to guide our decisions about what’s likely to happen or what we should do.7 If we’re feeling sad, we can’t help feeling that depressing things must be just around the corner.

In the present case, those who watched the SNL skit were understandably feeling more buoyant and cheerful than those who had watched the snore-inducing documentary. As a result, the visceral negative feelings that otherwise would have been triggered by the thought of pushing another to his death were momentarily blocked. With these gut feelings held at bay, it became easier to rationally weigh the consequences of the two scenarios and conclude, quite logically, that it is morally acceptable to sacrifice one life to save five others.

To fully understand the role that emotions play in this kind of moral decision, consider what happens if we slightly change the specifics of the dilemma. In this new scenario, the runaway trolley is still barreling down the tracks. This time, however, there are two directions it can go. If left as is, the trolley will roll straight ahead and kill those five track workers. However, if you flip a rail switch, the trolley will be diverted onto a different track, where it would kill only one worker instead of five. Would this change your decision? You’re still deciding whether to sacrifice one person’s life to save five others, only now you don’t have to physically push someone onto the tracks. Would you flip the switch? In our experiment and those of many others, the answer is almost unanimously yes. Flipping the switch is judged the right thing to do. Saving five is better than saving one, period.

If that’s the case, though, then why do countless studies reveal that when confronted with the otherwise equivalent version where you have to physically knock someone off the footbridge to save the five others, the vast majority of people (assuming they haven’t just been made to feel happy)—a staggering 90 percent—believe it wrong to do so? Logically, it’s the same trade-off in numbers saved and killed. The answer, however, has nothing to do with logic. It’s much simpler: the two situations feel different. Take a moment to think of how it would feel to wrap your hands around the flesh of another living, breathing human as he teeters perilously at the edge of a high bridge, to see the fear in that person’s eyes as he struggles fruitlessly to escape your grip. Assuming you don’t have psychopathic tendencies and aren’t smiling right about now, that pit you feel in your gut when thinking about shoving the guy, even to save five others, results from the intuitive systems of the ant screaming, “Don’t do it!” For most of us, this impulse usually wins.

Human minds are programmed to have an innate aversion to inflicting harm on another (unless the person poses a threat), and it is precisely this aversion or sense of horror that usually prevents most people from choosing to push the stranger off the footbridge, even though it might make logical sense to do so. In this instance, the systems of the ant, on the intuitive level at least, are on steroids because, evolutionarily speaking, causing intentional harm to an innocent person is a big no-no. Hurting others outside of war is almost never good for a person’s reputation and thus threatens our long-term survival. So considering pushing someone off a bridge, even for a good reason, makes us feel quite uneasy. However, in the switch version of the dilemma, although the trade-off between life and death is quantitatively the same, the action in question doesn’t alarm the ant to the same degree. Imagining throwing a switch doesn’t feel nearly as awful on a gut level as pushing someone to his demise, even if the results are the same.

Recent research in neuroimaging supports the view that the decision about whether or not to actively push the man off the bridge is guided by intuitive emotional responses, whereas the decision about whether or not to flip the switch is more grounded in conscious reasoning. In groundbreaking work, psychologist Joshua Greene and his colleagues used fMRI techniques to peer into people’s brains as they grappled with these moral decisions. They found that the centers involved in experiencing emotion were much more active when people were considering whether to push someone off a footbridge than they were when the question was whether to flip a switch.8 In the case of the footbridge, the ant pushes hard on the intuitive level to keep us from pushing hard on the large stranger. In the case of the switch, there is no initial intuitive response, and so the rational mind doesn’t need to fight against an initial decision. As we said earlier, sometimes the choices of the intuitive and rational minds can differ even when the goals are the same. Because intuitive mechanisms are guided by what has tended to work best over millennia (e.g., don’t directly harm someone), they can short-circuit when confronting novel situations that our ancestors never faced. Back then, if you were going to kill someone, you had to do it with your own two hands; there were no switches.

Given that we’re clearly not on the savannah anymore, this raises another set of questions: Can’t the mind adapt? Are we doomed to forever make decisions that feel right but end up being logically or even morally wrong? Well, let’s go back to our first experiment, with the video clips. If we look at the relatively few people among those who watched the comedy clip who did decide to push the guy off the footbridge, an intriguing pattern emerges—they took markedly longer to make their decision than did the majority of people who chose not to push the hapless stranger. In this finding, you can see the tug-of-war between the intuitive and rational minds. The reason the decision to push the one to save the five others took longer to make was precisely because people’s minds had to work to override their intuitive impulse not to cause direct harm to someone. In essence, their minds were doing exactly what the minds of Eliot Spitzer, Rush Limbaugh, and all the other “hypocrites” were doing: constructing rational explanations for their actions and decisions. But there is one fundamental difference: unlike the hypocrisy cases, the trolley dilemmas don’t present any immediate potential for self-interest, so reasoning can be more objective. Without anything to gain in the short term by making one decision or the other, the grasshopper doesn’t perk up to fight the ant.

The significance of these experiments is twofold. First, these findings unequivocally show that what we feel, not only what we think, guides our moral judgments. Second, given that our feelings can and do change quickly and seemingly unpredictably, our moral judgments, and therefore our character, are quite flexible too. The mechanisms of the mind aren’t perfect. Though they serve us well most of the time, they can be tripped up by context. Potentially more troubling still is that such changes in context aren’t always random; they are readily susceptible to intentional manipulation. After all, if something as seemingly trivial as watching a short video clip or hearing a joke can alter our moral judgments, imagine how vulnerable we are to deliberate manipulation by politicians, lawyers, PR specialists, ex-boyfriends, and others who try to shape our views about right and wrong, or guilt and innocence, by playing on our feelings. When our scales of morality are as wobbly as we now know them to be, it can be incredibly easy for other people to deliberately tip them.

The perils of dirty tissues and soapy hands

If simply watching a television show can alter your morals, where does the power of emotions stop? Surprisingly, there really isn’t a good answer to this question. Basically, anything that can appeal to your intuitions and change your feelings can pretty much impact your moral decisions. Take, for example, a dirty tissue—the crumpled kind oozing with some bodily fluid you’d really rather not think about. What’s the first feeling that popped into your mind when you pictured this image? If you’re like most people, it was queasiness or a feeling of disgust. Okay, you may be thinking, “So what? A used tissue is gross.” We agree. Such an object repulses us—the feeling stems from deep down in our gut. Funny thing is, though, sitting next to a used tissue can actually sway your moral judgments about completely unrelated issues, such as gay marriage or failing to recycle. Why? Because that feeling of disgust can give one side of the scale a head start in shaping your judgments.

Simone Schnall and her colleagues demonstrated just this fact.9 In one series of experiments, they asked participants to rate the moral acceptability of various acts: How immoral is it for first cousins to have sex? To eat your dog after it dies? To eat your friends if they’re killed in a plane crash that leaves you stranded on a glacier? But unbeknownst to the participants, the researchers had “decorated” the room where these decisions would be made (for half the participants, that is) prior to their arrival. This lucky half found the room to be, shall we say, a little messy. The researchers replaced the clean chairs with stained ones. They replaced new pens with chewed pens. They replaced empty trash cans with filled ones, topped off with dirty tissues. And lo and behold, the participants who made their decisions in the messy room overwhelmingly rated each possible moral transgression as far more reprehensible than did their counterparts in the clean condition. Why? Because the feelings of disgust generated by the mess primed the intuitive system to be disgusted by whatever happened to come next. In essence, that feeling of disgust bled over onto the next things that entered consciousness. So when people were asked how they felt about a somewhat tenuous moral action, the answer was already there: it was disgusting. And condemn those actions they did.

Luckily (or maybe not, depending on one’s point of view), we can sometimes use this vulnerability of the mind to our advantage. Take, for example, the case of Sam. Sam was a friend of one of ours in college. (Okay, his real name isn’t Sam, and no, we won’t tell you which one of us knew him. We have to give the guy some cover!) Anyway, Sam was a nice guy from New York City who arrived at college as a freshman and suddenly realized that he could reinvent himself. To put it simply, Sam, who had never had much luck with the ladies, became a player. As the weeks of the fall semester passed, Sam’s friends couldn’t help enviously noticing that he was dating more and more women. (Well, it’s hard to call it dating when the relationships usually consisted of one-night stands, but let’s go with it.) Women were drawn to Sam because, amazingly, even though he was playing the field, his reputation was still that of Mr. Nice Guy, someone who would respect you in the morning and be there when you needed him. How was he fooling them? To his buddies, he seemed to have become a total playboy, some sort of modern-day Lothario racking up notches on his bedpost. But to the ladies, he was seen as sensitive, caring, and sweet. It was puzzling. Then his friends noticed one thing: Sam seemed to have developed a new habit of stopping off at the restroom at frequent intervals to wash his hands. Not to use the toilet or look in the mirror, just to wash.

Now, Sam was no clean freak—far from it. His room was as untidy as ever. He still lived for days in the same pair of jeans. He wasn’t shaving and getting haircuts more often. And he certainly wasn’t trying to avoid germs—he’d take a drink from anyone’s glass. The next year, by which time he’d settled into a long-term relationship with a woman he’d met over the summer, the hand-washing behavior stopped as abruptly as it had begun. And so the mystery lingered.

It wasn’t until years later that we found the answer. Sam’s change in character had simply been a temporary victory of the mental system favoring his short-term interests (his desire for casual sex). And the hand washing? That was simply a subconscious attempt to assuage his own feelings of guilt about using these women. Just like Lady Macbeth, he was trying to wash his sins away. He didn’t know it, but in adopting this one little ritual of cleanliness, he was alleviating feelings of disgust and guilt at his less than upstanding actions. And it worked. Once Sam had convinced himself he was still the same good guy he had always been, he projected that image to the women, who in turn were readily convinced.

The science underlying this “Macbeth effect” has been documented by Chen-Bo Zhong and Katie Liljenquist in a series of clever experiments.10 In one, Zhong and Liljenquist found that participants asked to recall an unethical deed or write about an unethical act later purchased more cleaning products than their guilt-free counterparts—their intuitive minds felt a need to be “clean.” Even more pertinent to Sam’s case, Zhong and Liljenquist found that if they allowed guilt-racked participants to wash their hands after recalling their questionable actions, the need to “cleanse” themselves, or atone for their sins, went away. Among those with a guilty conscience who were allowed to wash, fewer than half as many volunteered to help a peer in need of assistance. Just as dirty tissues prime us to feel morally repulsed, the simple act of washing—the feeling of being clean—sends a signal to the older, intuitive mental mechanisms that moral violations have disappeared. Thus it’s easier for the “sinning” to continue.

Like all the other emotional impulses we’ve discussed in this chapter, the feeling of disgust has an important evolutionary purpose. It began as a simple reflexive feeling and action meant to keep our ancestors away from dangerous things. Think about it. Eating rotten meat, feces, or toxins is certainly bad for you, and consequently all are considered disgusting. Over thousands of years of cultural evolution, that original biological disgust response came to be generalized not just to impure food but to all things considered “impure.” This is why feelings of moral disgust or guilt can be held at bay through simple acts of physical cleansing. On an intuitive level, feeling clean is feeling clean.

Sinful saints or saintly sinners?

Are we all hypocrites, then? Are all our moral compasses broken? Do we even have compasses to begin with? The answer to these questions, it seems, is both yes and no. We all can be hypocrites, but we’re not always hypocrites. Acting hypocritically is different from being a hypocrite. Sinning is different from being a sinner. The first implies an instance; the second suggests a deep-seated disposition. As we’ve said, our moral compass isn’t broken, but it isn’t fixed either. It just works differently than most people think it does. As our research reveals, not only is our morality flexible, but the scale that determines it is constantly being tipped back and forth by mechanisms that operate under our radar.

Don’t feel bad about this news. It doesn’t make us inherently flawed, weak, or bad people. It’s not that we don’t feel pangs of guilt over our own morally questionable actions; it’s just that our minds are remarkably good at quieting them. Even Spitzer probably felt pangs of guilt as he made calls to the Emperor’s Club. But then his desire for immediate pleasure (“I need some fun and those Emperor’s Club women are so hot”) went to battle with the voices warning him about the long-term consequences (“This can only spell problems for my family and my career”). And, well, we know which of them won.

Spitzer is no different from the rest of us in terms of the way his mind works. Whether it’s because of the battle between our own inner mental mechanisms or changes in our external environments, we don’t always act as morally as we’d like. But that doesn’t mean we should give up trying. Understanding how the system truly works is the first step toward being able to manage it better.

For example, now that you know how readily moral judgments are influenced by emotional states, it becomes easier to understand why telling that off-color joke about your mother-in-law seemed okay yesterday at a celebratory dinner but feels like a horrible idea today. Why it seemed okay last night to sleep with that married person you met at happy hour, even though you woke up this morning deeply regretting it.

So how can we avoid falling prey to such lapses in moral judgment? The first step is to remind ourselves that if we’re feeling happy or aroused, whether it’s because we’ve been imbibing or just because we’ve been having a good time, those feelings can color our moral judgments, squashing the emotional impulses of the ant—those that are looking out for our long-term interests—and giving precedence to impulses favoring pleasure in the here and now. So when you’re laughing or partying it up, it helps to realize that the warm glow you’re feeling may be blocking out the hesitation you normally might have felt before doing something you’re likely to regret the next morning. Of course, we’ve seen it can work the other way too. If you’re feeling disgusted or angry, these negative feelings can shift your moral judgments and actions in the other direction—everything and everyone, yourself included, will feel wrong or tainted. The key to making better moral decisions, then, is to learn to read the signals of each kind of bias.

It’s easy to assume that the key to living a more virtuous life is to try to gain control of the mind. If we never trust our intuitions, then we can’t be misled by irrelevant feelings, right? While this is partially true, as we’ve shown in this chapter, it doesn’t mean that following our conscious reasoning is always the best strategy either. Think of Spitzer. It was his conscious mind, not his emotions, that ultimately led him down the wrong path by allowing him to rationalize his behavior. Morality, contrary to popular belief, can’t be controlled simply by strength of will and reason. Sure, self-control can steer you toward the “right” decision sometimes, but in other cases you’re better off listening to your gut impulses. As we’ve seen, it’s the context that determines which is best. The “hypocrites” in our experiments really thought that their actions weren’t so bad. They had a good reason for taking the better option for themselves. It was okay in this instance. It was only when we prevented their minds from engaging in justification—from continuing to adjust the scale—that we could tell they were burying pangs of guilt deep down. And the more they thought about their actions, the less guilt they felt.

So, as we hinted in the last chapter, the question isn’t whether we should trust the rational system or the intuitive system—both can serve our interests—but rather when to trust each one. One answer can be found in a simple gut check. When faced with a moral decision, take a few seconds to pause and listen to your inner voices. Is there a hint of guilt, a hint of shame, a gut feeling of unease? If so, don’t ignore it. Feel it! Forget anything you’ve read about the importance of reason in making good decisions. If you’re feeling a visceral emotion, weigh that feeling in your conscious analysis of what to do. Of course, it’s not the only piece of information, but it’s an important one. It also doesn’t mean that emotions will always be right; as we’ve seen, many gut emotions stem from an ancient calculus that no longer applies (remember the footbridge dilemma), while others are colored by the situation you’re in. The point is not to trust either your conscious will or your intuitions 100 percent of the time, but to try to see whether what you’re feeling and what you’re thinking stem from ulterior motives or extraneous contexts.

Lastly, don’t assume you’re good at this tactic or that you’ll get it right every time. As we’ve seen, none of us is a saint; we all err in our moral judgments every now and again. And in fact, a little humility can be useful. As recent work by Sonya Sachdeva, Rumen Iliev, and Douglas Medin at Northwestern University has shown, having an outsized sense of moral superiority often gives people license to act less morally in the future.11 The researchers asked participants to use one of two sets of words in writing a short story about themselves—a set of words suggesting high moral character (e.g., generous, caring) or one suggesting low moral character (e.g., greedy, disloyal). After a little time passed, they asked the participants if they’d like to make a donation to charity. What they found is not only at odds with what most people would expect but opposite to the view of fixed character as well. The people who wrote stories about themselves using the “moral” descriptors gave far less on average ($1.11) than did their counterparts who used the immoral descriptors ($5.56). Describing oneself as moral didn’t make these people act morally. To the contrary, trumpeting their moral qualities apparently gave their short-term systems greater room to urge them to keep more money for themselves. As we said, the fight between the grasshopper and the ant isn’t usually a fair one.

It’s easy to see this same phenomenon outside the lab as well. Take, for example, Oral Suer, the former CEO of the Washington, D.C.–area United Way. He labored tirelessly over his thirty-year career to raise more than $1 billion for local charities, but it was later revealed that he had been diverting hundreds of thousands of dollars of that money to “reward” himself for his charitable work. The same phenomenon may also have been partially at play in Spitzer’s decisions to indulge himself. After all, didn’t all his victories against the scourge of corruption give him license on some level to allow himself an unsavory act now and again? The point here is to be careful, to know where these pitfalls lie. The human mind, as we’ll continue to see, is capable of much contradiction and all manner of tricks.