Chapter One

Denying Minds

It is impossible—for a liberal, anyway—not to admire the Marquis de Condorcet. The passion and clarity with which he articulated a progressive vision of science-based Enlightenment is more inspiring than several football stadiums of people shouting the word “reason” simultaneously.

But the great scientific liberal was wrong about one of the things that matters most. He was incorrect in thinking that the broader dissemination of reasoned arguments would necessarily lead to greater acceptance of them. And he was equally wrong to think that the refutation of false claims would lead human beings to discard them.

Why? To show as much, let’s examine another story, this time a mind-bending experiment from mid-twentieth-century psychology—one that has been greatly built upon by subsequent research.

* * *

“A man with a conviction is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point.”

So wrote the celebrated Stanford University psychologist Leon Festinger, in a passage that might have been referring to the denial of global warming. But the year was too early for that—this was in the 1950s—and Festinger was instead describing his most famous piece of research.

Festinger and several of his colleagues had infiltrated the Seekers, a small Chicago-area group whose members thought they were communicating with alien intelligences, including one “Sananda,” whom they believed to be the astral incarnation of Jesus Christ. The group was led by a woman the researchers dubbed “Marian Keech” (her real name was Dorothy Martin), who transcribed the interstellar messages through automatic writing. That’s how Mrs. Keech knew the world was about to end.

Through her pen, the aliens had given the precise date of an earth-rending cataclysm: December 21, 1954. Some of Mrs. Keech’s followers had, accordingly, quit their jobs and sold their property. They literally expected to be rescued by flying saucers when the continent split asunder and a new sea submerged much of the current United States. They even went so far as to rip zippers out of trousers and remove brassieres, because they believed that metal would pose a danger on the spacecraft.

Festinger and his team were with the group when the prophecy failed. First, the “boys upstairs” (as the aliens were sometimes called) failed to show up and rescue the Seekers. Then December 21 arrived without incident. It was the moment Festinger had been waiting for: How would people so emotionally invested in a belief system react now that it had been soundly refuted?

At first, the group struggled for an explanation. But then rationalization set in. Conveniently, a new message arrived via Mrs. Keech’s pen, announcing that they’d all been spared at the last minute. As Festinger summarized the new pronouncement from the stars: “The little group, sitting all night long, had spread so much light that God had saved the world from destruction.” Their willingness to believe in the prophecy had saved everyone on Earth from the prophecy!

From that day forward, Mrs. Keech and her followers, previously shy of the press and indifferent toward evangelizing, began to proselytize about their beliefs. “Their sense of urgency was enormous,” wrote Festinger. The devastation of all they had believed made them more sure of their beliefs than ever.

* * *

In the annals of delusion and denial, you don’t get much more extreme than Mrs. Keech and her followers. They lost their jobs, the press mocked them, and there were efforts to keep them away from impressionable young minds. Mrs. Keech’s small group of UFO obsessives would lie at one end of the spectrum of human self-delusion—and at the other would stand an utterly dispassionate scientist, who carefully updates her conclusions based on each new piece of evidence.

The fact, though, is that all of us are susceptible to such follies of “reasoning,” even if we’re rarely so extreme.

To see as much, let’s ask the question: What was going through the minds of Mrs. Keech and her followers when they reinterpreted a clear and direct refutation of their belief system into a confirmation of it? Festinger came up with a theory called “cognitive dissonance” to explain this occurrence. The idea is that when the mind holds thoughts or ideas that are in conflict, or when it is assaulted by facts that contradict core beliefs, this creates an unpleasant sensation or discomfort—and so one moves to resolve the dissonance by bringing ideas into compatibility again. The goal isn’t accuracy per se; it’s to achieve consistency between one’s beliefs—and prior beliefs and commitments, especially strong emotional ones, take precedence. Thus, the disconfirming information was rendered consistent with the Seekers’ “theory” by turning it into a confirmation.

You might think of Festinger’s work on the Seekers as a kind of midpoint between the depictions contained in psychologically insightful 19th-century novels like Charles Dickens’ Great Expectations—whose main character, Pip, is a painful study in self-delusion—and what we’re now learning from modern neuroscience. Since Festinger’s day, an array of new discoveries has further demonstrated how our pre-existing beliefs, far more than any new facts, can skew our thoughts, and even color what we consider our most dispassionate and logical conclusions.

The result of these developments is that cognitive dissonance theory has been somewhat updated, although certainly not discarded. One source of confusion is that, in light of modern neuroscience, the word “cognitive”—which in common parlance suggests conscious thought—can be misleading: we now know that much of this dissonance resolution occurs automatically and subconsciously. Cognitive dissonance theory still successfully explains many psychological observations and results, a classic example being how smokers rationalize the knowledge that they’re signing their death warrant (“but it keeps me thin; I’ll quit later when my looks don’t matter so much”). But its core findings are increasingly being subsumed under a theory called “motivated reasoning.”

This theory builds on one of the key insights of modern neuroscience: Thinking and reasoning are actually suffused with emotion (or what researchers often call “affect”). And not just that: Many of our reactions to stimuli and information are neither reflective nor dispassionate, but rather emotional and automatic, and set in motion prior to (and often in the absence of) conscious thought.

Neuroscientists now know that the vast majority of the brain’s actions occur subconsciously and automatically. We are only aware of a very small fraction of what the brain is up to—some estimates suggest about 2 percent. In other words, not only do we feel before we think—but most of the time, we don’t even reach the second step. And even when we get there, our emotions are often guiding our reasoning.

I’ll sketch out why the brain operates in this way in a moment. For now, just consider the consequences: Our prior emotional commitments—operating in a way we’re not even aware of—often cause us to misread all kinds of evidence, or selectively interpret it to favor what we already believe. This kind of response has been found repeatedly in psychology studies. People read and respond even to scientific or technical evidence so as to justify their pre-existing beliefs.

In a classic 1979 experiment, for instance, pro- and anti-death penalty advocates were exposed to descriptions of two fake scientific studies, one supporting and one undermining the notion that capital punishment deters violent crime and, in particular, murder. They were also shown detailed methodological discussions and critiques of the fake studies—and, cleverly, the researchers had ensured that each study design sometimes produced a pro-deterrent, and sometimes an anti-deterrent, conclusion. Thus, in a scientific sense, no study was “stronger” than another—they were all equally conjured out of thin air.

Yet in each case, and regardless of its design, advocates more heavily criticized studies whose conclusions disagreed with their own, while describing studies that were more ideologically congenial as more “convincing.”

Since then, similar results have been found for how people respond to “evidence” and studies about affirmative action and gun control, the accuracy of gay stereotypes, and much more. Motivated reasoning emerges again and again. Even when study subjects are explicitly instructed to be unbiased and evenhanded about the evidence, they often fail. They see what they want to see, guided by where they’re coming from.

* * *

Why do people behave like this, and respond in this way in controlled psychology studies? What’s so powerful about the theory of motivated reasoning is that we can now sketch out, to a significant extent, how the process occurs in the human brain—and why we have brains that go through such a process to begin with.

Evolution built the human brain—but not all at once. The brain has been described as a “confederation of systems” with different evolutionary ages and purposes. Many of these systems, and especially the older ones, are closely related to those that we find in other animals. Others are distinctively ours—they evolved alongside the rapid increase in brain size that allowed us to become Homo sapiens, somewhere in Africa well over 150,000 years ago.

The systems of the human brain work very well together. Evolution wouldn’t have built an information processing machine that tended to get you killed. But there are also some oddities that arise because evolution could only build onto what it already had, jury-rigging and tweaking rather than designing something new from the ground up.

As a result of this tinkering, we essentially find ourselves with an evolutionarily older brain lying beneath and enveloped by a newer brain, both bound together and acting in coordination. The older parts—the subcortex, the limbic regions—tend to be involved in emotional or automatic responses. These are stark and binary reactions—not discerning or discriminating. And they occur extremely rapidly, much more so than our conscious thoughts. Positive or negative feelings about people, things, and ideas arise in a matter of milliseconds, fast enough to detect with an EEG device but long before we’re aware of it.

The newer parts of the brain, such as the prefrontal cortex, empower abstract reasoning, language, and more conscious and goal-directed behavior. In general, these operations are slower and only able to focus on a few things or ideas at once. Their bandwidth is limited.

Thus, while the newer parts of the brain may be responsible for our species’ greatest innovations and insights, it isn’t like they always get to run the show. “There are certain important circumstances where natural selection basically didn’t trust us to make the right choice,” explains Aaron Sell, an evolutionary psychologist at Griffith University in Australia. “We have a highly experimental frontal lobe that plays around with ideas, but there are circumstances, like danger, where we’re not allowed to do that.” Instead, the rapid-fire emotions take control and run an automatic response program—e.g., fight or flight.

Indeed, according to evolutionary psychologists Leda Cosmides and John Tooby of the University of California-Santa Barbara, the emotions are best thought of as a kind of control system to coordinate brain operations—Matrix-like programs for running all the other programs. And when the control programs kick in, human reason doesn’t necessarily get the option of an override.

How does this set the stage for motivated reasoning?

Mirroring this evolutionary account, psychologists have been talking seriously about the “primacy of affect”—emotions preceding, and often trumping, our conscious thoughts—for three decades. Today they broadly break the brain’s actions into the operations of “System 1” and “System 2,” which are roughly analogous to the emotional and the reasoning brain.

System 1, the older system, governs our rapid-fire emotions; System 2 refers to our slower-moving, thoughtful, and conscious processing of information. Its operations, however, aren’t necessarily free of emotion or bias. Quite the contrary: System 1 can drive System 2. Before you’re even aware you’re reasoning, your emotions may have set you on a course of thinking that’s highly skewed, especially on topics you care a great deal about.

How do System 1’s biases infiltrate System 2? The mechanism is thought to be memory retrieval—in other words, the thoughts, images, and arguments called into one’s conscious mind following a rapid emotional reaction. Memory, as embodied in the brain, is conceived of as a network, made up of nodes and linkages between them—and what occurs after an emotional reaction is called spreading activation. As you begin to call a subject to mind (like Sarah Palin) from your long-term memory, nodes associated with that subject (“woman,” “Republican,” “Bristol,” “death panels,” “Paul Revere”) are activated in a fanlike pattern—like a fire that races across a landscape but only burns a small fraction of the trees. And subconscious and automatic emotion starts the burn. It therefore determines what the conscious mind has available to work with—to argue with.
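For readers who want the mechanism spelled out concretely, here is a minimal sketch of spreading activation over a toy associative network, written in Python. Everything in it (the concepts, the link strengths, the decay rate, the cutoff threshold) is hypothetical, invented purely for illustration; it is a cartoon of the idea, not Lodge and Taber’s actual model.

# A toy model of spreading activation: activation starts at a "seed"
# concept and fans out along associative links, weakening with each hop,
# until it falls below a threshold. All nodes and numbers are made up.

links = {
    "Sarah Palin": [("woman", 0.9), ("Republican", 0.8), ("death panels", 0.7)],
    "Republican": [("conservative", 0.6)],
    "death panels": [("health care", 0.5)],
}

def spread_activation(seed, links, decay=0.5, threshold=0.1):
    activation = {seed: 1.0}
    frontier = [(seed, 1.0)]
    while frontier:
        node, energy = frontier.pop()
        for neighbor, strength in links.get(node, []):
            passed = energy * strength * decay
            if passed > threshold and passed > activation.get(neighbor, 0.0):
                activation[neighbor] = passed
                frontier.append((neighbor, passed))
    return activation

print(spread_activation("Sarah Palin", links))
# The "fire" reaches "woman," "Republican," "death panels," and (faintly)
# "conservative," but burns out before it reaches "health care."

The point of the toy is simply this: whatever the initial emotional spark activates first determines which associations the conscious mind later has available to argue with.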

To see how it plays out in practice, consider a conservative Christian who has just heard about a new scientific discovery—a new hominid finding, say, confirming our evolutionary origins—that deeply challenges something he or she believes (“human beings were created by God”; “the book of Genesis is literally true”). What happens next, explains Stony Brook University political scientist Charles Taber, is a subconscious negative (or “affective”) response to the threatening new information—and that response, in turn, guides the type of memories and associations that are called into the conscious mind based on a network of emotionally laden associations and concepts. “They retrieve thoughts that are consistent with their previous beliefs,” says Taber, “and that will lead them to construct or build an argument and challenge to what they are hearing.”

In other words, when we think we’re reasoning we may instead be rationalizing. Or to use another analogy offered by University of Virginia social psychologist Jonathan Haidt: We may think we’re being scientists, but we’re actually being lawyers. Our “reasoning” is a means to a predetermined end—winning our “case”—and is shot through with classic biases of the sort that render Condorcet’s vision deeply problematic. These include the notorious “confirmation bias,” in which we give greater heed to evidence and arguments that bolster our beliefs, and seek out information to reinforce our prior commitments; as well as its evil twin the “disconfirmation bias,” in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial, responding very defensively to threatening information and trying to pick it apart.

That may seem like a lot of jargon, but we all understand these mechanisms when it comes to interpersonal relationships. Charles Dickens understood them, even if not by name. If I don’t want to believe that my spouse is being unfaithful, or that my child is a bully—or, as in Great Expectations, that a convict is my benefactor—I can go to great lengths to explain away details and behaviors that seem obvious to everybody else. Everybody who isn’t too emotionally invested to accept them, anyway.

That’s not to suggest that we aren’t also motivated to perceive the world accurately—we often are. Or that we never change our minds—we do. It’s just that we sometimes have other important goals besides accuracy—including identity affirmation and protecting our sense of self. These can make us highly resistant to changing our beliefs when, by all rights, we probably should.

* * *

Since it is fundamentally rooted in our brains, it should come as no surprise that motivated reasoning emerges when we’re very young. Some of the seeds appear to be present at least by age four or five, when kids are able to perceive differences in the “trustworthiness” of information sources.

“When 5-year-olds hear about a competition whose outcome was unclear,” write Yale psychologists Paul Bloom and Deena Skolnick Weisberg, “they are more likely to believe a person who claimed that he had lost the race (a statement that goes against his self-interest) than a person who claimed that he had won the race (a statement that goes with his self-interest).” For Bloom and Weisberg, this is the very capacity that, while admirable in general, can in the right context set the stage for resistance to certain types of information or points of view.

The reason is that where there is conflicting opinion, children will decide upon the “trustworthiness” of the source—and they may well, in a contested case, decide that Mommy and Daddy are trustworthy, and the teacher talking about evolution isn’t. This will likely occur for emotional, motivated, or self-serving reasons.

As children develop into adolescents, motivated reasoning also develops. This, too, has been studied, and one of the experiments is memorable enough to describe in some detail.

Psychologist Paul Klaczynski of the University of Northern Colorado wanted to learn how well adolescents are capable of reasoning on topics they care deeply about. So he decided to see how they evaluated arguments about whether a kind of music they liked (either heavy metal or country) led people to engage in harmful or antisocial behavior (drug abuse, suicide, etc.). You might call it the Tipper Gore versus Frank Zappa experiment, recalling the 1980s debate over whether rock lyrics were corrupting kids and whether some albums needed to have parental labels on them.

Ninth and twelfth graders were presented with arguments about the behavioral consequences of listening to heavy metal or country music—each of which contained a classic logical fallacy, such as a hasty generalization or tu quoque (a diversion). The students were then asked how valid the arguments were, to discuss their strengths and weaknesses, and to describe how they might design experiments or tests to falsify the arguments they had heard.

Sure enough, the students were found to reason in a more biased way to defend the kind of music they liked. Country fans rated pro-country arguments as stronger than anti-country arguments (though all the arguments contained fallacies), flagged more problems or fallacies in anti-country arguments than in pro-country ones, and proposed better evidence-based tests of anti-country arguments than of the arguments that stroked their egos. Heavy metal fans did the same.

Consider, for example, one adolescent country fan’s response when asked how to disprove the self-serving view that listening to country music leads one to have better social skills. Instead of proposing a proper test (for example, examining antisocial behavior in country music listeners) the student instead relied on what Klaczynski called “pseudo-evidence”—making up a circuitous rationale so as to preserve a prior belief:

As I see it, country music has, like, themes to it about how to treat your neighbor. So, if you found someone who was listening to country, but that wasn’t a very nice person, I’d think you’d want to look at something else going on in his life. Like, what’s his parents like? You know, when you’ve got parents who treat you poorly or who don’t give you any respect, this happens a lot when you’re a teenager, then you’re not going to be a model citizen yourself.

Clearly, this is no test of the argument that country music listening improves your social skills. So the student was pressed on the matter—asked how this would constitute an adequate experiment or test. The response:

Well . . . you don’t really have to, what you have to look for is other stuff that’s happening. Talk to the person and see what they think is going on. So you could find a case where a person listens to country music, but doesn’t have many friends or get along very well. But, then, you talk to the person and see for yourself that the person’s life is probably pretty messed up.

Obviously this student was not ready or willing to subject his or her beliefs to a true challenge. “Adolescents protect their theories with a diverse battery of cognitive defenses designed to repel attacks on their positions,” wrote Klaczynski.

In another study—this time, one that presented students with the idea that their religious beliefs might lead to bad outcomes—Klaczynski and a colleague found a similar result. “At least by late adolescence,” he wrote, “individuals possess many of the competencies necessary for objective information processing but use these skills selectively.”

* * *

The theory of motivated reasoning does not, in and of itself, explain why we might be driven to interpret information in a biased way, so as to protect and defend our preexisting convictions. Obviously, there will be a great variety of motivations, ranging from passionate love to financial greed.

What’s more, the motivations needn’t be purely selfish. Even though motivated reasoning is sometimes also referred to as “identity-protective cognition,” we don’t engage in this process to defend ourselves alone. Our identities are bound up with our social relationships and affiliations—with our families, communities, alma maters, teams, churches, political parties. Our groups. In this context, an attack on one’s group, or on some view with which the group is associated, can effectively operate like an attack on the self.

Nor does motivated reasoning suggest that we must all be equally biased. There are still checks one can put on the process. Other people, for instance, can help keep us honest—or, conversely, they can affirm our delusions, making us more confident in them. Societal institutions and norms—the norms of science, say, or the norms of good journalism, or the legal profession—can play the same role.

There may also be “stages” of motivated reasoning. Having a quick emotional impulse and then defending one’s beliefs in a psychology study is one thing. Doing so repeatedly, when constantly confronted with challenging information over time, is something else. At some point, people may “cry uncle” and accept inconvenient facts, even if they don’t do so when first confronted with them.

Finally, individuals may differ in their need to defend their beliefs, their internal desire to have unwavering convictions that do not and cannot change—to be absolutely convinced and certain about something, and never let it go. They may also differ in their need to be sure that their group is right, and the other group is wrong—in short, their need for solidarity and unity, or for having a strong in-group/out-group way of looking at the world. These are the areas, I will soon show, where liberals and conservatives often differ.

But let’s table that for now. What counts here is that our political, ideological, partisan, and religious convictions—because they are deeply held enough to form core parts of our personal identities, and because they link us to the groups that bulwark those identities and give us meaning—can be key drivers of motivated reasoning. They can make us virtually impervious to facts, logic, and reason. Anyone in a politically split family who has tried to argue with her mother, or father, about politics or religion—and eventually decided “that’s a subject we just don’t talk about”—knows what this is like, and how painful it can be.

And no wonder. If we have strong emotional convictions about something, then these convictions must be thought of as an actual physical part of our brains, residing not in any individual brain cell (or neuron) but rather in the complex connections between them, and the pattern of neural activation that has occurred so many times before, and will occur again. The more we activate a particular series of connections, the more powerful it becomes. It grows more and more a part of us, like the ability to play guitar or juggle a soccer ball.

So to attack that “belief” through logical or reasoned argument, and thereby expect it to vanish and cease to exist in a brain, is really a rather naïve idea. Certainly, it is not the wisest or most effective way of trying to “change brains,” as Berkeley cognitive linguist George Lakoff puts it.

We’ve inherited an Enlightenment tradition of thinking of beliefs as if they’re somehow disembodied, suspended above us in the ether, and all you have to do is float up the right bit of correct information and wrong beliefs will dissolve, like a burst soap bubble. Nothing could be further from the truth. Beliefs are physical. To attack them is like attacking one part of a person’s anatomy, almost like pricking his or her skin (or worse). And motivated reasoning might perhaps best be thought of as a defensive mechanism that is triggered by a direct attack upon a belief system, physically embodied in a brain.
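To make the “strengthening with use” idea from a moment ago concrete, here is an equally toy-like sketch, hypothetical in every detail and not drawn from Lakoff or from the neuroscience literature cited here: a connection weight grows a little each time it is co-activated, so the associated belief becomes progressively easier to trigger and harder to dislodge.

# A purely illustrative update rule: each co-activation nudges a
# connection weight toward 1.0, so well-worn associations get stronger.
# The node names, starting weight, and step size are all invented.

weights = {("core belief", "supporting association"): 0.2}
STEP = 0.1  # hypothetical strengthening per co-activation

def coactivate(a, b):
    key = (a, b)
    w = weights.get(key, 0.0)
    weights[key] = w + STEP * (1.0 - w)
    return weights[key]

for _ in range(5):
    print(round(coactivate("core belief", "supporting association"), 3))
# Prints 0.28, 0.352, 0.417, 0.475, 0.528 -- each rehearsal adds a little
# more, which is why a well-rehearsed belief comes to feel like part of us.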

I’ve still only begun to unpack this theory and its implications—and have barely drawn any meaningful distinctions between liberals and conservatives—but it is already apparent why Condorcet’s vision fails so badly. Condorcet believed that good arguments, widely disseminated, would win the day. The way the mind works, however, suggests that good arguments will only win the day when people don’t have strong emotional commitments that contradict them. Or to employ lingo sometimes used by the psychologists and political scientists working in this realm, it suggests that cold reasoning (rational, unemotional) is very different from hot reasoning (emotional, motivated).

Consider an example. You can easily correct a wrong belief when the belief is that Mother’s Day is May 8, but it’s actually May 9. Nobody is going to dispute that—nobody’s invested enough to do so (we hope), and moreover, you’d expect most of us to have strong motivations (which psychologists sometimes call accuracy motivations) to get the date of Mother’s Day right, rather than defensive motivations that might lead us to get it wrong. By the same token, in a quintessential example of “cold” and “System 2” reasoning, liberals and conservatives can both solve the same math problem and agree on the answer (again, we hope).

But when good arguments threaten our core belief systems, something very different happens: the whole process gets shunted into a different category. Such arguments are likely to automatically provoke a negative subconscious and emotional reaction. Most of us will then come up with a reason to reject them—or, even in the absence of a reason, refuse to change our minds.

* * *

Even scientists—supposedly the most rational and dispassionate among us and the purveyors of the most objective brand of knowledge—are susceptible to motivated reasoning. When they grow deeply committed to a view, they sometimes cling to it tenaciously and refuse to let go, ignoring or selectively reading the counterevidence. Every scientist can tell you about a completely intransigent colleague, who has clung to the same pet theory for decades.

However, what’s unique about science is that it has its origins in a world-changing attempt to weed out and control our lapses of objectivity—what the great 17th-century theorist of scientific method, Francis Bacon, dubbed the “idols of the mind.” That attempt is known as the Scientific Revolution, and revolutionary it was. Gradually, it engineered a series of processes to put checks on human biases, so that even if individual researchers are prone to fall in love with their own theories, peer review and the skepticism of one’s colleagues ensure that, eventually, the best ideas emerge. In fact, it is precisely because different scientists have different motivations and commitments—including the incentive to refute and unseat the views of their rivals, and thus garner fame and renown for themselves—that the process is supposed to work, among scientists, over the long term.

Thus when it comes to science, it’s not just the famous method that counts, but the norms shared by individuals who are part of the community. In science, it is seen as a virtue to hold your views tentatively, rather than with certainty, and to express them with the requisite caveats and without emotion. It is also seen as admirable to change your mind, based upon the weight of new evidence.

By contrast, for people who have authoritarian personalities or dispositions—predominantly political conservatives, and especially religious ones—seeming uncertain or indecisive may be seen as a sign of weakness.

If even scientists are susceptible to bias, you can imagine how ordinary people fare. When it comes to the dissemination of science—or contested facts in general—across a nonscientific populace, a very different process is often occurring than the scientific one. A vast number of individuals, with widely varying motivations, are responding to the conclusions that science, allegedly, has reached. Or so they’ve heard.

They’ve heard through a wide variety of information sources—news outlets with differing politics, friends and neighbors, political elites—and are processing the information through different brains, with very different commitments and beliefs, and different psychological needs and cognitive styles. And ironically, the fact that scientists and other experts usually employ so much nuance, and strive to disclose all remaining sources of uncertainty when they communicate their results, makes the evidence they present highly amenable to selective reading and misinterpretation. Giving ideologues or partisans data that’s relevant to their beliefs is a lot like unleashing them in the motivated reasoning equivalent of a candy store. In this context, rather than reaching an agreement or a consensus, you can expect different sides to polarize over the evidence and how to interpret it.

* * *

Motivated reasoning thus helps to explain all manner of maddening, logically suspect maneuvers that people make when they’re in the middle of arguments so as to avoid changing their minds.

Consider one classic: goalpost shifting. This occurs when someone has made a clear and factually refutable claim, and staked a great deal on it—but once the claim meets its demise, the person demands some additional piece of evidence, or tweaks his or her views in some way so as to avoid having to give them up. That’s what the Seekers did when their prophecy failed; that’s what vaccine deniers do with each subsequent scientific discrediting of the idea that vaccines cause autism; that’s what the hardcore Birthers did when President Obama released his long-form birth certificate; that’s what the errant prophet Harold Camping did when his predicted rapture did not commence on May 21, 2011, and the world did not end on October 21, 2011.

In all of these cases, the individuals or groups involved had staked it all on a particular piece of information coming to light, or a particular event occurring. But when the evidence arrived and it contradicted their theories, they didn’t change their minds. They physically and emotionally couldn’t. Rather, they moved the goalposts.

Note, however, that only those who do not hold the irrational views in question see this behavior as suspect and illogical. The goalpost shifters probably don’t perceive what they are doing, or understand why it appears (to the rest of us) to be dishonest. This is also why we tend to perceive hypocrisy in others, not in ourselves.

Indeed, a very important motivated reasoning study documented precisely this: Democrats viewed a Republican presidential candidate as a flip-flopper or hypocrite when he changed positions, and vice versa. Yet each side was more willing to believe that its own party’s candidate had had an honest change of views.

The study in question was conducted by psychologist Drew Westen of Emory University (also the author of the much noted book The Political Brain) and his colleagues, and it’s path-breaking for at least two reasons. First, Westen studied the minds of strong political partisans when they were confronted with information that directly challenged their views during a contested election—Bush v. Kerry, 2004—a time when they were most likely to be highly emotional and biased. Second, Westen’s team used functional magnetic resonance imaging (fMRI) to scan the brains of these strong partisans, discovering which parts were active during motivated reasoning.

In Westen’s study, strong Democrats and strong Republicans were presented with “contradictions”: Cases in which a person was described as having said one thing, and then done the opposite. In some cases these were politically neutral contradictions—e.g., about Walter Cronkite—but in some cases they were alleged contradictions by the 2004 presidential candidates. Here are some examples, which are fairly close to reality but were actually constructed for the study:

George W. Bush: “First of all, Ken Lay is a supporter of mine. I love the man. I got to know Ken Lay years ago, and he has given generously to my campaign. When I’m President, I plan to run the government like a CEO runs a country. Ken Lay and Enron are a model of how I’ll do that.”

Contradictory: Mr. Bush now avoids any mention of Ken Lay and is critical of Enron when asked.

John Kerry: During the 1996 campaign, Kerry told a Boston Globe reporter that the Social Security system should be overhauled. He said Congress should consider raising the retirement age and means testing benefits. “I know it’s going to be unpopular,” he said. “But we have a generational responsibility to fix this problem.”

Contradictory: This year, on Meet the Press, Kerry pledged that he will never tax or cut benefits to seniors or raise the age for eligibility for Social Security.

Encountering these contradictions, the subjects were then asked to consider whether the “statements and actions are inconsistent with each other,” and to rate how much inconsistency (or, we might say, hypocrisy) they felt they’d seen. The result was predictable, but powerful: Republicans tended to see hypocrisy in Kerry (but not Bush), and Democrats tended to see the opposite. Both groups, though, were much more in agreement about whether they’d seen hypocrisy in politically neutral figures.

This study also provides our first tantalizing piece of evidence that Republicans may be more biased, overall, in defense of their political beliefs or their party. While members of both groups in the study saw more hypocrisy or contradiction in the candidate they opposed, Democrats were more likely to see hypocrisy in their own candidate, Kerry, as well. But Republicans were less likely to see it in Bush. Thus, the authors concluded that Republicans showed “a small but significant tendency to reason to more biased conclusions regarding Bush than Democrats did toward Kerry.”

While all this was happening, the research subjects were also having their brains scanned. Sure enough, the results showed that when engaged in biased political reasoning, partisans were not using parts of the brain associated with “cold,” logical thinking. Rather, they were using a variety of regions associated with emotional processing and psychological defense. Instead of listing all the regions here—there are too many, you’d be drowning in words like “ventral”—let me instead underscore the key conclusion.

Westen captured the activation of what appeared to be emotionally oriented brain circuits when subjects were faced with a logical contradiction that activated their partisan impulses. He did not capture calm, rational deliberation. These people weren’t solving math problems. They were committing the mental equivalent of beating their chests.

Notes

26 “A man with a conviction . . .” My account of the Seekers is based on Festinger’s classic book (with Henry W. Riecken and Stanley Schachter), When Prophecy Fails, first published by the University of Minnesota Press in 1956. My edition is published by Pinter & Martin, 2008. All quotations are from this text.

28 how smokers rationalize For a highly readable overview of “cognitive dissonance” theory and the many different phenomena it explains, see Carol Tavris and Elliot Aronson, Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts, New York: Houghton Mifflin Harcourt, 2007. The smoking example is provided by Aronson in his foreword to When Prophecy Fails, Pinter & Martin, 2008.

29 motivated reasoning For an overview see Ziva Kunda, “The Case for Motivated Reasoning,” Psychological Bulletin, November 1990, Vol. 108, No. 3, pp. 480–498.

29 Thinking and reasoning are actually suffused with emotion See Antonio Damasio, Descartes’ Error: Emotion, Reason, and the Human Brain, New York: Putnam, 1994, and Joseph LeDoux, The Emotional Brain, New York: Simon & Schuster, 1996.

29 about 2 percent George Lakoff, The Political Mind, New York: Penguin, 2008, p. 9.

29 classic 1979 experiment Lord, Ross & Lepper, “Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence,” Journal of Personality and Social Psychology, 1979, Vol. 37, No. 11, pp. 2098–2109.

29 affirmative action and gun control Taber & Lodge, “Motivated Skepticism in the Evaluation of Political Beliefs,” American Journal of Political Science, Vol. 50, No. 3, July 2006, pp. 755–769.

30 the accuracy of gay stereotypes Munro & Ditto, “Biased Assimilation, Attitude Polarization, and Affect in Reactions to Stereotype-Relevant Scientific Information,” Personality and Social Psychology Bulletin, June 1997, Vol. 23, No. 6, pp. 636–653.

30 “confederation of systems” Jonathan D. Cohen, “The Vulcanization of the Human Brain: A Neural Perspective on Interactions Between Cognition and Emotion,” Journal of Economic Perspectives, Vol. 19, No. 4, Fall 2005, pp. 3–24.

30 closely related to those that we find in other animals See Joseph LeDoux, The Emotional Brain, New York: Simon & Schuster, 1996.

30 somewhere in Africa “Homo sapiens,” Institute on Human Origins, available online at http://www.becominghuman.org/node/homo-sapiens-0.

30 fast enough to detect with an EEG device Milton Lodge and Charles Taber, The Rationalizing Voter, unpublished manuscript shared by authors.

31 “natural selection basically didn’t trust us” Interview with Aaron Sell, August 12, 2011.

31 control system to coordinate brain operations Leda Cosmides & John Tooby, “Evolutionary Psychology and the Emotions,” Handbook of Emotions, 2nd Edition, M. Lewis & J. M. Haviland-Jones, Eds. New York: Guilford, 2000.

31 “primacy of affect” R.B. Zajonc, “Feeling and Thinking: Preferences Need No Inferences,” American Psychologist, February 1980, Vol. 35, No. 2, pp. 151–175.

31 spreading activation Milton Lodge and Charles Taber, The Rationalizing Voter, unpublished manuscript shared by authors.

32 “They retrieve thoughts that are consistent with their previous beliefs” Interview with Charles Taber and Milton Lodge, February 3, 2011.

32 we’re actually being lawyers Jonathan Haidt, “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment,” Psychological Review, 2001, Vol. 108, No. 4, pp. 814–834.

32 “confirmation bias” For an overview, see Raymond S. Nickerson, “The Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology, 1998, Vol. 2, No. 2, pp. 175–220.

32 “disconfirmation bias” Taber & Lodge, “Motivated Skepticism in the Evaluation of Political Beliefs,” American Journal of Political Science, Vol. 50, No. 3, July 2006, pp. 755–769.

33 “a person who claimed that he had won the race” Paul Bloom & Deena Skolnick Weisberg, “Childhood Origins of Adult Resistance to Science,” Science, May 18, 2007, Vol. 316, pp. 996–997.

33 either heavy metal or country Paul A. Klaczynski, “Bias in Adolescents’ Everyday Reasoning and Its Relationship With Intellectual Ability, Personal Theories, and Self-Serving Motivation,” Developmental Psychology, 1997, Vol. 33, No. 2, pp. 273–283.

35 “At least by late adolescence. . .” Paul A. Klaczynski and Gayathri Narasimham, “Development of Scientific Reasoning Biases: Cognitive Versus Ego-Protective Explanations,” Developmental Psychology, 1998, Vol. 34, No. 1, pp. 175–187.

35 our groups For the role of group affiliation in identity-protective cognition, and an overview of motivated reasoning generally and how it operates in a legal context, see Dan M. Kahan, “The Supreme Court 2010 Term—Foreword: Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law,” 125 Harvard Law Review, pp. 1–77.

36 the more powerful it becomes George Lakoff, The Political Mind: A Cognitive Scientist’s Guide to Your Brain and Its Politics, New York: Penguin, 2008.

36 “change brains” George Lakoff, The Political Mind, New York: Penguin, 2008.

40 Drew Westen Drew Westen et al., “Neural Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints on Partisan Political Judgment in the 2004 U.S. Presidential Election,” Journal of Cognitive Neuroscience, 2006, Vol. 18, No. 11, pp. 1947–1958.