It is useless to attempt to reason a man out of a thing he was never reasoned into.
—Jonathan Swift
Suppose you’re in a library, and you need to photocopy some pages from a book. You find the copy machine, and happily you discover that you have some quarters. You’re about to drop a coin in the slot when a stranger approaches you. He asks whether he can use the photocopier. Would you let the stranger use the machine, or would you politely decline, given that, after all, you were there first?
In this chapter, we will be less interested in whether or not you would comply with this request, and more interested in whether or not you would think before answering. It would seem that a social interaction—deciding whether or not to grant a small favor—would require thought. But it doesn’t, according to a landmark study conducted by Ellen Langer.1 An experimenter approached individuals just as they were about to use a coin-operated copy machine, with one of three requests:
“Excuse me, I have five pages. May I use the Xerox machine?”
“Excuse me, I have five pages. May I use the Xerox machine, because I’m in a rush?”
“Excuse me, I have five pages. May I use the Xerox machine, because I have to make copies?”
The first request offers no reason, whereas the second offers a socially acceptable reason. The third request is odd. It offers a reason that is not a reason—if you’re asking to use the photocopier, then obviously you have to make copies.
The surprising finding was that people found this nonreason persuasive. Sixty percent of people complied with the request when no reason was offered, but 93 percent complied when the nonsensical “reason” was added—about the same percentage as when the bona fide reason was added. What’s going on?
Langer argued that people are not thinking during this seemingly complex interchange. People are willing to do small favors for strangers, especially if the stranger makes the request politely and if the stranger offers some reason for the imposition. What the experiment seems to show is that the person hears the word “because” in the request and thus knows that a reason has been offered, but the person doesn’t take the trouble to evaluate the quality of the reason.
The idea that we are on autopilot even when we engage in complex behaviors is familiar to most of us. Obviously, you don’t need to consciously guide the movements of your hands as you button your shirt in the morning or tie your shoes; you did consciously control those movements at age two or three, but now they have become automatic. And routinized behaviors can be more complex than simple movements like shoe tying. You’ve probably found yourself pulling your car into your driveway and realizing that you had daydreamed the whole way home—stewed about a problem or fantasized about a vacation—and all the while obeyed traffic laws, braked for pedestrians, made the correct turns, and so on. It’s as though there is a computer program in your mind that you initiate when you climb in the car, and the “drive home” program runs without your supervision, leaving you free to think about other things.
An autopilot program is especially noticeable if it runs at a moment you wish it wouldn’t. If you want to stop at the supermarket on the way home, you may well find yourself in your driveway without having made the stop. The “drive home” program dictates a left turn at Elm Street, and you didn’t interrupt it to make sure that you took a right to go to the market. Or, to use the example offered by the great nineteenth-century psychologist William James, “Very absent-minded persons in going to their bedrooms to dress for dinner have been known to take off one garment after another and finally to get into bed, merely because that was the habitual issue of the first few movements when performed at a later hour.”2
This phenomenon—that consciousness may contribute little or nothing to the initiation of complex behaviors and the making of complex decisions—has created something of a revolution in social psychology. Researchers have discovered that more and more of the thought that drives our social lives happens outside of awareness.a
Here’s another example. When you speak with someone who has an accent, have you ever found yourself slipping into the accent, without quite noticing that you’re doing so?b This is an instance of a more general phenomenon: humans imitate each other during social interactions.3 In one experiment demonstrating this phenomenon, subjects were paired with someone they thought was another subject, but who was actually a research assistant. The pair was to describe the contents of ambiguous photographs. During the task, the research assistant engaged in one of two nervous habits—either shaking her foot or touching her face. The subjects unconsciously mimicked the behavior of the research assistant.
Why do we mimic? Mimicry breeds liking. We like people who are similar to us. In First Corinthians, Paul says, “To the Jews I became like a Jew, to win the Jews. To those under the law, I became like one under the law to win those under the law. To the weak, I became weak, to win the weak.”4 Similarity aids persuasion even when based on something as trivial as having the same nervous tic or taking an ice cream sample of similar size.5 We unconsciously imitate each other to smooth social interactions.
The emerging picture is that we have two modes of social interaction. One is conscious and involves the logical integration of evidence. For example, a waiter puts the check down on the table and says to me, “I hope you enjoyed your meal!” I think to myself that the steak was a bit tough, but the salad was expertly prepared. Consciously weighing the good and the bad, I offer the waiter a measured comment like “Yeah, it was pretty good.” In the other, automatic mode, I merely detect certain cues or signals that mark the waiter’s comment to me as belonging to a category of social interactions—in this case, “social pleasantry.” Other categories might be “acquaintance asks a small favor” or “perform a task with a stranger.” Once I’ve identified the category, I can act appropriately to the situation (grant the favor, mimic the stranger) with little or no conscious thought. Sometimes this mental process goes wrong. We miscategorize what someone has said, or the automatically generated behavior doesn’t quite fit. More than once, a waiter has set a check on my table and said with a farewell intonation, “Enjoy the rest of your dessert!” and I’ve responded “Thanks, you too.” My unconscious mind coded the waiter’s comment as a social pleasantry and then my unconscious mind generated a response that typically works, but in this case was inappropriate.
If indeed we have two modes of social thought—conscious and unconscious—is each mode capable of evaluating persuasive messages? Can persuasion happen outside awareness or, at least, with little thought? The answer is an emphatic yes.6
First, let’s be clear what unconscious persuasion does not mean. You may have heard about subliminal (that is, unconscious) persuasion effects in advertisements. The idea is that advertisers embed messages in their ads that are not consciously perceived but will nonetheless affect behavior. For example, the words EAT POPCORN might appear in a single frame of a movie, too briefly for the conscious mind to perceive. Or a stylized sexual image might be worked into a product photograph—for example, a swirl of butter that, if you squint, looks a bit like a woman’s breast. The theory is that the hidden message or drawing will still be perceived unconsciously, leaving the moviegoer with a yen for popcorn and the magazine reader thinking that a particular brand of butter is somehow strangely appealing.
This idea has been around since the 1950s7 and seems to be perennial,8 probably because it’s such a fascinating, if chilling, possibility. Researchers have found it interesting as well, and lots of evidence compiled in the last few decades shows that this sort of subliminal persuasion doesn’t work.c There are some circumstances in which stimuli you don’t consciously see can influence your behavior, but the behaviors subject to this influence are pretty low-level laboratory tasks that wouldn’t have much impact in your daily life, such as how rapidly you can verify that a string of letters forms a word (“bread”) rather than a nonword (“plonch”). You can’t get people to buy popcorn or other products with this method.
The real concern is not that you are persuaded by things outside your awareness. The real concern arises when you’re aware of these messages but don’t recognize that they persuade you. Subjects who surrendered the copier were aware of the request, but surely did not notice that their response was prompted by the phantom “reason” given by the experimenter. The cue “reason offered” tells our inattentive mind to accede to an innocuous request from a stranger. What are the cues that tell our inattentive mind “This message is probably true”?
One such cue is familiarity. Things that are familiar seem reliable, safe, likable, and believable.9 In a typical experiment investigating this phenomenon, subjects heard a series of statements presented as little-known facts—for example, that comedian Bob Hope’s father was a fireman or that the right arm of the Statue of Liberty is forty-two feet long.10 (The “facts” were fabricated, to be certain that subjects could not have known any of them before the experiment.) Later, subjects were presented with a set of trivia statements of the same sort, and they were asked to judge the likelihood that each is true. Some of the statements were repetitions of the prior set, and these statements were judged as more likely to be true. The effect is just as large if you tell subjects which statements were presented earlier and warn them that “these statements might feel true just because you heard them recently.”11
Even more remarkably, familiarity affects credibility even if, at the time, people know they shouldn’t believe the source. In one experiment, subjects were told who made each statement—for example, “John Yates says that three hundred thousand pencils can be made from the average cedar tree.”12 Subjects were told that statements from males were always accurate and that statements from females were always inaccurate. (Half of the subjects were told the opposite gender-truth relationship.) Later, subjects read a list of statements, with the instruction to judge the credibility of each. They were told that they had heard some of the statements earlier in the experiment, and they were reminded that some of them were false. So what happened?
Familiar statements were still judged as more likely to be true. Why? Well, during the trivia test when a subject reads, “Eighteen newborn possums can be placed in a teaspoon,” she might say to herself, “Hmm . . . that seems familiar. Did I hear that during the experiment, or is it just one of those odd facts you pick up somewhere?” If she doesn’t remember hearing it during the experiment, she will judge that it’s true. But even if she remembers hearing it during the experiment, she still might not remember who said it—a member of the lying or truth-telling gender.
In general, source information (where and when we heard something, and who told us) is more fragile than content information (what we heard). For example, think how often it happens that you remember something but can’t remember who mentioned it: “Oh, someone told me that movie was terrible.” Much less frequently does the opposite happen: “Sam told me he saw that movie, and I know he had an opinion . . . now what in the world was it?”
This type of result gives us insight into why propaganda works. We might hear information from a source that we know to be unreliable—the propaganda minister of a totalitarian government, for example—and we discount the truth of the information at the time. But later, there’s some chance that we’ll remember the content and forget that it came from an unreliable source.
Another aspect of familiarity is knowing that something is familiar—and accepted—by others. This is often called “social proof”—you see that others find something credible. The logic of using social proof is easy to appreciate in purchasing decisions. For example, one of the drains in my house becomes clogged perhaps once every two years. So I’m an occasional buyer of drain cleaner. When I’m in the store, confronted by half a dozen brands, how am I supposed to choose? I could pick the cheapest one, but a clogged drain is such a nuisance that I don’t want to risk buying an inferior product. Ah, there’s Liquid-Plumr, a familiar brand. I’ve seen ads for it since I was a kid. It’s not only familiar, but I can infer that people must use it. At the very least, it can’t be terrible—if the stuff didn’t work, surely the company would have gone out of business. So instead of buying the brand I’ve never heard of (which probably works just fine), I pay more for Liquid-Plumr.
Social proof can become a real problem if an inaccurate belief becomes widely accepted. For example, I mentioned in the Introduction that something like 90 percent of American adults believe that people differ in their learning styles.d There’s actually no laboratory evidence that people learn in fundamentally different ways. But there is surprisingly little doubt among Americans. I don’t think it occurs to most people that the truth of the learning style idea is open to doubt. It’s like doubting atomic theory—it’s just one of those things that “they” have figured out to be true. If everyone knows it, it must be true.
It is also the case that liking—that is, liking a person—makes what the person says seem more credible. Even our snap impression of a stranger influences how believable we find him. There must be a reason that advertisers use attractive people in their ads. Indeed, there are a couple of reasons. First—surprise, surprise—people shown ads for fictitious products in a laboratory setting are more likely to say they would be willing to buy a product if the person in the ad is attractive than if the person is ordinary looking.13 (My colleague David Daniel notes that it’s easy to separate bona fide scientists who seek to apply research to K–12 education from charlatans: charlatans are more attractive and have beautifully coiffed hair. Being bald myself, I thought this comment showed great insight.)
The second way that attractiveness persuades you may take much longer to develop, but it’s also more powerful. Sometimes the attractive person is not there to give us a message at all—he or she is there simply to look attractive (Figure 1.1). How might an attractive woman help sell a car? Consider the case of the Honda 600 Coupe (Figure 1.2), an inexpensive, rather boxy little car sold in the early 1970s. About the last thing you’d call this car is sexy.
FIGURE 1.1: Many advertisers use attractive models in an overt way to sell products. Although we think it has no impact on us, it does make us regard their products more favorably.
FIGURE 1.2: The Honda 600 Coupe, which most would see as lacking sex appeal.
Honda came up with a clever advertisement, highlighting the car’s low price. The image showed eight attractive women standing behind the car. The text implied that by spending less on his car, a guy would have more money to date these beautiful women.
Most people think that advertisements have little impact on them . . . although they think they do affect other people.14 The Honda ad may not convince readers via the soundness of the suggested dating-investment strategy—in fact, I’m guessing most guys would dismiss it—but it might work by classical conditioning—the same type of learning that made Pavlov’s dog salivate when it heard the bell (Figure 1.3).
FIGURE 1.3: The three steps of learning via classical conditioning.
Before Pavlov began the experiment, there was a natural association in his dog between food and salivating: if you put food in a dog’s mouth, it will produce saliva as part of the digestive process. The experiment really begins with step 2, in which Pavlov repeatedly presents the bell and the food together. With enough repetitions, these two become associated, and the bell is enough to elicit salivation, shown as step 3.
Advertisers are not interested in getting you to salivate, but they are interested in changing your emotional response to their products, and that can be done with classical conditioning. A Honda 600 Coupe can be made to seem sexy if it is associated with something that people already think is sexy (Figure 1.4).
FIGURE 1.4: Emotional responses—such as the positive emotion of seeing attractive women—can be classically conditioned as easily as salivation can.
Step 1 represents a preexisting response—in this case, the positive emotion that the magazine reader feels when seeing attractive women. In step 2, the sight of the Honda 600 Coupe is paired with the sight of the attractive women. If this step is repeated enough (that is, the person sees the Honda 600 Coupe advertisement repeatedly), eventually the sight of the car will come to elicit the emotional response elicited by the attractive women.e So you don’t need to believe the overt content of advertisements for them to have an effect on you.15 The point of the ad was probably not to entice young men who could afford a 1972 Ford Mustang (about $3,000) to buy a Honda (about $1,700) so that they could use the extra money to attract and date beautiful women. That’s a tough sell. The point was to make the Honda seem like a little bit less of a dud, to make the emotional reaction to it a little more positive, so that someone with only $2,000 to spend would prefer the Honda to the Volkswagen Beetle.
Perhaps the best example of the impact of emotional conditioning comes from a famous blunder: the introduction of New Coke. The early 1980s was a difficult time for Coca-Cola. The brand, which had long dominated its closest competitor, Pepsi, was losing market share. Pepsi ran a series of effective advertisements showing hidden-camera accounts of dedicated Coke drinkers comparing Coke and Pepsi in blind taste tests and preferring Pepsi. And Pepsi claimed that such taste tests had been conducted in a rigorous fashion and that more than half of avowed Coke drinkers actually preferred the taste of Pepsi.
In a move that in retrospect looks panicky, executives at Coke decided to change the taste of their flagship product. New Coke was introduced in 1985, and consumers hated it immediately, thoroughly, and with finality. Attention has been drawn to the fact that the famous Pepsi taste tests didn’t match the way people actually use the products. After all, you don’t take a few sips of a cola; you typically drink eight ounces or more. The argument goes that Pepsi tastes good initially because it’s a little sweeter than Coke, but after a few ounces, people prefer Coke. That may be true, but that can’t explain the emotional outrage that followed the introduction of New Coke. A consumer hotline at the company was receiving eight thousand calls per day, virtually all of them complaints. When New Coke ads appeared on screens at sporting events, crowds booed.16
People were angry about the disappearance of Coke not simply because they thought it tasted better. People had an emotional attachment to Coke. The Coca-Cola corporation had spent decades and untold millions of dollars building an association in people’s minds between Coke and patriotism, Coke and Santa Claus, Coke and young love, and so on. Then the corporation took all of that away, offering the promise that New Coke tasted better. It’s as though I went to a teenager’s house and said, “You know how your mom is always nagging you and won’t get you the cool cell phone you want and embarrasses you in public? I found someone who won’t do those things. Here’s New Mom!” New Mom might have objective features that Old Mom didn’t, but the emotional attachment to Old Mom is not so easily replaced.
We like (and therefore believe) not only people who are attractive but also people whom we perceive to be similar to us. The classic experiment studying this phenomenon was conducted in the spring of 1954, just before the Supreme Court decision on school desegregation. Black college freshmen were asked to listen to a radio broadcast during which a guest argued that if the Supreme Court ruled segregation unconstitutional, it would still be desirable to maintain some private black colleges as all black, in order to preserve black culture, history, and tradition. It was known that a large majority of the subjects opposed that idea. Yet they found the communication fairly persuasive when the speaker was presented as similar to them; he was described as the president of the student council at a leading black university. Black students were much less persuaded when the speaker was described as a white adult.17
People who are like us seem more trustworthy, less likely to steer us wrong. But of course, they are not always more likely to be knowledgeable. On occasion they are, as when a teacher finds a message about classroom practice more believable because it is delivered by another teacher. In that case, the message seems more believable not only because the listener can identify with the speaker but also because the speaker has expertise that’s relevant to the message. That expertise effect still applies when the similar-to-me effect is absent. In short, people figure that experts know what they are talking about.18 This seems only logical; shouldn’t I believe my pediatrician rather than my friend the graphic designer when each makes a different recommendation for treating my child’s rasping cough? Sure, but as we’ll see in Chapter Six, the issue of expertise is more complex than you might guess.
• • •
Let’s take a step back to remind ourselves of the big picture. We’re talking about why people believe what they believe, and in particular how they evaluate new information. I’ve suggested that we are often on autopilot, even when exposed to messages that are meant to persuade us. Rather than carefully evaluating the factual basis of the message and the logic of the argument, we rely on what are often called peripheral features of the message (in contrast to the facts and logic, which would be the central features). Peripheral features include things like the familiarity of the message, how it makes us feel, the attractiveness of the source of the message, whether we identify with him, and his apparent expertise.
But surely we think some of the time? Okay, I probably won’t think too carefully about the car advertisement as I stand in line at the bank. But what if I’m in the market for a car? Won’t I pay more attention to the advertisement? Won’t I evaluate the meaning of “it has the best repair record of any American car in its class” and whether the car really is the “quintessence of luxury”?
Yes. We are much more likely to snap out of autopilot and really evaluate persuasive messages when we perceive the stakes to be high. The stakes are high when the persuasive message is personally relevant (as when we’re in the market for a car) or when we think we might be called on to describe the pros and cons of the argument (for example, when we make a decision at work and the boss asks for an explanation).
But wanting to evaluate a message is not the same as evaluating it. And evaluating it is not the same as evaluating it effectively.
Unfortunately, we still make plenty of mistakes when we evaluate arguments, even when we are not on autopilot, when we’re really doing our best to think things through. Why?
Two things must be in place for us to evaluate an argument successfully. We must be motivated to do so—as mentioned, that usually happens when we have some personal stake in the argument or when we think we might be called upon later to summarize it or explain a decision. But in addition to being willing to evaluate the argument, we must also be able to do so, and here we may encounter significant stumbling blocks.
The first of these is attention. Suppose I’m a teacher, and I’m required to attend a presentation by a district official who will describe a new scheduling scheme for my school. But I didn’t sleep much the night before, and it’s warm in the auditorium. I try to keep my mind on what the speaker is saying, but my wife’s birthday is the next day, and I haven’t planned any sort of celebration, and ideas of what I might pick up on the way home keep popping into my head. In short, I want to listen, but I’m tired and distracted, and I can’t really think through the speaker’s argument as to why this change is going to save money and benefit students, yet will call for no extra work by staff.
Evaluating the strength of her argument and judging the truth of the facts she’s citing might be hard when I’m tired and distracted, but picking up on the peripheral cues of the message is not demanding at all. I can do that even when I’m tired and distracted. I notice that the speaker is attractive, and her manner is warm and sincere. She mentions several times her own experiences in the classroom, so I know she’s a teacher, like me. And even though I’m not really following the argument, she seems quite confident, and she seems to be listing a lot of reasons that this is a good idea, including citations from some research experts.
When someone presents an argument and we’re too tired to really figure out what she’s saying, most of us don’t withhold judgment, even though we know that’s probably the smartest thing to do. We’re likely to use peripheral cues. I won’t leave the auditorium as a cheerleader for the new plan, but I might very well leave with a vague sense that it’s going to be all right.
Now suppose I’m not tired and distracted. The district official is giving her talk, and I’m giving her my full attention. But I’m still not getting it. She’s explaining how the schedule change saves money, but it doesn’t make any sense to me. She emphasizes that everyone will work the same hours at the same salary, and when it comes to the savings part, she uses some accounting jargon that I don’t know. The same thing happens when she talks about research that is supposed to show that this new schedule helps students. She doesn’t just say, “The research shows it works”; she’s actually describing the research in detail, which I appreciate . . . but it’s too much detail. She’s talking as if we’re all researchers, and again, I’m not really following it. At the end of the presentation, a friend who I know is quite sharp on business matters asks a question about the details of the accounting, and the speaker answers promptly. My friend nods, apparently satisfied. A little later, someone I don’t know very well asks a question about the research studies, and again she answers promptly, and the questioner seems to think the answer was okay.
Just as you do when you’re tired, if the argument is too technical to follow, you use peripheral cues:20 the speaker’s attractiveness and likeability, the fact that you identify with her and that she seems well informed, and the social proof that others at the presentation seem to be persuaded. So the first challenge to critically evaluating scientific research in education is pretty obvious. We’re talking about technical information that is hard to evaluate. And you know that a speaker can twist results or cite only the studies that support her case and omit the ones that don’t, and will likely get away with it, unless you know the research literature quite well.
You might think that people surely would refrain from using peripheral cues when the stakes are high. But they don’t. Even when we’re picking a president, we care very much about the candidate’s attractiveness and how he or she makes us feel—more than we care about his or her ideas.21 Another example comes from higher education. Selecting a college is certainly a high-stakes decision, and presumably it’s one that people would consider carefully. But comparing candidate colleges is complicated, so parents and kids use peripheral cues: some global sense of “reputation” (which is just another name for social proof) and, curiously enough, price. When we are unsure of the quality of a product, we use price as a guide: if it’s expensive, surely it’s good. Traditional economic theory would indicate that raising tuition would decrease the number of people wanting to go to a college. In fact, the opposite is true. Raising tuition increases the number of applicants.22
Another stumbling block in trying to evaluate the strength of an argument is perhaps the most troubling. Each of us is pretty reluctant to change our beliefs. We like to imagine ourselves as impartial judges, rationally weighing evidence and ready to accept any conclusion to which the facts point. We’re not. An enormous amount of research shows that we are biased to conclude that new evidence supports what we already believe. To extend the metaphor, we are not judges weighing evidence: we are attorneys building a case, and we build it not to convince a jury but to convince ourselves. We seek to persuade ourselves that our beliefs have always been correct and that the new information before us merely confirms what we already knew. This tendency is called the confirmation bias, and it affects all stages of thinking: what information we seek, how we interpret information when we find it, and how we remember it later.
Here’s a simple example of our bias when we’re gathering information. Suppose I challenge you to guess the number I have in mind. I tell you that it’s between one and ten, but rather than have you guess outright, I ask you to pose yes-no questions to deduce the number. Suppose you know that I think seven is my lucky number, so you’re guessing I picked seven. You have a hypothesis, and now you must gather some information to test whether it’s true. Consider this: you could ask me, “Is the number odd?” or you could just as well ask, “Is the number even?” The confirmation bias refers to our tendency to seek information that confirms our hypothesis—if you hypothesize that the number is odd, you’re more likely to ask “Is the number odd?” than “Is the number even?”23
A bias in playing a guessing game is harmless, but seeking only confirming information in other contexts can lead to trouble. Your hypothesis can be wrong—even very wrong—but you still might find a few positive examples, and they will make you think you’re correct. Suppose I’m a job interviewer, and I’m interviewing an applicant who is an acquaintance of someone in my office. My coworker tells me that the applicant is quite introverted. The confirmation bias will make me more likely to pose questions that assume the applicant is an introvert, and the person will come off looking like one.24 Worse yet, suppose I’m a physician, and a few symptoms lead me to suspect that a patient has a particular disease. Might not the confirmation bias lead me to order tests that might confirm my diagnosis, instead of other tests? The answer is yes,25 although more experienced doctors may be better at resisting this tendency.26
The confirmation bias is not restricted to how we seek out information. We’re more likely to notice confirming evidence and to ignore or discount disconfirming evidence. This phenomenon was first demonstrated in a clever experiment using college classrooms.27 An experimenter appeared in a college course and told students that their regular professor was out of town and that a substitute would be arriving soon. The regular professor had given him (the experimenter) permission to collect the students’ opinions of this substitute as part of an ongoing research study. To provide a bit of background information, the experimenter said that each student could read a brief biography of the substitute. Each student received a written paragraph. The biographies were all identical, with one crucial exception: half of the students saw this sentence as part of the biography: “People who know him consider him to be a rather cold person, industrious, critical, practical, and determined.” For the other students, the words “rather cold” were replaced by “very warm.” Naturally, the substitute professor had no idea which students had seen which description. But after the class, people who had expected to see a warm person felt that they had seen one. They rated the substitute as more considerate, more good-natured, and funnier than the students who expected the substitute to be a cold fish.
We see what we think we’ll see. This helps us understand how stereotypes can be maintained. The bigot who thinks, for example, that African Americans are lazy will tend to notice and remember any instance of laziness he observes in African Americans. Hence, the bigot will note (and remember) an encounter with a lackadaisical store clerk who is black, but the same interaction with a white clerk will go unnoticed, or the bigot will assume that the clerk has a valid excuse for being a little slow.28
The confirmation bias also applies to how we interpret ambiguous information: it’s interpreted as being consistent with our beliefs. For example, in one study, subjects were presented with true facts about politicians that showed them as contradicting themselves. Thus subjects read that in 1996, John Kerry had said that the Social Security system had to be overhauled, including cutting benefits and raising the retirement age. Subjects were then told that during the 2004 presidential campaign, Kerry had promised that he would never cut Social Security benefits or raise the retirement age. When subjects were asked what they thought of this, virtually all faulted Kerry for the contradiction. Not too surprising. But the really interesting part of the experiment came next. Subjects were given a potential explanation for Kerry’s contradiction; they were told that in 1996, economists had thought that the Social Security system would run out of money in 2020, and that urgent action was needed to save it. But at the time of his campaign statement, economists had reversed their opinion, and it seemed that the system was no longer in imminent danger. This third statement renders Kerry’s apparent turnaround ambiguous: Did he rationally respond to changing economic conditions, or did he go back on his word so that he could appeal to an important political constituency? Once the information was ambiguous, the confirmation bias came out in full flower. Subjects who identified themselves as Democrats thought Kerry’s change of heart was perfectly justified, whereas Republicans thought that Kerry was using economic forecasts as an excuse and was obviously dishonest.29
Even if we are forced to acknowledge that some evidence goes against our beliefs, and even if this evidence cannot be twisted in our minds so that it seems ambiguous, we still have another way to maintain our beliefs: we set a higher standard for disconfirming evidence than for confirming evidence.30 In one study, the subjects’ attitudes on two controversial issues—gun control and affirmative action—were measured.31 Then they read arguments on both sides of each issue and were asked to rate the strength of the arguments. Subjects were urged to set any personal opinions aside and to try to be as objective as possible. And subjects believed that they were doing so . . . but—you guessed it—their ratings were influenced by their beliefs. People who favored gun control thought that the pro–gun control arguments were very strong and that the anti–gun control arguments were weak. People who did not favor gun control showed the opposite pattern of ratings. It seems that when we encounter a conclusion we disagree with, our minds spring into action, looking for flaws in the argument. But if we agree with someone, we’re more likely to say to ourselves, “Yes, yes, I already know this. I’m so glad you agree with me.”f 32
Sometimes the beliefs that we seek to confirm can be more subtle. They do not concern a specific object or fact about the world, but rather constitute a more global sense we have about the nature of things. We might call them meta-beliefs because their generality means that they will influence many other beliefs. One example might be that “natural things are generally good, and are better than similar objects that are artificial.” Some confirmation biases would be an obvious consequence of this belief. For example, someone who held this belief might set a low standard for evidence that an artificial sweetener like Aspartame causes cancer. But this meta-belief could have more subtle consequences as well. For example, if you think that natural things are good, you might be open to the idea that humans left in a more natural state are more likely to be healthy, virtuous, and morally upright. It is modern, urban society—an unnatural human construction—that leads to crime, depravity, and wickedness.
Scientists have identified a few meta-beliefs that many of us share. An example is the just-world belief, a sense that the world is basically fair. According to this belief, living a moral, just life brings happiness and good fortune, whereas immoral behavior is punished by fate, eventually.33 The subtlety and importance of this belief to persuasion can be appreciated from this experiment.34 Researchers first measured college students’ knowledge about global warming and their attitudes toward the issue—how real was the danger, what is likely to happen to the climate in the future, and so forth. Next, subjects read an article describing the dangers of global warming, which ended in one of two ways. One version concluded with an apocalyptic warning of terrible danger to future generations. The other ended with similar facts but a more hopeful message about possible solutions through new technologies. Subjects who read the doomsday message became more skeptical about the existence of global warming. Researchers hypothesized that this was a consequence of the just-world belief: if the world is just, innocent people do not deserve to die as a consequence of global warming, so it is deemed less likely to be a problem.
The confirmation bias sounds . . . well . . . stupid. Confronted by evidence that we’re wrong, we put all our cognitive energy into figuring out why we must be right. It doesn’t seem very adaptive. But when you think about it, it’s not quite as dumb as it seems. It would be disruptive indeed if you changed your beliefs every time you encountered a new bit of evidence. I say “disruptive” because very few of our beliefs are wholly isolated. For example, my belief that global warming is a serious problem is connected to my belief that I was smart and virtuous to buy a hybrid car. It’s also connected to my dislike for my coworker who is full of loud scorn for global warming. So if I change my belief about global warming, that affects my belief about my car (I was a sucker to pay extra for a “green” car) and about my coworker (that loudmouth was right all along).35
A useful metaphor is to think of belief as a web, with each fact we believe varying in its interconnectedness to other facts.36 The greater this interconnectedness, the more we can expect that I will struggle to maintain this belief, because changing it will have far-reaching consequences throughout my web of belief. Beliefs that are newly acquired have not had much time to be thoroughly incorporated into the web, so are relatively isolated from other beliefs. These I can change without disrupting other beliefs, so I’ll be more ready to do so. As he so often did, Tolstoy captured this human truth in vivid terms: “The most difficult subjects can be explained to the most slow-witted man if he has not formed any idea of them already; but the simplest thing cannot be made clear to the most intelligent man if he is firmly persuaded that he knows already, without a shadow of doubt, what is laid before him.”37
Beliefs are not simply matters of fact. Emotion is intertwined with belief, a factor that we have until now ignored and to which we must now turn.
I’ve made it sound as though people are both completely illogical and ruled by logic. On the one hand, I’ve said that sometimes we don’t bother to think logically, and even when we try to do so, we are nevertheless influenced by peripheral cues like a speaker’s attractiveness, and we stack the evidence in such a way as to maintain our current beliefs. On the other hand, I’ve made it sound as though the only acceptable motivation for belief is accuracy, that all we ought to care about in choosing what to believe or not to believe is whether the belief is aligned with the real world.
People do care about accuracy.38 But we’re not so coolheaded that we care about accuracy to the exclusion of all else. People have other motivations for believing or disbelieving:
Some beliefs may be linked to important aspects of our identity, our self-concept. For example, suppose that you see yourself as politically liberal. You conscientiously recycle, you contribute to progressive political candidates, you believe that the government plays an important and effective role in righting social wrongs, and you are somewhat distrustful of large corporations. You think corporations put profit above human values and that executives in large corporations inevitably do likewise. What’s more, you think of liberal values as an important part of who you are. When asked “Tell me about yourself,” it comes up early in your description.
Now imagine that your school district is considering hiring a superintendent who has no experience in education, but has worked for the last thirty years as a high-level manager in the corporate world. You decide to look at published research on the track records of business leaders who have run school districts with no prior education experience. Here’s a case where you have two motivations for belief. On the one hand, you are motivated to be accurate in assessing how likely the candidate will be to succeed. On the other hand, you are motivated to believe that he will not succeed. There’s more to it than maintaining your current beliefs. Part of your self-identity as a liberal is that you see important differences between yourself and corporate executives; those people do not have the right values, the right sense of community, or a good understanding of children. To find that corporate executives have made excellent school superintendents would cast doubt on the accuracy of your view of the corporate world, and for you to conclude that the corporate world may not be so bad is threatening to your self-image as a liberal.g “So now I think that big corporations are just fine and that the vultures who sell us stuff we don’t need and pollute our environment and trample the underprivileged should be in charge of our children? Who the heck am I, anyway?”
A second motivation for belief is to protect values that you see as sacred. Examples might be “I believe that people should be free,” or “I believe in the sanctity of human life,” or “God’s intention is that sex be between a man and a woman.” The last of these examples is controversial in American society today, but even beliefs that are not controversial become controversial when we begin to interpret and apply them. Everyone believes that human life is sacred, and everyone believes in freedom; the controversy over abortion is largely due to the pitting of those two values against one another: if an hours-old zygote is a human life, then abortion is unconscionable, but if it’s not, then restricting a person’s right to abort it is government interference with an individual’s liberties. Could scientists provide a definitive answer to whether life does indeed begin at the moment of conception? I doubt it, but even if they could, most people would not want to hear the answer. Their position on abortion is not driven by facts but by values.
As with the maintenance of self-concept, the protection of sacred values can have far-reaching implications, depending on how the value is interpreted. For example, consider the belief “All people are equal.” Most people interpret this idea to mean “equal before the law” and “equal in dignity” and “equally important as living beings.” But someone may also have the sense that “equal” extends to abilities. If so, he may be uncomfortable with the idea that apparent differences in intelligence are largely genetic, that some people are just not very bright and there is not much that they can do about it. That would seem to be a cosmic violation of one of his core values. Nature or God seems not to intend that people be equal.
So how does he resolve this conflict? One choice is to deny the evidence that intelligence is genetically determined. People start life with roughly equivalent abilities, but some live in poverty or have careless parents or come from crime-ridden neighborhoods. Society makes them unequal. Or he could conclude that, yes, intelligence is largely determined by genes, but when someone is shortchanged in intelligence, nature makes up for it by endowing that person with greater emotional sensitivity or athletic ability or some other skill. Drawing either of these conclusions can, in turn, affect one’s views on other large-scale policy matters. Consider how your views would differ on funding for public education, on public assistance programs like welfare, on criminal justice policies, depending on whether you think that people are smart or not-so-smart either because of their genes or because society made them that way. My point is not about the scientific support for any of these beliefs.h My point is that the shaping of these beliefs does not depend solely on a hunger for factual accuracy about the nature of the world. People’s values shape their beliefs about scientific matters, such as the relative contribution of genes and environment to intelligence. They then interpret data to confirm those beliefs.
A third reason that we adopt beliefs is that they help build a sense of social identity, of solidarity with a group. Some beliefs and behaviors that we adopt for this purpose are quite obvious. When I arrived at college, I had never attended a basketball game. I doubt I knew how the game was played, beyond a crude knowledge of the rules. But I was attending Duke, home of a basketball dynasty, and a school that had the wisdom or foolhardiness to reserve the plum seats of Cameron Indoor Stadium not for big donors but for undergraduates, affectionately called “Cameron Crazies.” Like many fellow students, I waited for hours in foul weather to get tickets, I knew all the statistics, and I shouted myself hoarse at games. I absorbed from my peers not only passion but beliefs: beliefs about the value of big-ticket athletics to campus spirit, for example, and beliefs about the indirect benefits of athletics to the common good of the university through improved fundraising. I developed these beliefs solely because of the social environment and my desire for solidarity with my peers. Had I attended a school with weak athletic teams, my beliefs likely would have been different.
It is hard to be unaffected by one’s social group. For example, my social group is composed of college professors, and college professors are, compared to other Americans, politically liberal. Suppose that I start my job with relatively conservative views. My reaction to this strong current of opinion need not be to absorb the opinions of the group, as I did with basketball as a student. That’s less likely because my political views are more settled than my views on basketball. But at the very least, I am going to meet a number of nice, helpful people who hold liberal political views. Because I’m surrounded by left-leaning people, I will have greater access to liberal views on current events than I have had in the past. And whether I like it or not, I will absorb the idea that liberal views are part of what it means to be a college professor, just as being a basketball fan was part of what it meant to be an undergraduate at Duke.
A final contributor to belief may be strongly held emotion. Consider this example. In the summer of 2010, there was an acrimonious national debate over the building of a mosque and cultural center near the site of the September 11 attacks in New York City. Not all of the information entering this debate was accurate, and one often-repeated rumor was that the imam behind the plan, Feisal Abdul Rauf, was a terrorist sympathizer. Two fact-checking organizations, known and respected for their objectivity (FactCheck.org and PolitiFact), had investigated this rumor and found it to be false. Yet it was widely believed. Two psychology professors at Ohio State University decided to see if they could persuade people that the rumor was false.39 It wasn’t easy. When people who either believed the rumor or were unsure about it were exposed to the information from the fact-checking organizations, only 25 percent concluded that the rumor was false. Furthermore, the researchers found that it was relatively easy to undo the persuasive power of the facts. If the text was accompanied by a picture of the imam in traditional Arab garb, the percentage of people persuaded dropped, presumably because it made him seem less like an American and perhaps less loyal to his country and less sensitive to American sensibilities.
Note that the researchers weren’t trying to convince people that building the mosque was a good idea. They were simply asking them to reevaluate the rumor that the promoter of the idea had been a terrorist sympathizer in the past. If people want their beliefs to be accurate, why wouldn’t they change them when confronted with relevant facts? A factor that likely played a role in this case is emotion. For most Americans, any thought connected with the September 11 attacks calls up anger and fear. It is difficult for facts to gain a toehold under those circumstances.
Here’s another example. Suppose that I am a bit prudish about all sexual matters, but I find the thought of homosexual acts to be outright disgusting. In fact, the feeling is so strong that I’m reluctant to talk about any aspect of homosexuality at all, because doing so inevitably calls up this strong, unpleasant emotion. Now suppose you are trying to persuade me that there is no harm in an openly gay man teaching mathematics to seventh graders. You may hit me with factual arguments—for example, the lack of evidence that a teacher’s sexual orientation influences students. But factual arguments won’t do much good, because what’s behind my objection is not a fact but an emotion—disgust at the thought of homosexuality. I’m unlikely to be aware of what’s driving my opinion, so I may answer you with facts of my own or with an attempt to discredit your argument. But the whole discussion is actually a red herring.40
• • •
This chapter has been a parade of disappointing facts, easily summarized: when we don’t weigh evidence carefully, we are prone to believing or disbelieving things for trivial reasons; and even when we do weigh evidence carefully, we are still subject to those trivial influences. If we are really interested in maintaining accurate beliefs, and especially in knowing which educational practices or reforms are “scientifically based,” what are we to do? Part of the answer is to gain a better understanding of the precise nature of the “trivial influences” to which we are most susceptible, the better to avoid them. That is the subject of Chapter Two.
a Ap Dijksterhuis, a leading social psychologist from Holland, put it this way: “If [an] editor would have asked us to write about automaticity in social behavior 25 years ago, he would have been met with a blank stare. . . The whole concept of automatic or unconscious behavior would have struck anyone as odd at that time. . . [Today] if we wanted to write a short chapter, perhaps we should have asked the editor to assign us a chapter on conscious processes in social behavior.” Dijksterhuis, A., Chartrand, T. L., & Aarts, H. (2007). Effects of priming and perception on social behavior and goal pursuit. In J. A. Bargh (Ed.), Social psychology and the unconscious: The automaticity of higher mental processes (pp. 50–131). New York: Psychology Press.
b Some studies show that if you’re having trouble understanding someone who speaks with a strong accent, imitating it can actually improve comprehension. Adank, P., Hagoort, P., & Bekkering, H. (2010). Imitation improves language comprehension. Psychological Science, 21, 1903–1909.
c The idea is easy to test. One way is to show a group of people an ad for butter and ask them to rate how attractive they find it, whether they think the ad makes them a little more likely to buy the brand, and so forth. Show a second group of people the same ad with the erotic picture subtly airbrushed into the photo and compare the ratings. For a review of such research, see Theus, K. T. (1994). Subliminal advertising and the psychology of processing unconscious stimuli: A review. Psychology and Marketing, 11, 271–290.
d The idea behind learning styles is not that people vary in mental ability—it’s that two people with the same ability have preferences about which way is easiest for them to understand and learn, and that these preferences have an impact on the efficacy of learning. For more on learning styles, see Riener, C., & Willingham, D. T. (2010). The myth of learning styles. Change, 42, 32–35.
e The response that comes from conditioning is seldom as robust as the response to the real stimulus. That is, the dog doesn’t salivate as much in response to the bell as it does in response to the food, and the positive feeling from seeing the Honda is not the same as it is from seeing the attractive women. But there is an effect.
f Scientists are not immune to this motivated reasoning. When an experiment turns out as we expected, we take the results at face value. But when it turns out other than we expected, we comb over the data to make sure they were recorded correctly, reconsider whether we implemented variables properly, recheck equipment, and so on. We’re more critical of disconfirming evidence than of confirming evidence. For examples of the confirmation bias in science, see Koehler, J. J. (1993). The influence of prior beliefs on scientific judgments of evidence quality. Organizational Behavior and Human Decision Processes, 56, 28–55; Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1, 161–175.
g I don’t mean to suggest that it is only liberals who care to maintain their self-image. The example could have just as easily been of political conservatives who would be motivated to see charter schools succeed because the policies on their governance seem to align with conservative views of the roles of competition.
h If you’re curious: the very premise that intelligence is mostly genetic is under assault. Through 1990, most psychologists would have said that perhaps 70 percent of intelligence (as measured by middle-of-the-road intelligence tests) is determined by your genes and perhaps 30 percent by the environment. Today, most would reverse those percentages. For a readable summary, see Nisbett, R. E. (2009). Intelligence: What it is and how to get it. New York: Norton. There is no evidence at all for the idea that people who are low in intelligence make up for it with some other ability. In fact, abilities tend to be positively related, and this relationship is stronger for people with lower ability levels; see Detterman, D. K., & Daniel, M. H. (1989). Correlations of mental tests with each other and with cognitive variables are highest for low IQ groups. Intelligence, 13, 349–359.
Notes
1. Langer, E., Blank, A., & Chanowitz, B. (1978). The mindlessness of ostensibly thoughtful action: The role of “placebic” information in interpersonal interaction. Journal of Personality and Social Psychology, 36, 635–642.
2. James, W. (1890). The principles of psychology (Vol. 1). New York: Henry Holt, p. 115.
3. Chartrand, T. L., Maddux, W. W., & Lakin, J. L. (2005). Beyond the perception-behavior link: The ubiquitous utility and motivational moderators of nonconscious mimicry. In R. Hassin, J. Uleman, & J. A. Bargh (Eds.), The new unconscious (pp. 334–361). New York: Oxford University Press.
4. 1 Corinthians 9:19–23 (New International Version). Available online at http://www.biblegateway.com/passage/?search=1+Corinthians+9%3A19–23&version=NIV.
5. Johnston, L. (2002). Behavioral mimicry and stigmatization. Social Cognition, 20, 18–35.
6. There are two particularly prominent psychological models of how persuasion happens. Both have a conscious and an unconscious route. Petty, R. E., & Cacioppo, J. T. (1981). Attitudes and persuasion: Classic and contemporary approaches. Dubuque, IA: Brown; and Chaiken, S. (1987). The heuristic model of persuasion. In M. P. Zanna, J. M. Olson, & C. P. Herman (Eds.), Social influence: The Ontario symposium (Vol. 5, pp. 3–39). Hillsdale, NJ: Erlbaum.
7. Packard, V. (1957). The hidden persuaders. New York: McKay.
8. One recent example is Bullock, A. (2004). The secret sales pitch: An overview of subliminal advertising. San Jose, CA: Norwich.
9. Zajonc, R. B. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social Psychology Monograph Supplement, 9, 1–27.
10. For example, Begg, I., Armour, V., & Kerr, T. (1985). On believing what we remember. Canadian Journal of Behavioral Science, 17, 199–214.
11. Bacon, F. T. (1979). Credibility of repeated statements: Memory for trivia. Journal of Experimental Psychology: Human Learning and Memory, 5, 241–252.
12. Begg, I. M., Anas, A., & Farinacci, S. (1992). Dissociation of processes in belief: Source recollection, statement familiarity, and the illusion of truth. Journal of Experimental Psychology: General, 121, 446–458.
13. For example, Petroshius, S. M., & Crocker, K. E. (1989). An empirical analysis of spokesperson characteristics on advertisement and product evaluations. Journal of the Academy of Marketing Science, 17, 217–225.
14. This phenomenon is observed not only in advertisements but in the media more generally. Perloff, R. M. (1999). The third-person effect: A critical review and synthesis. Media Psychology, 1, 353–378.
15. Stuart, E. W., Shimp, T. A., & Engle, R. W. (1987). Classical conditioning of consumer attitudes: Four experiments in an advertising context. Journal of Consumer Research, 14, 334–349.
16. Pendergrast, M. (1993). For God, country, and Coca-Cola. New York: Basic Books.
17. Kelman, H. C. (1958). Compliance, identification, and internalization: Three processes of attitude change. Journal of Conflict Resolution, 2, 51–60.
18. DeBono, K. G., & Harnish, R. J. (1988). Source expertise, source attractiveness, and the processing of persuasive information: A functional approach. Journal of Personality and Social Psychology, 55, 541–546.
19. Curly Howard of the Three Stooges, from Calling All Curs (1939).
20. Yalch, R. F., & Elmore-Yalch, R. (1984). The effect of numbers on the route to persuasion. Journal of Consumer Research, 11, 522–527.
21. Abelson, R. P., Kinder, D. R., Peters, M. D., & Fiske, S. T. (1982). Affective and semantic components in political person perception. Journal of Personality and Social Psychology, 42, 619–630.
22. Bowman, N. A., & Bastedo, M. N. (2009). Getting on the front page: Organizational reputation, status signals, and the impact of the U.S. News and World Report on student decisions. Research in Higher Education, 50, 415–436. The effect may not hold for public institutions, however: Hemelt, S. W., & Marcotte, D. E. (2011). The impact of tuition increases on enrollment at public colleges and universities. Educational Evaluation and Policy Analysis, 33, 435–457.
23. Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12, 129–140.
24. Snyder, M., & Swann, W. B., Jr. (1978). Hypothesis testing in social interaction. Journal of Personality and Social Psychology, 36, 1202–1212.
25. Elstein, A. S., & Schwarz, A. (2002). Clinical problem solving and diagnostic decision making: Selective review of the cognitive literature. British Medical Journal, 324, 729–732.
26. Krems, J. F., & Zierer, C. (1994). Are experts immune to cognitive bias? Dependence of “confirmation bias” on specialist knowledge. Zeitschrift für Experimentelle und Angewandte Psychologie, 41, 98–115.
27. Kelley, H. H. (1950). The warm-cold variable in first impressions of persons. Journal of Personality, 18, 431–440.
28. Snyder, M., & Cantor, N. (1979). Testing hypotheses about other people: The use of historical knowledge. Journal of Experimental Social Psychology, 15, 330–342.
29. Westen, D., Blagov, P. S., Harenski, K., Kilts, C., & Hamann, S. (2006). Neural bases of motivated reasoning: An fMRI study of emotional constraints on partisan political judgment in the 2004 U.S. presidential election. Journal of Cognitive Neuroscience, 18, 1947–1958.
30. Munro, G. D., Leary, S. P., & Lasane, T. P. (2004). Between a rock and a hard place: Biased assimilation of scientific information in the face of commitment. North American Journal of Psychology, 6, 431–444.
31. Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50, 755–769.
32. Cacioppo, J. T., & Petty, R. E. (1979). Effects of message repetition and position on cognitive responses, recall, and persuasion. Journal of Personality and Social Psychology, 37, 2181–2199.
33. Hafer, C. L., & Bègue, L. (2005). Experimental research on just-world theory: Problems, developments, and future challenges. Psychological Bulletin, 131, 128–167.
34. Feinberg, M., & Willer, R. (2011). Apocalypse soon? Dire messages reduce belief in global warming by contradicting just-world beliefs. Psychological Science, 22, 34–38.
35. For a different perspective on the adaptiveness of reasoning, see Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34, 57–74.
36. Quine, W. V., & Ullian, J. S. (1970). The web of belief. New York: Random House.
37. Tolstoy, L. (1894). The kingdom of God is within you (C. Garnett, Trans.). New York: Cassell, p. 49. Available online at http://books.google.com/books?id=F00EAAAAYAAJ.
38. Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and conformity. Annual Review of Psychology, 55, 591–621.
39. Garrett, R. K., Nisbet, E. C., & Lynch, E. (2011). Undermining the corrective effects of media-based political fact checking. Paper presented at the annual conference of the National Communication Association, New Orleans, LA.
40. Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–834.