
I can't give up smoking—coughing is the only exercise I get.

—My friend Harry

In 1941 Admiral Kimmel, commander in chief of the Pacific Fleet, was repeatedly warned about the possibility of war with Japan. On November 24, he was informed that a surprise attack could occur in any direction. However, Kimmel didn't think the United States was in any great danger, and since Hawaii was not specifically mentioned in the report, he took no precautions to protect Pearl Harbor. On December 3, he was told that American cryptographers had decoded a Japanese message ordering their embassies around the world to destroy “most of their secret codes.” Kimmel focused on the word “most” and reasoned that if Japan were going to war with the United States, it would have ordered “all” of its codes destroyed. One hour before the attack on Pearl Harbor, a Japanese sub was sunk near the entrance to the harbor. Instead of taking immediate action, Kimmel waited for confirmation that it was, in fact, a Japanese sub. As a result, sixty warships were anchored in the harbor, and planes were lined up wing to wing, when the attack came. The Pacific Fleet was devastated, and Kimmel was relieved of his command.1 Our desire to cling to an existing belief in the face of contradictory evidence can have disastrous effects.

We have a natural tendency to confirm. That is, we selectively attend to information that supports our existing beliefs and expectations. Studies have shown, for example, that when we view a presidential debate, we pay more attention to information that's consistent with our political point of view. When believers in ESP are shown experimental results contrary to their belief, they remember less of that data than if the results had supported ESP.2 As I write this, President George W. Bush is under attack for starting a war with Iraq based on questionable intelligence. Although United Nations inspectors could find no evidence of weapons of mass destruction prior to the war, and some intelligence and policy advisors thought that Iraq was not an imminent threat to the United States, Bush (and Vice President Cheney) wanted to eliminate Saddam Hussein. Consequently, many experts now believe that Bush and Cheney “cherry-picked” the evidence, focusing on anything that supported a war and discounting evidence that did not. After the invasion, we found that nearly all of their supporting evidence was wrong.3 Using a confirming strategy can lead to dire consequences.

Confirming strategies maintain consistency in our beliefs. How does that happen? New information that's consistent with our existing beliefs is quickly accepted at face value. On the other hand, information that contradicts our beliefs is often ignored or critically scrutinized and discounted.4 For example, a team of psychologists had people read summaries of two studies relating to the effectiveness of capital punishment in preventing crime. The results of one study supported capital punishment while the other did not. It turned out that if the study's results were consistent with a person's beliefs, he thought that the study was well conducted. On the other hand, if the results were not consistent, he found numerous flaws in the study to discount its relevance. If we don't ignore contradictory evidence, we often find reasons why we shouldn't consider it.5

At times, the reasons we give to rationalize conflicting evidence can be quite laughable. Do you remember the psychic mentioned earlier who thought he could remotely view distant objects without using his eyes? To provide support for his ability, he said that the CIA spent millions on remote viewing, proving that there must be something to it. However, when asked why the CIA would shut down a successful program, he said that the Cold War was over, and so the program was no longer needed. This, of course, makes no sense because it implies that we have no need for intelligence gathering around the world. If that's the case, why is the CIA still in operation? When the psychic was asked why he wasn't rich if he had the ability to predict the stock market, he said that once a person knows he can do it, he is at such peace with his life that he doesn't feel the need for money. Again, one wonders why he wouldn't use his powers to generate considerable wealth for charity. Wouldn't that be a wonderful use of his gift? When I hear such comments I'm reminded of the quote at the start of this chapter. While said as a joke, it's indicative of the amazing lengths people will go to in order to rationalize what they want to believe.

As Michael Shermer has indicated, most of the time we form our beliefs not on the basis of empirical evidence or logical reasoning. Rather, we have belief preferences for a host of psychological and emotional reasons, including parental or sibling influences, peer pressure, education, and life experience. We then search for evidence to support those predilections. In fact, this process is a main reason why smart people believe weird things. As Shermer notes, “Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons.”6

Gamblers are notorious for rationalizing away their losses so as to maintain a belief in their gambling strategy. If you listen closely, they actually rewrite their history of successes and failures, accepting their successes at face value, and fabricating reasons for their losses. My friend uses a basic strategy in blackjack to determine when he should take a card. Like many gamblers, he usually attributes the outcome of the game to his strategy when he wins. However, when he walks away a loser, he finds a number of reasons for the loss, none of which has to do with his play. The dealer may have changed, someone new sat at the table and ruined the flow of cards, there were too many players, or a person at the table's end took a bad hit. It's extremely common for gamblers to evaluate their outcomes in such a biased manner. Winning is interpreted as a reflection of one's gambling skill, while losing is explained away, and thereby discounted, as the work of outside forces beyond the player's control.

Sometimes gamblers even evaluate their losses as “near wins.” If a gambler bets on a winning football team, she is likely to believe the win is due to her superior insight and skill, even though it may have been caused by a fourth-quarter fumble by the opposing team, which allowed her team to score. If she bet on the losing team, however, she probably wouldn't question her skill or insight. Rather, she would think the loss was caused by a fluke fumble, and that if it didn't happen, she would have won. In effect, she interprets the outcome not as a loss, but as a near win.

Don't think that gamblers are the only ones fooling themselves. Many of us believe that our successes are due to what we did, and our failures are due to external events. Athletes attribute their wins to themselves and their losses to bad officiating. Students who perform well think a test is a valid assessment of their ability, while those who perform poorly think the test is unfair. Teachers believe that their students' successes are due to their teaching skills, while their students' failures are due to the students' lack of ability or motivation. If a manuscript is rejected for publication, researchers think it's due to the arbitrary selection of a particularly critical reviewer, as opposed to the quality of what they wrote.7

And so, we evaluate evidence in a biased fashion. We pay particular attention to evidence that supports our point of view, and either ignore or discount the importance of evidence that contradicts our beliefs. In fact, the desire to maintain our beliefs often makes us avoid situations that would unearth contradictory evidence in the first place. We typically associate with like-minded people and read books and magazines with orientations similar to our own. Seldom do we read a conservative magazine if we're liberal, or a liberal magazine if we're conservative, to get a better understanding of an opposing viewpoint. I once told a good friend, who is a staunch conservative, that he should read Al Franken's new book Lies and the Lying Liars Who Tell Them. I said, “The book is hilarious; I think you'll enjoy it even if you don't agree with its liberal bent.” He flatly refused. I said, “You don't have to reward Franken by putting money in his pocket—read my copy.” Again, a resounding no! Such a desire to attend to evidence supporting our beliefs acts as a filter mechanism that can actually be self-fulfilling. When we avoid contradictory data, it seems as though there's more data supporting our preconceptions, which, of course, reinforces our belief that we were right all along.

CONFIRMING OUR HYPOTHESES

It is the theory which decides what we can find.

—Albert Einstein

Our tendency to confirm is so ingrained in our cognitive makeup that we confirm even if we don't have a prior belief or expectation—all we have to do is test a hypothesis. Whether we realize it or not, we act as intuitive scientists, continually developing and testing hypotheses when we make our professional and personal judgments. For example, a doctor forms hypotheses about the potential causes of a patient's disease, and then tests them by questioning the patient and ordering medical tests. When making investment decisions, an investor tests whether a firm's future net income will increase (or decrease). We even test hypotheses in our everyday lives when we decide whether we like a person or not. In essence, we constantly test hypotheses when forming our judgments; and if we use confirming strategies, those judgments can be biased.

Imagine that you're talking with your friend John about a mutual friend, Barry. John tells you, “I always thought that Barry was outgoing—he's a real extrovert.” You haven't thought about it before, but you recall that Barry was at a party last week, he sometimes tells jokes, and he likes to go to a bar to wind down. Before long, you come to believe that Barry must be an extrovert. What's wrong with this thinking? When deciding whether Barry is an extrovert, we naturally start to think of the times he exhibited extroverted behavior. In effect, we think of things that confirm the hypothesis we're testing. If we focus on the times Barry was extroverted, we're likely to conclude that he is, in fact, an extrovert. The problem is, people are complex, and can exhibit both extroverted and introverted behavior at different times and under different circumstances. And so, if your friend had started off by saying he thought Barry was an introvert, you would likely have thought of a number of instances in which Barry exhibited introverted behavior. You might remember that he reads a lot and likes to spend time at the library. After focusing on these instances, you would likely conclude that Barry was more introverted. Thus, the way the hypothesis is framed can have a major impact on our final judgment.

Our bias to confirm can also affect how we search for information. For example, what if you didn't know Barry, but you had to decide if he's an extrovert by asking him two of the following four questions?

 

(1) In what situations are you most talkative?

(2) What factors make it hard for you to open up to people?

(3) What would you do to liven things up at a party?

(4) What things do you dislike about loud parties?

 

Which two questions would you ask? Most people select questions 1 and 3. However, when asked to decide if Barry is an introvert, people have a tendency to select questions 2 and 4. Why? Questions 1 and 3 relate to more extroverted behavior, while 2 and 4 concern introversion.8 Even the questions we ask to make our judgments (test our hypotheses) can bias us in favor of finding that hypothesis to be true. If we ask Barry, “In what situations are you most talkative?” to see if he's an extrovert, we'll begin to focus on those instances in which Barry talks a lot, and ignore those situations in which he doesn't. Given that people exhibit extroverted behavior in some cases, and introverted behavior in others, we can usually find a number of extroverted behaviors, even for a person who is more of an introvert.9

Research has found that we consistently employ confirming strategies in social interactions. In fact, psychologist Mark Snyder notes that our tendency to confirm is so entrenched in our cognitive makeup that it doesn't seem to matter whether a hypothesis comes from a source of high or low credibility, how likely it is that the hypothesis is true, or whether substantial incentives (e.g., monetary rewards) are given for accurate hypothesis testing.10 Our ingrained tendency to focus on confirming data usually wins out.

It's one thing to use confirming strategies to judge whether someone is an extrovert or introvert—the consequences of a wrong judgment will not be that important. But what about judgments that may have significant implications for a person's life? Would confirming strategies still occur? Several years ago, the TV show 60 Minutes asked three different polygraphers (let's call them A, B, and C) to conduct lie detector tests on three employees (let's call them X, Y, and Z) to determine who was stealing from a firm. A was told that X was suspected, B was told that Y was suspected, while C was told that Z was suspected, although no reason was given for the suspicion. You can probably guess the results. Polygrapher A found X to be guilty, B found Y to be guilty, and C found Z to be guilty. Research has shown that lie detector tests are quite unreliable—they're open to a lot of subjective interpretation. If a polygrapher has a preconceived notion of who's guilty, he can interpret the data to confirm his preconceived belief, spelling real trouble for suspected individuals.11

Could confirming strategies actually affect the type of sentences handed down in court? One study investigated the sentences that would be given if jurors considered the harshest, versus most lenient, verdict first.12 In most criminal cases, a jury is told to first decide if the defendant is guilty of the greatest offense that he or she is charged with. If reasonable doubt exists on that charge, they then proceed down the list to progressively lesser charges. For example, a jury often considers whether a defendant is guilty of first-degree murder, and if they can't agree on that verdict, they evaluate second-degree murder. This approach may bias the verdict delivered. People often cling to the first hypothesis considered, and then search for confirming evidence to support that hypothesis. If juries consider the harshest (most lenient) verdict first, they may focus on data supporting that charge, and render a harsher (more lenient) verdict.

Two experiments investigated this issue. In the first, participants, acting as jurors, decided whether a defendant was guilty of first- or second-degree murder, voluntary or involuntary manslaughter, or was not guilty. The case materials were adapted from an actual murder trial, where there were no eyewitnesses and most of the evidence was circumstantial. Half of the participants were asked to consider the harshest verdict first (murder in the first degree) and then proceed to progressively more lenient verdicts, while the other half started with the most lenient verdict (not guilty). Amazingly, 87.5 percent of the jurors chose not guilty if they started with a lenient verdict, while only 25 percent chose not guilty when starting with a harsh verdict!

A second experiment examined more jurors and added some new twists (e.g., whether or not the jurors were rushed to enter a verdict). The verdicts were scored on a scale, with one indicating not guilty and five indicating guilty of first-degree murder. When jurors were not rushed, the average verdict in the harsh-to-lenient condition was 3.26, while it was only 2.20 in the lenient-to-harsh condition. That could mean the difference between being found guilty of voluntary manslaughter versus involuntary manslaughter, convictions that carry very different sentences. Thus, defendants can receive harsher verdicts when a harsher crime is considered first, and more lenient verdicts when a lenient outcome is evaluated first. These results suggest that it may be better for a judge simply to provide definitions of the charges, and not dictate the order in which they should be considered. In fact, one could argue that, given our presumption of innocence, considering the most lenient verdict first would be more in line with our judicial philosophy.

YES! YES! YES!

Our search for confirming data is one of the main ways we stick to our current beliefs. It's also indicative of a fundamental cognitive strategy that we employ: we use a positive-test strategy when forming our judgments. That is, our cognitive system is set up to focus on positive, as opposed to negative, instances. This doesn't mean that we're optimistic—always looking on the bright side of life. Rather, it means we like to think in terms of yesses instead of noes when we consider a certain issue. When testing whether a person is an extrovert, we attend to data suggesting that the person is an extrovert—the data says yes, the person is an extrovert.

To see a positive-test strategy in action, consider the following series of three numbers:

2    4    6

Suppose I told you that these numbers obey a certain rule, and that you have to determine what the rule is. To decipher the rule, you can choose other sequences of three numbers, and you'll be told yes, they obey the rule, or no, they don't. Think about what the rule might be, and write down a sequence of three numbers to test your hypothesis.13

When we form a hypothesis about the rule, such as “even numbers increasing by two,” we often pick numbers like 12, 14, 16—numbers that conform to the rule. If we're told, Yes, they obey the rule, we then pick something like 50, 52, 54, and are again told yes. After selecting a couple more triplets that conform to our rule, we become convinced it's “even numbers increasing by two,” and are flabbergasted when told it's wrong. Undaunted, we think again and decide to test a different rule, such as “any three numbers increasing by two.” After mentioning 3, 5, 7, and 21, 23, 25, we state that rule—and are told it's wrong. What's going on? What could the rule be? It's “any three numbers in increasing order.”

Why do we have a hard time discovering the rule? We try to prove our hypothesis correct by searching for examples that confirm the hypothesis, not for examples that disconfirm it. That is, we look for examples that yield a yes response. The problem with this strategy is that we could give a thousand examples that conform to the hypothesis and still not get a definitive answer on the correct rule. Why? If we think the rule is “even numbers increasing by two,” and we give many series of numbers consistent with that rule, those numbers may also be consistent with other rules, such as “even numbers increasing” or “any three numbers increasing.” So continually looking for confirming data doesn't get us any closer to the truth. On the other hand, if we choose some numbers that are inconsistent with our hypothesis “even numbers increasing by two,” such as 7, 9, 11, and are told that they conform to the rule, we immediately discover that our hypothesis concerning even numbers is incorrect. In effect, if we test a case that should disconfirm the hypothesis we're entertaining, we can quickly learn more than if we continue to search for confirming cases.
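To make the logic concrete, here is a minimal Python sketch of the 2-4-6 task (my own illustration, not part of the original study; the rule names and function names are mine). Notice that the confirming probes earn a yes under every candidate rule, so they rule nothing out, while a single probe that violates the pet hypothesis settles the matter at once.

```python
# Illustrative sketch of the 2-4-6 task. The hidden rule and candidate
# hypotheses are the ones discussed in the text above.

def hidden_rule(a, b, c):
    return a < b < c  # "any three numbers in increasing order"

hypotheses = {
    "even numbers increasing by two": lambda a, b, c: a % 2 == 0 and (b, c) == (a + 2, a + 4),
    "any numbers increasing by two":  lambda a, b, c: (b, c) == (a + 2, a + 4),
    "any numbers increasing":         lambda a, b, c: a < b < c,
}

# Confirming probes: each fits the pet hypothesis and earns a "yes" --
# but it's a "yes" under every candidate rule, so nothing is eliminated.
for triple in [(12, 14, 16), (50, 52, 54)]:
    verdicts = {name: h(*triple) for name, h in hypotheses.items()}
    print(triple, "obeys rule:", hidden_rule(*triple), verdicts)

# A disconfirming probe: (7, 9, 11) violates "even numbers increasing by
# two," yet the answer is still "yes" -- that one answer kills the hypothesis.
print((7, 9, 11), "obeys rule:", hidden_rule(7, 9, 11))
```

Two confirming probes leave all three candidate rules standing; the one probe chosen to contradict the hypothesis eliminates it immediately.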

As philosopher Karl Popper indicated, a general hypothesis can never be completely confirmed because we may uncover an exception the next time around. It was once thought that all swans were white, until we found black swans in Australia. To determine if a hypothesis is likely to be true, we should try to prove it false. Why? It's impossible to prove a hypothesis is correct with certainty, but we can disprove it with one observation.14 And so, disconfirming evidence can be very useful in our decision making.

Consider the following problem:

Suppose the letters and numbers below are on separate cards. The cards have a number on one side and a letter on the other, and someone tells you: “If a card has a vowel on one side, then it has an even number on the other side.” Which of the cards would you need to turn over in order to decide whether the person is lying?

E      K      4      7

If you're like most people, you would say E and 4, or possibly just E. When 128 people answered the problem, E and 4 was the most common response (59), followed by E (42).15 Why? Once again, we choose cards that give us confirming evidence. However, the correct answer is E and 7. Think about the problem this way. If a card has a vowel, then it has an even number (if X then Y). The only way to falsify an if-then statement is to find a case of X and not-Y (i.e., a vowel and an odd number). The only cards that can disconfirm the rule are vowels or odd numbers (E and 7). Even numbers or consonants are not relevant (an even number is not relevant because the rule doesn't say even numbers can't have a consonant on the other side). Once again, searching for disconfirming, as opposed to confirming, evidence would answer the problem. And yet, four out of five experienced mathematical psychologists—people who should know better—couldn't solve this problem correctly.16 Such is our ingrained desire to confirm.
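The same falsification logic can be spelled out in a few lines of code. Here is a minimal Python sketch (my own illustration, not from the study; the function name is mine) that asks, for each visible card face, whether turning it over could possibly reveal the one falsifying combination: a vowel paired with an odd number.

```python
# Illustrative sketch of the card-selection logic. Each card has a letter
# on one side and a number on the other; we see only one side.
# Claim: "if a card has a vowel on one side, it has an even number on the other."

VOWELS = set("AEIOU")

def can_falsify(visible_side):
    """A card is worth turning over only if its hidden side could
    produce a vowel paired with an odd number (the lone falsifying case)."""
    if visible_side.isalpha():
        # A visible vowel could hide an odd number, so it can falsify.
        # A visible consonant is irrelevant: the claim says nothing about it.
        return visible_side in VOWELS
    # A visible even number can't falsify: the claim doesn't forbid a
    # consonant on its other side. A visible odd number could hide a vowel.
    return int(visible_side) % 2 == 1

for card in ["E", "K", "4", "7"]:
    print(card, "-> turn over" if can_falsify(card) else "-> irrelevant")
# Prints: E -> turn over, K -> irrelevant, 4 -> irrelevant, 7 -> turn over
```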

Interestingly, self-fulfilling prophecies are related to confirming strategies. Self-fulfilling prophecies occur when we act a certain way because we believe something to be true, and our act makes it come true. As such, our belief leads to acts that will likely result in confirming evidence. For example, researchers told grade school teachers that certain students would bloom academically in the coming year. Eight months later, those students' IQs had improved more than those of the other students. However, the “high achievers” had been selected at random. Teachers apparently gave the bloomers more attention and praise, which resulted in greater improvement. Thus, not only do we see what we expect to see, we can actually cause what we expect to see.17

SO WHAT'S THE DEAL?

As with the other decision strategies discussed here, a confirming strategy can yield correct answers in many cases. We obviously use it extensively, often making many accurate decisions. However, we can also make grossly inaccurate judgments if we rely on confirming data too much. Why is that? There is often considerable evidence that both supports and contradicts the hypothesis tested. If we focus primarily on the supporting data, we're more likely to accept that hypothesis, even though the contradictory information may be more compelling. In essence, when we use a confirming strategy, we rely on incomplete information, a main source of bad decision making.18

So why do we use confirming strategies if they can have such negative consequences? It's cognitively easier to deal with data that confirm. We have more trouble dealing with negative statements. In fact, our preference for positive responses starts early in life. When children are given twenty questions to determine an unknown number between 1 and 10,000, they seek a yes answer. For example, when they ask, “Is the number between 5,000 and 10,000?” and they hear yes, they're happy and they cheer. If they hear no they groan, even though that answer is just as informative (if it's not between 5,000 and 10,000, then it's between 1 and 5,000). Why is that? A no response requires an extra cognitive step.19 In effect, we appear to have built-in circuitry that prefers yes answers. As we've seen, however, placing too much importance on positive instances can result in believing things that just aren't true.
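The children's groans notwithstanding, a no really is worth exactly as much as a yes. A minimal Python sketch (my own illustration, not from the study cited above) of the number-guessing game shows that each midpoint question halves the remaining range regardless of how the yesses and noes fall:

```python
# Illustrative sketch: guessing a number between 1 and 10,000 with
# midpoint questions. A "no" narrows the range as much as a "yes".

def questions_needed(secret, low=1, high=10_000):
    """Count the midpoint questions needed to pin down `secret`."""
    count = 0
    while low < high:
        mid = (low + high) // 2
        count += 1
        if secret <= mid:    # a "yes" answer: keep the lower half
            high = mid
        else:                # a "no" answer: keep the upper half
            low = mid + 1
    return count

print(questions_needed(42))     # reached mostly via "yes" answers
print(questions_needed(9_999))  # reached mostly via "no" answers: same ~14 count
```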

How can we overcome our penchant for confirming evidence? While the jury is still out, some research suggests the following. Telling decision makers to disconfirm their hypothesis does not always work. One study found that even when people were told to disconfirm, they still sought confirming evidence about 70 percent of the time.20 A possible solution is to frame a question in a way that encourages disconfirming evidence. For example, a top investment analyst specifically solicits disconfirming evidence before making a decision. If he thinks a certain industry is becoming less price competitive, he will ask executives a question that implies the opposite, such as, “Is it true that price competition is getting tougher?”21 As we saw before, one of the best things we can do to improve our decision making is to consider alternative hypotheses. By considering additional competing hypotheses, we'll likely focus our attention on data that confirm those hypotheses (and possibly disconfirm our initial hypothesis), giving us a more balanced evaluation of the evidence.