In December 1954, the psychologist Leon Festinger and his colleagues noticed this newspaper headline: PROPHECY FROM PLANET CLARION CALL TO CITY: FLEE THAT FLOOD. A Chicago housewife, Marion Keech, reported that she had received messages from the planet Clarion telling her that the world would end in a great flood sometime before dawn on December 21, 1954. If she and her followers gathered together at midnight, however, a mother ship would arrive just in time to whisk them away to safety.
Festinger immediately saw an opportunity, not to save himself, but to study the phenomenon of cognitive dissonance, the mental tension created when a person holds two conflicting thoughts simultaneously. “Suppose an individual believes something with his whole heart,” Festinger said. “Suppose further that he has a commitment to this belief, that he has taken irrevocable actions because of it; finally, suppose that he is presented with evidence, unequivocal and undeniable evidence, that his belief is wrong: what will happen? The individual will frequently emerge, not only unshaken, but even more convinced of the truth of his beliefs than ever before. Indeed, he may even show a new fervor about convincing and converting other people to his view.”1 Many of Keech’s followers had quit their jobs, left their spouses, and given away their possessions. Festinger predicted that these individuals with the strongest behavioral commitment would be the least likely to admit their error when the prophecy failed and instead rationalize a positive outcome.
As midnight approached on December 20, Keech’s group gathered to await the arrival of the aliens’ mother craft. As dictated by Marion, the members eschewed all metallic items and other objects that would interfere with the operation of the spaceship. When one clock read 12:05 A.M. on the twenty-first, anxious squirming was calmed when someone pointed out a second clock reading 11:55 P.M. But as the minutes and hours ticked by, Keech’s clique grew restless.
At 4:00 A.M., Keech began to weep in despair, recovering at 4:45 A.M. with the claim that she had received another message from Clarion informing her that God had decided to spare Earth because of the cohort’s stalwart efforts. “By dawn on the 21st, however, this semblance of organization had vanished as the members of the group sought frantically to convince the world of their beliefs,” Festinger says. “In succeeding days, they also made a series of desperate attempts to erase their rankling dissonance by making prediction after prediction in the hope that one would come true, and they conducted a vain search for guidance from the Guardians.”2 Marion Keech and her most devoted charges redoubled their recruitment efforts, arguing that the prophecy had actually been fulfilled with an opposite outcome as a result of their faith. Festinger concluded that Keech’s assemblage reduced the cognitive dissonance they experienced by reconfiguring their perceptions to imagine a favorable outcome, reinforced by converting others to the cause.3
Doomsday cults are especially vulnerable to cognitive dissonance, particularly when they make specific end-of-the-world predictions that will be checked against reality. What typically happens is that the faithful spin-doctor the nonevent into a successful prophecy, with rationalizations including (1) the date was miscalculated; (2) the date was a loose prediction, not a specific prophecy; (3) the date was a warning, not a prophecy; (4) God changed his mind; (5) the prediction was just a test of the members’ faith; (6) the prophecy was fulfilled physically, but not as expected; and (7) the prophecy was fulfilled—spiritually.4
Of course, cognitive dissonance is not unique to doomsday cults. We experience it when we hang on to losing stocks, unprofitable investments, failing businesses, and unsuccessful relationships. Why should past investment influence future decisions? If we were perfectly rational, we should simply compute the odds of succeeding from this point forward, jettisoning our previous beliefs. Instead, we are stuck rationalizing our past choices, and those rationalizations influence our present ones.
Unfortunately for those bent on curing themselves of the chronic effects of cognitive dissonance, research since Festinger shows that, if anything, he underestimated its potency. As the social psychologists Carol Tavris and Elliot Aronson (the latter a student of Festinger’s) demonstrate in their aptly titled book, Mistakes Were Made (But Not by Me), our ability to rationalize our choices and actions through self-justification knows no bounds.
The passive voice of the all-telling phrase—mistakes were made—shows the rationalization process at work. In March 2007, United States attorney general Alberto R. Gonzales used that very phrase in a public statement on the controversial firing of several U.S. attorneys: “I acknowledge that mistakes were made here. I accept that responsibility.” Nevertheless, he rationalized, “I stand by the decision, and I think it was a right decision.”5 The phraseology is so common as to be almost cliché. “Mistakes were quite possibly made by the administrations in which I served,” confessed Henry Kissinger about Vietnam, Cambodia, and South America. “If, in hindsight, we also discover that mistakes may have been made . . . I am deeply sorry,” admitted Cardinal Edward Egan of New York about the Catholic Church’s failure to deal with priestly pedophiles. And, of course, corporate leaders are no less susceptible than politicians and religious leaders: “Mistakes were made in communicating to the public and customers about the ingredients in our French fries and hash browns,” acknowledged a McDonald’s spokesperson to a group of Hindus and other vegetarians after they discovered that the “natural flavoring” in their potatoes contained beef byproducts. “Dissonance produces mental discomfort, ranging from minor pangs to deep anguish; people don’t rest easy until they find a way to reduce it,” Tavris and Aronson note.6 It is in that process of reducing dissonance that our self-justification accelerators are throttled up.
One of the practical benefits of self-justification is that no matter what decision we make—to take this or that job, to marry this or that person, to purchase this or that product—we will almost always be satisfied with the decision, even when the objective evidence is to the contrary. Once the decision is made, we carefully screen subsequent information and filter out all contradictory data, leaving only evidence in support of our choice. This process of cherry-picking happens at even the highest levels of expert assessment. In his book Expert Political Judgment, the political scientist Philip Tetlock reviews the evidence for the predictive ability of professional experts in politics and economics and finds them severely wanting. To the point, expert opinions turn out to be no better than those of nonexperts—or even chance—and yet, as self-justification theory would predict, experts are significantly less likely to admit that they are wrong than are nonexperts.7
Politics is rampantly self-justifying. Democrats see the world through liberal-tinted glasses, while Republicans filter it through conservative-shaded lenses. Tune in to talk radio any hour of the day, any day of the week—whether it is “conservative talk radio” or “progressive talk radio”—and you’ll hear the same current events interpreted in ways that are 180 degrees out of phase. Social psychologist Geoffrey Cohen quantified this effect in a study in which he discovered that Democrats are more accepting of a welfare program if they believe it was proposed by a fellow Democrat, even when, in fact, the proposal comes from a Republican and is quite restrictive. Predictably, Cohen found the same effect for Republicans, who were far more likely to approve a generous welfare program if they thought it was proposed by a fellow Republican.8
Economic positions, whether staked out by birthright, inheritance, or creative hard work, distort our perceptions of reality as much as political positions. The sociologist John Jost has studied how people justify their economic status, and the status of others. The wealthy tend to rationalize their position of privilege as deserved, earned, or justified by their benevolent social acts, and assuage any cognitive dissonance regarding the poor by believing that the poor are happier and more honest. For their part, the underprivileged tend to rationalize their position as morally superior, nonelitist, and within the bounds of social normalcy, and look down upon the rich as living an undeserved life of accidental or ill-gotten privilege.9
Cognitive distortions can even turn deadly. Wrongly convicting people and sentencing them to death is a supreme source of cognitive dissonance. Since 1992, the Innocence Project has freed fourteen people from death row, and exonerated convicts in more than 250 non-death-row cases. “If we reviewed prison sentences with the same level of care that we devote to death sentences,” says University of Michigan law professor Samuel R. Gross, “there would have been over 28,500 non-death-row exonerations in the past 15 years, rather than the 255 that have in fact occurred.” What is the self-justification for reducing this form of dissonance? “You get in the system and you become very cynical,” explains Rob Warden of Northwestern University School of Law. “People are lying to you all over the place. Then you develop a theory of the crime, and it leads to what we call tunnel vision. Years later, overwhelming evidence comes out that the guy was innocent. And you’re sitting there thinking, ‘Wait a minute. Either this overwhelming evidence is wrong or I was wrong—and I couldn’t have been wrong because I’m a good guy.’ That’s a psychological phenomenon I have seen over and over.”10
The deeper evolutionary foundation to self-justification, cognitive dissonance, and the elevation of truth telling and mistake admission to a moral principle worthy of praise can be found in the psychology of deception (and self-deception). Research shows that we are better at deception than at deception detection, but liars get caught often enough that it is risky to attempt to deceive others, especially people with whom we spend a lot of time. The more we interact with someone, the more that person is likely to pick up on the cues we give when we are attempting to deceive, particularly nonverbal cues such as taking a deep breath, looking away from the person we are talking to, and hesitating before answering. But those cues are less likely to be expressed if you actually believe the lie yourself.11 This is the power of self-deception, which evolved in our ancestors as a means of fooling fellow group members who would otherwise catch our deceptions.
From an evolutionary perspective, it is not enough to fake doing the right thing, because although we are fairly good deceivers, we are also fairly good deception detectors. We have to believe we are doing the right thing, too. What we believe we feel, and thus it is that we do not just go through the motions of being moral, we actually have a moral sense and retain the capacity for genuine moral emotions. This is borne out in research on both primates and hunter-gatherer groups, as we shall see in the next chapter. What follows are the numerous ways that cognitive biases interrupt our ability to make rational decisions in our personal as well as our financial lives.
Picture yourself watching a one-minute video of two teams of three players each, one team wearing white shirts and the other black shirts, as they move about each other in a small room tossing two basketballs back and forth among themselves. Your task is to count the number of passes made by the white team. Unexpectedly, after thirty-five seconds a gorilla enters the room, walks directly through the farrago of bodies, thumps his chest, and nine seconds later, exits. Would you see the gorilla?
Most of us, in our perceptual vainglory, believe we would—how could anyone miss a guy in an ape suit? In fact, 50 percent of subjects in this remarkable experiment by psychologists Daniel Simons and Christopher Chabris do not see the gorilla, even when asked if they noticed anything unusual.12 The effect is known as inattentional blindness—when attending to one task, say, talking on a cell phone while driving, many of us become blind to dynamic events, such as a gorilla in the crosswalk. For several years now, I have incorporated the gorilla DVD into my public lecture on “The Power of Belief,” asking at the end of the talk for a show of hands of those who did not see the gorilla. Out of the more than one hundred thousand people over the years, fewer than half saw the gorilla. I can decrease the figure even more by issuing a gender challenge, telling the audience before showing the clip that one gender is more accurate than the other at counting the ball passes, but I won’t tell them which gender so as not to bias the test. This really makes people sit up and concentrate, causing even more to miss the gorilla. The lowest percentage I have ever witnessed see the gorilla was in a group of about fifteen hundred behavioral psychologists. Professional observers of behavior, almost none of them saw the gorilla. Many were shocked. Several accused me of showing two different clips.13
Experiments such as these reveal a hubris in our powers of perception, as well as a fundamental misunderstanding of how the brain works. We think of our eyes as video cameras, and our brains as blank tapes to be filled with percepts. Memory, in this flawed model, is simply rewinding the tape and playing it back in the theater of the mind, in which some cortical commander watches the show and reports to a higher homunculus what it saw. Fortunately for criminal defense attorneys, this is not the case. The perceptual system, and the brain that analyzes its data, are far more complex. As a consequence, much of what passes before our eyes may be invisible to a brain focused on something else.
Driving is an example. “Many accident reports include claims like ‘I looked right there and never saw them,’ ” Simons told me. “Motorcyclists and bicyclists are often the victims in such cases. One explanation is that car drivers expect other cars but not bikes, so even if they look right at the bike, they sometimes might not see it.” Simons recounted for me a study by Richard Haines of pilots who were attempting to land a plane in a simulator with the critical flight information superimposed on the windshield. “Under these conditions, some pilots failed to notice that a plane on the ground was blocking their path.”14 There are none so blind as those who will not see.
Have you ever noticed how blind other people are to their own biases, but how you almost always seem to catch yourself before falling for your own? If so, then you are a victim of the blind spot bias: in studies of it, subjects recognized the existence and influence of eight different cognitive biases in others but failed to see those same biases in themselves. In one study, Stanford University students were asked to compare themselves to their peers on such personal qualities as friendliness and selfishness. Predictably, they rated themselves higher than they rated their peers. When the subjects were warned about the better-than-average bias and asked to reevaluate their original assessments, 63 percent claimed that their initial evaluations were objective, and 13 percent even claimed to have been too modest! In a related study, Princeton University psychologist Emily Pronin and her colleagues randomly assigned subjects high or low scores on a “social intelligence” test.
Unsurprisingly, those given the high marks rated the test fairer and more useful than did those receiving low marks. When asked if it was possible that they had been influenced by the score they received on the test, subjects responded that other participants had been far more biased than they were. When subjects admit to having such a bias as being a member of a partisan group, says Pronin, this “is apt to be accompanied by the insistence that, in their own case, this status . . . has been uniquely enlightening—indeed, that it is the lack of such enlightenment that is making those on the other side of the issue take their misguided position.”15
In a third study in which Pronin queried subjects about what method they used to assess their own and others’ biases, she found that people tend to use general theories of behavior when evaluating others, but use introspection when appraising themselves. The problem with this method can be found in what Pronin calls the introspection illusion, in which people trust themselves to employ the subjective process of introspection but do not believe that others can be trusted to do the same.16
Okay for me but not for thee. “We view our perceptions of our mental contents and processes as the gold standard for understanding our actions, motives, and preferences,” Pronin explained to me. “But, we do not view others’ perceptions of their mental contents and processes as the gold standard for understanding their actions, motives, and preferences. This ‘illusion’ that our introspections are a gold standard leads us to introspect to find evidence of bias and we are thus likely to infer that we have not been biased, since most biases operate outside of conscious awareness.”17
We tend to see ourselves in a more positive light than others see us. National surveys show that most businesspeople believe they are more moral than other businesspeople,18 while psychologists who study moral intuition think they are more moral than other such psychologists.19 In one College Entrance Examination Board survey of 829,000 high school seniors, 60 percent put themselves in the top 10 percent in “ability to get along with others,” while 0 percent (not one!) rated themselves below average.20
This self-serving bias is Lake Wobegon writ large. An amusing example of it can be found in a 1997 U.S. News & World Report study on who Americans believe is likely to go to heaven: 52 percent said Bill Clinton, 60 percent thought Princess Diana, 66 percent chose Oprah Winfrey, and 79 percent selected Mother Teresa. But 87 percent chose themselves as the person most likely to go to heaven!21
What people see in others they generally do not see in themselves. This leads to an attribution bias. A number of studies show that there is a tendency for people to accept credit for their good behaviors (a dispositional judgment) and to allow the situation to account for their bad behaviors (a situational judgment).22 In dealing with others, the attributions flip. We tend to attribute our own good fortune to hard work and intelligence, whereas another person’s good fortune is attributed to luck and circumstance. Conversely, our own bad actions are a product of circumstance, but we blame the bad actions of others on their personal weaknesses.23
My colleague Frank J. Sulloway, a psychologist and historian of science at the University of California, Berkeley, and I have discovered another bias in how we assess our behavior and that of others. We wanted to know why people believe in God, so we polled ten thousand randomly selected Americans. In addition to exploring various demographic and sociological variables, we directly asked subjects to respond in writing to two open-ended questions: Why do you believe in God? and Why do you think others believe in God? The top two reasons that people gave for why they believe in God were “the good design of the universe” and “the experience of God in everyday life.” Interestingly, and tellingly, when asked why they think other people believe in God, these two answers dropped to sixth and third place, respectively, while the two most common reasons given were that belief is “comforting” and “fear of death.”24 There appears to be a sharp distinction between how people view their own beliefs—as rationally motivated—and how people view the beliefs of others—as emotionally driven. A person’s commitment to a belief is generally attributed to an intellectual choice (“I bought these $200 jeans because they are exceptionally well made and fit me perfectly” or “I am for gun control because statistics show that crime decreases when gun ownership decreases”), whereas another person’s decision or opinion is attributed to need or emotional reasons (“She bought those overpriced designer jeans because she is obsessed with appearing hip and matching the status-hungry in-crowd,” or “He is for gun control because he is a bleeding-heart liberal who needs to identify with the victim”).25
Given the ubiquity and power of such cognitive biases, it was only a matter of time before someone created an entire branch of economics to study them. The field, behavioral economics, was pioneered by a couple of psychologists, Daniel Kahneman and Amos Tversky, neither of whom ever took a single course in economics but whose personal experiences in war and scientific training in how the mind works have led to one of the most fruitful collaborations in the history of social science.
Daniel Kahneman was born in Tel Aviv, grew up in Paris, and returned to his homeland shortly after the Second World War. He later recalled that his interest in the complexities and inconsistencies of human behavior was shaped by a salient encounter with a Nazi SS officer in France shortly after the Nazi occupation. Forced to wear the yellow Star of David, the young boy was returning home past curfew one evening, his sweater turned inside out, when a German soldier approached. Kahneman tried to scurry past, but “he beckoned me over, picked me up, and hugged me. I was terrified that he would notice the star inside my sweater. He was speaking to me with great emotion, in German. When he put me down, he opened his wallet, showed me a picture of a boy, and gave me some money. I went home more certain than ever that my mother was right: people were endlessly complicated and interesting.”26 Kahneman earned his doctorate in psychology from the University of California, Berkeley, going on to win the Nobel Prize in economics in 2002.
Amos Tversky would have won the prize as well, but in 1996 he died of metastatic melanoma at the age of fifty-nine. Also Israeli-born and keenly interested in understanding the subtleties and oddities of human behavior, Tversky later recalled that “growing up in a country that’s fighting for survival, you’re perhaps more likely to think simultaneously about applied and theoretical problems.” “Applications” included a stint as a paratrooper in an elite unit in the Israeli army, earning Tversky his country’s highest honor for bravery during a 1956 border skirmish in which a fellow soldier had fallen on top of an armed explosive device he was placing beneath some barbed wire. Following a few steps behind the soldier, without thinking and against the orders of his commanding officer to stay put, Tversky leaped forward and pulled the man off the explosive device, thereby saving him. Tversky was wounded and lived the rest of his life with shards of metal in his body, but the lesson was imparted: people do not always make rational choices.
Tversky noticed that people are pattern-seeking animals, finding meaningful relationships in random data, everything from stock market fluctuations to coin tosses to sports streaks. A basketball fan, Tversky teamed with fellow cognitive psychologists Robert Vallone and Thomas Gilovich to test the notion of “hot hands.” As fans of the game know, when you’re hot you’re hot, and when you’re not you’re not, and you can see it every night on the court. Knowing the propensity of humans to find such patterns whether they are there or not, Tversky and his colleagues analyzed every shot taken by the Philadelphia 76ers basketball team for an entire season, only to discover that the probability of a player hitting a second shot after a successful basket did not increase beyond what one would expect by chance and by the average shooting percentage of the player. That is, the number of successful baskets in sequence did not exceed the predictions of a statistical coin-flip model. If you conduct a coin-flipping experiment and record heads or tails, you will encounter streaks: on average and in the long run, you will flip five heads in a row once in every thirty-two sequences of five tosses (and a run of five of a kind, heads or tails, once in every sixteen). Most of us will interpret these random streaks as meaningful.27 Indeed, it would be counterintuitive not to do so.
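To get a feel for how readily streaks emerge from pure chance, here is a minimal Python sketch (not drawn from the Vallone and Gilovich data; the 50 percent shooting rate, sample sizes, and function names are illustrative assumptions) that simulates an imaginary shooter and a run of fair-coin sequences:

    import random

    random.seed(1)

    def simulate_shooter(pct=0.5, n_shots=100_000):
        # Independent shots: compare the overall hit rate with the hit rate
        # immediately following a made shot. Under pure chance they match.
        shots = [random.random() < pct for _ in range(n_shots)]
        after_make = [curr for prev, curr in zip(shots, shots[1:]) if prev]
        return sum(shots) / len(shots), sum(after_make) / len(after_make)

    def five_in_a_row(n_sequences=100_000):
        # Fraction of five-flip sequences that come up all heads (about 1/32).
        return sum(
            all(random.random() < 0.5 for _ in range(5)) for _ in range(n_sequences)
        ) / n_sequences

    overall, after_make = simulate_shooter()
    print(f"hit rate overall: {overall:.3f}   after a make: {after_make:.3f}")
    print(f"all-heads runs of five: {five_in_a_row():.4f} (1/32 = {1/32:.4f})")

Run it and the two hit rates come out essentially identical, while runs of five appear about as often as the arithmetic says they should; the streaks are real, but the meaning we read into them is not.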
It was Tversky’s research in cognitive psychology that led him to challenge the dogma in economic theory that people act rationally to maximize their welfare. According to the economist Kenneth Arrow, one of the founders of neoclassical economics and the youngest person ever to make the trip to Sweden to collect The Prize, “Previous criticism of economic postulates by psychologists had always been brushed off by economists, who argued, with some justice, that the psychologists did not understand the hypotheses they criticized. No such defense was possible against Amos’ work.”28 By most accounts, Tversky was a genius.
Yet, with characteristic modesty, Tversky said that most of his findings in economics were well known to “advertisers and used car salesmen.” For example, he noticed that customers were displeased when a retail store added a surcharge to a purchase made with a credit card, but were delighted when the store instead offered a discount for paying in cash. This “framing” of a choice as a penalty or as a reward deeply influences one’s decision. In another example, Tversky discovered that in reviewing the risks of a medical procedure with patients, physicians will get a very different response if they tell them that there is a 1 percent chance of dying versus a 99 percent chance of surviving.29
Kahneman and Tversky, in conjunction with their colleagues Richard Thaler, Paul Slovic, Thomas Gilovich, Colin Camerer, and others, established a research program to study the cognitive basis for common human errors in thinking and decision making. They discovered a number of “judgmental heuristics,” as they called them—“mental shortcuts,” or simpler still, “rules of thumb”—that shape our thinking, most especially how we think about money.
Imagine that you work for the admissions office of a graduate program at a university and you come across this description of a candidate in a letter of recommendation:
Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to feel little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense.
Kahneman and Tversky presented this scenario to three groups of subjects. One group was asked how similar Tom W. was to a student in one of nine types of college graduate majors: business administration, computer science, engineering, humanities/education, law, library science, medicine, physical/life sciences, or social science/social work. Most of the subjects in the group associated Tom W. with an engineering student, and thought he was least like a student of social science/social work. A second group was asked instead to estimate the probability that Tom W. was a grad student in each of the nine majors. The probabilities lined up with the judgments from the first group. A third group was asked to estimate the proportion of first-year grad students in each of the nine majors. What the researchers found was that even though the subjects—based on the answers of the third group, or control—knew that there are far more graduate students in the social sciences than there are in engineering, and thus the probability of Tom W.’s being in the engineering program is the lowest, they nevertheless concluded that he must be an engineer based on what the narrative description of him represented.30
Tversky and Kahneman call this the representative fallacy, in which “an event is judged probable to the extent that it represents the essential features of its parent population or generating process.” And, more generally, “when faced with the difficult task of judging probability or frequency, people employ a limited number of heuristics which reduce these judgments to simpler ones.”31 We are good at telling stories about people and gleaning from these narratives bits of information that we then use to make snap decisions.
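The pull that base rates ought to exert can be made concrete with a short Bayes calculation. The numbers below are invented for illustration (they are not the figures from the Tom W. study): suppose engineering enrolls 5 percent of graduate students, social science/social work enrolls 25 percent, and the sketch of Tom W. is five times more likely to describe an engineering student than a social science student.

    def posterior(prior_a, prior_b, likelihood_a, likelihood_b):
        # Bayes' rule for two competing hypotheses:
        # P(A | evidence) = P(A)P(E|A) / (P(A)P(E|A) + P(B)P(E|B))
        joint_a = prior_a * likelihood_a
        joint_b = prior_b * likelihood_b
        return joint_a / (joint_a + joint_b)

    # Hypothetical base rates and likelihoods, chosen only to illustrate the point.
    p_eng = posterior(prior_a=0.05, prior_b=0.25,            # engineering vs. social science
                      likelihood_a=0.05, likelihood_b=0.01)  # sketch fits engineers 5x better
    print(f"P(engineering | description) = {p_eng:.2f}")     # 0.50, not a near-certainty

Even a description five times more typical of an engineer only brings the odds back to even money once the base rates are respected, which is exactly the step the subjects skipped.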
On the other hand, we are lousy at computing probabilities and thinking in terms of the chances of something happening. This time, imagine that you are looking to hire someone for your company and you are considering the following candidate:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
Which is more likely? (1) Linda is a bank teller, or (2) Linda is a bank teller and is active in the feminist movement.
When this scenario was presented to subjects, 85 percent chose the second option. Mathematically speaking, this is the wrong choice, simply because the probability of two events occurring together (in “conjunction”) will always be less than or equal to the probability of either one occurring alone. Tversky and Kahneman argue that most people get this problem wrong because the second option appears to be more “representative” of the description of Linda.32
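The conjunction rule itself takes only a few lines to state in code. The probabilities sampled below are arbitrary; the point of this sketch is that no matter how they are set, the joint event can never be more probable than “bank teller” alone:

    import random

    random.seed(2)

    # However the underlying probabilities are assigned, the conjunction
    # "bank teller AND feminist" cannot exceed the probability of "bank teller".
    for _ in range(5):
        p_teller = random.random()
        p_feminist_given_teller = random.random()
        p_both = p_teller * p_feminist_given_teller
        assert p_both <= p_teller
        print(f"P(teller) = {p_teller:.2f}   P(teller and feminist) = {p_both:.2f}")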
Hundreds of experiments on these fallacies have been conducted, showing time and again that people make snap decisions under high levels of uncertainty, and they do so by employing these various rules of thumb to shortcut the computational process. For example, policy experts were asked to estimate the probability that the Soviet Union would invade Poland and that the United States would then break off diplomatic relations. Subjects gave the scenario a probability of 4 percent. Meanwhile, another group of policy experts was asked to estimate the probability of the simpler event: that the United States would break off diplomatic relations with the Soviet Union. Even though this broader event must be at least as probable as the two-part scenario, these experts gave it only a 1 percent chance of happening. The experimenters concluded that the more detailed two-part scenario seemed more likely because we like stories with more details and thus grant them greater veracity.
In the run-up to the 1976 U.S. presidential election, an experiment was conducted in which one group of subjects was asked to “imagine Gerald Ford winning the upcoming election,” while another group of subjects was asked to “imagine Jimmy Carter winning the upcoming election.” When subsequently asked to estimate the probability of each of the candidates winning, those who were asked to imagine Ford winning estimated his chances as much higher than those who were asked to imagine Carter winning, who, in turn, gave their guy a much higher probability of victory.33
This is the availability fallacy, which holds that we assign probabilities of potential outcomes based on examples that are immediately available to us, which are then generalized into conclusions upon which choices are based.34 Your estimation of the probability of hitting red lights during a drive will be directly related to whether you are late or not. Your estimation of the probability of dying in a plane crash (or lightning strike, shark attack, terrorist attack, etc.) will be directly related to the availability of just such an event in your world, especially your exposure to it in the media. If newspapers cover an event, there is a good chance that people will overestimate the probability of that event happening.35
The USC sociologist Barry Glassner has built his career studying how the media creates a “culture of fear” that is totally out of sync with reality. Because the media establishes an availability rule of thumb about what we should fear, we fear all the wrong things and ignore those that could potentially kill us. “In the late 1990s the number of drug users had decreased by half compared to a decade earlier,” Glassner notes, yet the “majority of adults rank drug abuse as the greatest danger to America’s youth.”
The availability distortion of our understanding of medical issues is especially egregious. Studies have found that women in their forties believe they have a 1 in 10 chance of dying from breast cancer, while their real lifetime odds are more like 1 in 250. This effect is directly related to the number of news stories about breast cancer. As Glassner documents, “We waste tens of billions of dollars and person-hours every year on largely mythical hazards like road rage, on prison cells occupied by people who pose little or no danger to others, on programs designed to protect young people from dangers that few of them ever face, on compensation for victims of metaphorical illnesses, and on technology to make airline travel—which is already safer than other means of transportation—safer still.”36 Our perceptions of the economy are similarly distorted: “the unemployment rate was below 5 percent for the first time in a quarter century. Yet pundits warned of imminent economic disaster.”
Available information can distort our decision making in surprising ways. Two groups of subjects are asked to supply an estimate:
Group 1: What percentage of African nations are members of the United Nations? Do you think it is more or less than 45 percent? Please give an exact percentage.
Group 2: What percentage of African nations are members of the United Nations? Do you think it is more or less than 65 percent? Please give an exact percentage.
Subjects who answer the first question give a lower percentage estimate than subjects who answer the second question. Why? They were given a lower starting point, which primed their brains to think in lower numbers. Once an initial value is set, we are biased toward that value. Behavioral economists call this effect the anchoring fallacy, and it shapes our perceptions of what we consider to be a fair price or a good deal. After all, money is simply cheap paper with some ink slapped on it, so the value of a commodity must be evaluated in a context. The context begins with an anchor. Lacking some objective standard, which is usually not available, we grasp for any standard available, no matter how seemingly subjective. It reminds me of that old Henny Youngman routine: “How’s your wife?” “Compared to what?”
The comparison anchor can even be entirely arbitrary. In one study, subjects were asked to give the last four digits of their Social Security numbers, and then asked to estimate the number of physicians in New York City. Bizarrely, people with higher Social Security numbers tended to give higher estimates for the number of docs in the city. In a related experiment, subjects were shown an array of items to purchase—a bottle of wine, a cordless computer keyboard, a video game—and were then told that the price of the items was equal to the last two digits of their Social Security numbers. When subsequently asked the maximum price they would be willing to pay, subjects with high Social Security numbers consistently said that they would be willing to pay more than those with low numbers.
Our intuitive sense of the anchoring effect and its power explains why negotiators in corporate mergers, representatives in business deals, and even those involved in divorces stand to benefit from beginning from an extreme initial position. Setting a high anchor mark influences the information “available” to both sides.

Once an event has occurred, it is easy to look back and reconstruct not only how it happened, and why it had to happen that way and not some other way, but also why we should have seen it coming all along. Known colloquially as “Monday morning quarterbacking,” the hindsight bias is the tendency to reconstruct the past to fit present knowledge.37 The hindsight bias went to work after December 7, 1941, when it became clear after the fact that the Japanese had always planned to attack Pearl Harbor. The proof was in the so-called “bomb plot message,” intercepted in October 1941 by U.S. intelligence, in which a Japanese agent in Hawaii was instructed by his superiors in Japan to monitor warship movements in and around the Oahu naval base. The “fateful” message was never passed up to President Roosevelt. There were, in fact, eight such messages dealing with Hawaii as a possible target that were intercepted and decrypted by intelligence agents before the attack. In hindsight and out of context, it looked like a terrible failure. In context and without hindsight, however, it was not at all clear where the Japanese were going to strike. In fact, from May through November 1941, army intelligence, concerned about security leaks and the possibility that the Japanese might discover that their codes had been broken, stopped sending all such memos to the White House. More critically, during the same period in which the eight messages involving ship movements in and around Hawaii were intercepted, no fewer than fifty-eight ship movement messages were intercepted in association with the Philippines, twenty-one involving Panama, seven regarding Southeast Asia and the Netherlands East Indies, and even seven connected to the U.S. West Coast!38
In like manner, one of the only predictable stock market effects is the emergence of clear hindsight the day after the market does anything out of the ordinary—moving up or down—regardless of the actual causes. In conditions of causal uncertainty, our cognitive biases kick into gear and drive us to concoct all sorts of probable causes. During the time I was writing this book, Google stock, as it is wont to do, took several major plunges, only to bounce right back. When the stock rose dramatically after a positive quarterly earnings report, the Monday morning money pundits proclaimed that investors were rewarding Google for a job well done. Yet the next quarterly earnings report, which was even more spectacular than the previous one that garnered investors’ favor, resulted in the stock’s plunging more than $40 in a matter of days. The cause was obvious to the stock analysts: investors were punishing Google for not increasing their quarterly earnings as much as these same analysts predicted that they would.
We connect the dots from our complex and seemingly chaotic world and construct narratives based on the connections we think we have found. Whether the patterns are real or not is a separate issue entirely. In neoclassical economics, however, the assumption was that no matter how muddy our storytelling became, the data would never lie. What economists had tried to ignore before Tversky and Kahneman was that we decide what the data say in the first place—and our brains evolved to deal with a world that bears only slight resemblance to the vast, messy crowds of information in the modern marketplace.
Before you sit two black bags filled with red and white marbles. In one of the bags, two-thirds of the marbles are red and one-third are white, while in the other bag one-third of the marbles are red and two-thirds are white. Your mission is to figure out which bag is mostly red and which is mostly white. You are allowed to pull five marbles out of bag number one and thirty marbles out of bag number two. From the first bag, you grab four red marbles and one white marble. From the second bag, you grab twenty red marbles and ten white marbles. Which bag would you predict is the mostly red one? Most people would say the first bag, because four out of five red marbles represents 80 percent, whereas twenty out of thirty red marbles in the second bag represents only about 67 percent. But statistically speaking, the smarter bet is bag two, because the sample size is larger and therefore far more informative about the bag’s actual contents.39
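A back-of-the-envelope Bayes calculation shows how lopsided the evidence actually is. Treating each draw as (approximately) independent, every red marble drawn doubles the odds that the bag is the mostly-red one, and every white marble halves them; the function below is my own sketch of that bookkeeping, not part of the original problem:

    def odds_mostly_red(red_drawn, white_drawn):
        # Likelihood ratio favoring the 2/3-red bag over the 1/3-red bag,
        # assuming independent draws: each red doubles the odds, each white halves them.
        return 2.0 ** red_drawn * 0.5 ** white_drawn

    print(odds_mostly_red(4, 1))    # 8.0    -> 8:1 odds that bag one is mostly red
    print(odds_mostly_red(20, 10))  # 1024.0 -> 1024:1 odds that bag two is mostly red

The smaller sample gives 8-to-1 odds; the larger sample gives better than 1,000-to-1, which is why bag two is the smarter bet despite its lower percentage of red draws.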
Because of the law of small numbers we tend to believe that small sample sizes are representative of the larger population. Investors make mistakes of this sort on a daily basis, when a short-term rise or fall in a stock is taken as deeply meaningful and thus serves as a trigger to buy or sell. The wise but counterintuitive approach would be to first call up those long-term trend charts to see if the sawtooth daily ups and downs of the stock are part of a trend up or down. Like global temperatures, they rise and fall daily, so basing decisions on small numbers of data points is unwise.
A corollary to the law of small numbers is that if your numbers are large enough, then a random and representative sample drawn from them will give you a good sense of the way the world really is. In science, if you run an experiment over and over, the observed probability will approach the real (or actual) probability. This is one meaning of the law of large numbers, but another one that I find useful in explaining odd occurrences is that if the numbers are large enough, something weird is likely to happen. That is, very improbable events will probably occur when there are a sufficiently large number of chances for them to happen. Million-to-one odds happen three hundred times a day in America. With three hundred million Americans running around doing their thing each day, it is inevitable that on any given nightly newscast, there will be a story about something really weird happening to someone somewhere.
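The arithmetic behind that claim is a one-liner; this sketch simply restates the numbers already in the text:

    p_event = 1e-6            # a "million-to-one" chance per person per day
    population = 300_000_000  # roughly three hundred million Americans

    print(f"expected occurrences per day: {p_event * population:.0f}")        # ~300
    print(f"chance it happens to no one:  {(1 - p_event) ** population:.2e}") # ~5e-131

The probability that such an event happens to no one at all on a given day works out to roughly 10 to the minus 130, which is to say, never.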
When it comes to the modern complex economy, which involves the manufacturing, distribution, and sale of billions of products traded by billions of people, weird things will happen on a fairly regular basis. For instance, compare the “most traded” stocks on any given day to the “biggest gainers” and “biggest losers” on those same days: they are never the same companies. The most traded stocks represent the largest and most popular companies with the most shares available to buy and sell, whereas the biggest gainers and losers almost always consist of companies you have never heard of because they are the outliers, the extremes. Because there are now so many companies whose stock is publicly traded, the law of large numbers practically guarantees that on any given day someone is going to make a huge amount of money while someone else is going to lose a large sum of cash. How can the human mind successfully manage choices with probabilities in this context?
As it happens, it isn’t very good at probabilities in the context of small numbers, either. On the classic television game show Let’s Make a Deal, contestants were forced to choose one of just three doors. Behind one of the doors the contestant would find (and win!) a brand-new car. Behind the other two doors the contestant would discover goats (and not even brand-new ones). If a contestant chose door number one and host Monty Hall, who knew what was behind all three doors, then showed her a goat behind door number two, would the contestant be smarter to stick with door number one or switch to door number three?
Most people figure that it doesn’t matter because now the odds are fifty-fifty. But in this situation, most people would be wrong. In fact, in this particular real-life example (first presented by Marilyn vos Savant in her weekly Parade magazine column), “most people” included not just the general public but scientists, mathematicians, and even some statisticians, who upbraided her in no uncertain terms for the ignorance of her ways.
Yet all of their intuitions and rationalizations, expert and otherwise, were misguided. Here’s the explanation: the contestant had a one in three chance before any door was opened, but once Monty shows one of the losing doors, she has a two-thirds chance of winning by switching. Why? There are three scenarios for the doors: (1) car goat goat; (2) goat car goat; (3) goat goat car. If the contestant faces scenario one, in which the car is behind the door she has chosen already, she loses by switching, but if she faces either scenario two or three, she will win by switching. Another way to reason around this counterintuitive problem in probabilities is to imagine that there are a hundred doors, not just three; the contestant chooses door number one and Monty shows her doors number two through ninety-nine—all goats. Now should she switch? Of course she should, because her chances of winning increase from one in a hundred to ninety-nine in a hundred.40
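For readers who still distrust the argument, a brief simulation settles it empirically. This is a sketch with arbitrary function names and an arbitrary number of games, but the logic mirrors the show: the car is hidden at random, Monty always opens a goat door, and the contestant either sticks or switches.

    import random

    random.seed(3)

    def play(switch, n_games=100_000):
        # Simulate the game and return the fraction of wins.
        wins = 0
        for _ in range(n_games):
            car = random.randrange(3)
            choice = random.randrange(3)
            # Monty opens a door that is neither the contestant's pick nor the car.
            opened = next(d for d in range(3) if d != choice and d != car)
            if switch:
                choice = next(d for d in range(3) if d != choice and d != opened)
            wins += (choice == car)
        return wins / n_games

    print(f"stick:  {play(switch=False):.3f}")   # about 1/3
    print(f"switch: {play(switch=True):.3f}")    # about 2/3

Sticking wins about a third of the time, switching about two-thirds, just as the three-scenario argument predicts.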
In our evolutionary past, small numbers mattered very much to our survival, whether we were dealing with small numbers of relatives, allies, or game animals. And small numbers could be best managed and manipulated not through probabilities but through stories with personal meaning attached. The Yanomamö’s three hundred SKUs represented the universe of valuable stuff—things attached to people in the band, and thus things with stories attached. To make sense of modern economic tasks, such as grabbing a bag of chips from an aisle of store shelves, playing card games in gambling casinos, and trading stocks on Wall Street, we think in terms of stories with personal meanings, too. To pretend that it is all probabilities, that people are purely rational calculators making economic decisions, is not natural—or accurate.
For example, imagine you are a contagious disease expert at the U.S. Centers for Disease Control and you have been told that the United States is preparing for the outbreak of an unusual Asian disease that is expected to kill 600 people. Your team of experts has presented you with two programs to combat the disease:
Program A: 200 people will be saved.
Program B: There is a one-third probability that 600 people will be saved, and a two-thirds probability that no people will be saved.
If you are like the 72 percent of the subjects in an experiment that presented this scenario, you chose Program A. Now consider another set of choices for the same scenario:
Program C: 400 people will die.
Program D: There is a one-third probability that nobody will die, and a two-thirds probability that 600 people will die.
Even though the net result of the second set of choices is precisely the same as the first, the participants in the experiment switched preferences, from 72 percent for Program A over Program B to 78 percent for Program D over Program C. We prefer to think in terms of how many people we can save instead of how many people will die—the “positive frame” is preferred over the “negative frame.”41
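A few lines of arithmetic confirm that the four programs are numerically interchangeable and that only the frame changes. The script below is an illustrative sketch, not part of the original experiment; it computes the expected number of deaths out of 600 under each option:

    # Expected deaths out of 600 under each program, whichever way it is framed.
    programs = {
        "A (200 saved for sure)":       600 - 200,
        "B (1/3 chance all 600 saved)": (1/3) * 0 + (2/3) * 600,
        "C (400 die for sure)":         400,
        "D (1/3 chance nobody dies)":   (1/3) * 0 + (2/3) * 600,
    }
    for name, expected_deaths in programs.items():
        print(f"{name}: expected deaths = {expected_deaths:.0f}")  # 400 in every case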
The power of these framing effects is especially noticeable in making decisions about investing, lending, and borrowing money. Take this financial conundrum:
1. Phones-are-Us offers the new DigiMusicCam cell phone for $100; five blocks away FactoryPhones has the same model half off for $50. Do you make the short trip to save $50?
2. Laptops-are-Us offers the new carbon fiber computer for $1,000; five blocks away CompuBlessing has the same model discounted to $950. Do you make the short trip to save $50?
Most people would take the trip in the first scenario but not the second, even though the amount saved, $50, is the same. Why? This is a type of framing problem called mental accounting, where we put monies into different categories depending on the frame, or context—in this case, small expenditures in one accounting bucket and large ones in another. Studies show that most people are less likely to make the effort to save money when the relative amount they are dealing with is small.
Imagine that you have purchased in advance a $100 ticket for an event you have been eagerly anticipating, but when you arrive at the venue you discover that you have lost the ticket and they won’t let you in unless you purchase another. Would you plunk down another hundred bucks? In experiments in which this scenario is presented, over half of subjects (54 percent) say that they would not. But now imagine that you have not prepurchased a ticket to your long-awaited event, and instead you arrive at the venue with two Franklins in your billfold, intending to purchase your ticket at the gate. As you pull out your cash, however, you discover that one of the hundreds fell out, leaving you with just one $100 bill. Would you still purchase the ticket? Interestingly, and tellingly, a vast majority of people (88 percent) said that they would.
Rationally, there is no difference in value between a $100 ticket and a $100 bill. In economic jargon, they are fungible, or interchangeable. The ticket and the bill are both pieces of paper of equal value to be used as a medium of exchange. Yet emotionally there is a difference between a lost $100 ticket and a lost $100 bill. People sort their money into different categories depending on its original source (where it came from), its current status (gold, cash, product, service), and how it is spent (now or later, a sure thing or a risky gamble).
Credit cards reframe cash into a different mental accounting category that makes it much easier to spend. MIT marketing professors Drazen Prelec and Duncan Simester put this principle to the test by hosting an actual sealed-bid auction for Boston Celtics basketball game tickets. Half the people were told that if they won the bid they would have to pay for the tickets in cash, while the other half were told that if they won the bid they could pay by credit card. The cash bidders offered barely half what the credit card bidders offered.42
Or imagine you are offered a gamble with the prospects of a 10 percent chance to win $95 and a 90 percent chance to lose $5. Would you accept the gamble? Most people say that they would not. And yet these same people answer rather differently when the gamble is rephrased in this manner: Would you pay $5 to participate in a lottery that offers a 10 percent chance to win $100 and a 90 percent chance to win nothing? Tellingly, most of those who rejected the first gamble accepted the second. Why? Kahneman and Tversky explain it in terms of mental accounting differences: “Thinking of the $5 as a payment makes the venture more acceptable than thinking of the same amount as a loss.”

Why should our brains be so ill equipped to handle money? The answer, in short, is folk economics: our brains did not evolve to intuitively equate equal value commodities with their symbolic representation in paper. The logic is not the same, and so illogic prevails.
Take the following thought experiment, known as the Wason Selection Test (after its creator, Peter Wason, who introduced it in 1966), that is designed to test symbolic reasoning. Before you are four cards, each with a letter of the alphabet on one side and a number on the other side. Two cards are showing numbers and two cards are showing letters, such as this:
M 4 E 7
Here is the rule: if there is a vowel on one side, there must be an even number on the other side. Here is your task: which two cards must be turned over in order to determine if the rule is true? Which cards would you turn over?
Most people correctly deduce that they do not have to check the other side of the “M” card, since the rule says nothing about consonants, and that they do have to flip over the “E” card, since an odd number on its reverse would invalidate the rule. Most people, however, incorrectly decide to flip over the “4” card. This is wrong because the rule says that if one side is a vowel, then the other must be an even number; it says nothing about whether an even number must be accompanied by a vowel, so the opposite side of the “4” card could be a vowel or a consonant without violating the rule. Finally, most people do not think that the “7” card must be checked, yet it must be: if its flip side is a vowel, the rule is violated. Hundreds of experiments employing the Wason Selection Test reveal that fewer than 20 percent of subjects correctly deduce that the answer (in this particular example) is that the “E” and “7” cards must be inspected.
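The rule is a simple material implication (“vowel implies even”), and a short sketch can brute-force which cards could possibly be falsified by their hidden sides. The card encoding and the ranges of hidden values below are my own illustrative choices:

    VOWELS = set("AEIOU")

    def rule_holds(letter, number):
        # The rule: if the letter side is a vowel, the number side is even.
        return letter not in VOWELS or number % 2 == 0

    visible = ["M", 4, "E", 7]  # the faces we can see; the hidden side could be anything

    for face in visible:
        if isinstance(face, str):  # a letter is showing, so the hidden side is a number
            must_check = any(not rule_holds(face, n) for n in range(10))
        else:                      # a number is showing, so the hidden side is a letter
            must_check = any(not rule_holds(letter, face)
                             for letter in "ABCDEFGHIJKLMNOPQRSTUVWXYZ")
        print(f"{face}: {'turn over' if must_check else 'leave'}")

Only the “E” and “7” cards can be contradicted by what is on their hidden sides, so only they need to be turned over.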
A modified version of the Wason Selection Test that personalizes the options reveals when humans do much better with logic. You are the bartender of a nightclub with a minimum legal drinking age of twenty-one. When you arrive on your shift there are four patrons at the bar. Here is the rule: people under twenty-one cannot be served alcohol. Here is your task: you can ask people their age, or check to see what they are drinking, but not both. Instead of four cards, here are your four options from which to choose:
Patron:      #1       #2        #3      #4
Drink/Age:   Water    Over 21   Beer    Under 21
In which cases should you ask patrons their age or check what they are drinking? This is a proverbial no-brainer. Obviously you do not need to check the age of #1, since the beverage is water; nor do you need to check the drink of #2, since this person is over twenty-one. Equally evident, the beer drinker, #3, could be under twenty-one, so you better check the age, and the under-twenty-one person #4 could be drinking beer, so you better check the drink. Almost everyone tested solves this logic problem correctly.
Since the logic of these two tests is the same, why do people do so poorly on the first task but so well on the second? The answer from evolutionary psychology is that the first task employs symbols and the second task involves people. In folk science terms, we did not evolve to adequately process symbolic logic problems, but as a social primate species we did evolve brain circuitry to deal with problems that involve other people, especially such social problems as deception and cheating. In response to the Homo economicus foundational belief that people have unbounded rationality, behavioral economists retort that we have “bounded rationality.” We might say that our rationality in the modern world of symbols and abstractions is bounded by the Paleolithic environment, in which our brains evolved to handle problems in the archaic world.
In the early 1990s, citizens in both New Jersey and Pennsylvania were offered two options for their automobile insurance: a high-priced option that granted them the right to sue, and a cheaper option that restricted their right to sue. The corresponding options in each state were roughly equivalent. In New Jersey, the default option was the more expensive one—that is, if you did nothing you were automatically enrolled in that plan—while in Pennsylvania, the default option was the cheaper one. In New Jersey, 75 percent of citizens enrolled in the high-priced insurance, while in Pennsylvania, only 20 percent opted for it.
Such findings support the notion that when making decisions, we tend to opt for what we are used to, the status quo.43 Research conducted by William Samuelson and Richard Zeckhauser reveals that when people are offered a choice among four different financial investments with varying degrees of risk, they select one based upon how risk-averse they are, and their choices range widely. But when people are told that an investment tool has been selected for them and that they then have the opportunity to switch to one of the other investments, they are far more likely to stick with the default; 47 percent stayed with what they already had, compared to the 32 percent who chose those same investment opportunities when none were presented first as the default option.44 The status quo represents what you already have (and must give up in order to change), versus what you might have once you choose.
Economist Richard Thaler calls the bias toward the status quo the endowment effect, and in his research he has found that owners of an item value it roughly twice as much as potential buyers of the same item. In one experiment, subjects were given a coffee mug valued at $6 and were asked what they would take for it. The average price below which they would not sell was $5.25. Another group of subjects, when asked how much they would be willing to pay for the same mug, which they did not own, gave an average price of $2.75.45
In our evolutionary past, this makes perfect sense. Before humans domesticated other species, they had to forage and hunt, sometimes in conditions of severe scarcity, or the threat of it, in order to survive. Those who survived most likely exhibited a strong predilection for hoarding as well. Nature endowed us with the desire to value, and dearly hold on to, what is ours. Of course, by putting so much value on what we already have, we can also overvalue it—to the point that the sunk cost blinds us to the value of future losses that we will sustain if we do not switch to something that we don’t already have.
Cognitive dissonance drives people to rationalize irrational judgments and justify costly mistakes, whether they are end-of-the-world cultists or political leaders. Consider an especially poignant contemporary example of this sunk-cost fallacy that has far-reaching consequences.
The war in Iraq is now over four years old. The cost so far: more than 3,100 lives, plus $200 million a day, which works out to $73 billion a year and more than $300 billion since forces landed in March 2003. That is a substantial investment on the part of the United States, and it does not account for the costs to other countries. American war costs are estimated to top out at over $1 trillion, and who knows how many more will die before it is done. So it is no wonder that through 2006, most members of Congress from both parties, along with President Bush and former president Clinton, believed that we had to “stay the course” and not just “cut and run.” As Bush explained in a Fourth of July speech at Fort Bragg, North Carolina: “I’m not going to allow the sacrifice of . . . troops who have died in Iraq to be in vain by pulling out before the job is done.” Clinton echoed the sentiment: “We’ve all got a stake in its succeeding.”46
We all make similar arguments about decisions in our own lives: we hang on to losing stocks, unprofitable investments, failing businesses, and unsuccessful relationships. But why should past costs influence us? Rationally, we should just compute the odds of succeeding from this point forward, and then decide if additional investment warrants the potential payoff. But we are conditioned to overvalue the status quo.
Pace Will Rogers, I am not a member of any organized political party. I am a libertarian. As a fiscal conservative and social liberal, I have never met a Republican or Democrat in whom I could not find something to like. I have close friends in both camps, and among them I have observed the following: no matter the issue under discussion, both sides are equally convinced that the evidence overwhelmingly supports their positions.
This surety is called the confirmation bias: the tendency to seek and find confirmatory evidence in support of already existing beliefs and to ignore or reinterpret disconfirmatory evidence. According to Tufts University psychologist Raymond Nickerson, the confirmation bias “appears to be sufficiently strong and pervasive that one is led to wonder whether the bias, by itself, might account for a significant fraction of the disputes, altercations, and misunderstandings that occur among individuals, groups, and nations.”47 Experimental examples abound. In a 1981 study by psychologist Mark Snyder, subjects were asked to assess the personality of someone they were about to meet. One group of subjects was given a profile of an introvert (shy, timid, quiet), while another group of subjects was given a profile of an extrovert (sociable, talkative, outgoing). When asked to make a personality assessment, those subjects who were told that the person would be an extrovert tended to ask questions that would lead to that conclusion; the introvert group did the same in the opposite direction.48 In a 1983 study, psychologists John Darley and Paul Gross showed subjects a videotape of a child taking a test. One group was told that the child was from a high socioeconomic class; the other group was told that the child was from a low socioeconomic class. The subjects were then asked to evaluate the academic abilities of the child based on the results of the test. The subjects who were told that the child was from a high socioeconomic class rated the child’s abilities as above grade level; the subjects who were told that the child was from a low socioeconomic class rated the child’s abilities as below grade level. What is remarkable about this study is that the subjects were looking at the same set of test results!49
The power of expectation cannot be overstated. In 1989, psychologists Bonnie Sherman and Ziva Kunda conducted an experiment in which they presented subjects with evidence that contradicted a belief they held deeply, and with evidence that supported that same belief. The results showed that the subjects acknowledged the validity of the confirming evidence but were skeptical of the value of the disconfirming evidence.50 In another 1989 study, by the psychologist Deanna Kuhn, when children and young adults were exposed to evidence inconsistent with a theory they preferred, they failed to notice the contradictory evidence, or if they did recognize its existence, they tended to reinterpret it to favor their preconceived beliefs.51 In a related study, Kuhn exposed subjects to an audio recording of an actual murder trial and discovered that instead of evaluating the evidence first and then coming to a conclusion, most subjects concocted a narrative in their mind about what happened, made a decision of guilt or innocence, then riffled through the evidence and picked out what most closely fit the story.52
A functional magnetic resonance imaging (fMRI) study conducted at Emory University under the direction of psychologist Drew Westen shows where in the brain the confirmation bias occurs, and how it is unconscious and driven by emotions.53 During the run-up to the 2004 presidential election, while undergoing a brain scan, thirty men—half self-described “strong” Republicans and half “strong” Democrats—were tasked with assessing statements by both George W. Bush and John Kerry in which the candidates clearly contradicted themselves. Not surprisingly, in their assessments, Republican subjects were as critical of Kerry as Democratic subjects were of Bush, yet both let their own preferred candidate off the evaluative hook.
The neuroimaging results, however, revealed that the part of the brain most associated with reasoning—the dorsolateral prefrontal cortex—was quiescent. Most active were the orbital frontal cortex, which is involved in the processing of emotions, the anterior cingulate, which is associated with conflict resolution, the posterior cingulate, which is concerned with making judgments about moral accountability, and—once subjects had arrived at a conclusion that made them emotionally comfortable—the ventral striatum, which is related to reward. “We did not see any increased activation of the parts of the brain normally engaged during reasoning,” Westen explained. “What we saw instead was a network of emotion circuits lighting up, including circuits hypothesized to be involved in regulating emotion, and circuits known to be involved in resolving conflicts.” Interestingly, neural circuits engaged in rewarding selective behaviors were activated. “Essentially, it appears as if partisans twirl the cognitive kaleidoscope until they get the conclusions they want, and then they get massively reinforced for it, with the elimination of negative emotional states and activation of positive ones.”
These neural correlates of the confirmation bias have implications that reach deeply into politics and economics. A judge or jury assessing evidence against a defendant and a CEO evaluating information about a company undergo this same cognitive process. And this brings us back to where we began: pattern-seeking, connect-the-dots thinking, fueled by the confirmation bias, that builds up incorrect folk intuitions. What can we do about it?
In science, we have built-in self-correcting machinery. Strict double-blind controls are required in experiments, in which neither the subjects nor the experimenters know the experimental conditions during the data collection phase. Results are vetted at professional conferences and in peer-reviewed journals. Research must be replicated in other labs unaffiliated with the original researcher. Disconfirming evidence, as well as contradictory interpretations of the data, must be included in a paper. Colleagues are rewarded for being skeptical. “Even with these safeguards in place,” Westen cautions, “scientists are prone to confirmatory biases, particularly when reviewers and authors share similar beliefs, and studies have shown that they will judge the same methods as satisfactory or unsatisfactory depending on whether the results matched their prior beliefs.” In other words, if you don’t seek contradictory data against your theory or beliefs, someone else will, usually with great glee and in a public forum, for maximal humiliation.
In their excellent 2000 book, Why Smart People Make Big Money Mistakes, Cornell University cognitive psychologist Thomas Gilovich and financial writer Gary Belsky consider the take-home lessons of the behavioral economics of business finance.54 A slender, handsome man who carries about him an air of intellectual aristocracy and scientific authority, Gilovich is not only one of the most creative experimentalists working in psychology today, he is an interdisciplinary thinker who is willing to apply the research protocols from cognitive psychology to other areas of human endeavor.
An experiment run on psychology students in a university laboratory is one thing, but do people really behave that way in the real world? According to Gilovich, they do. In thinking about our tendency to overvalue sunk costs, he points to the loss aversion effect, which shows that people tend to fear losses about twice as much as they desire gains.
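To make the “twice as much” figure concrete, here is a minimal Python sketch of a loss-averse valuation. The linear form and the coefficient of exactly 2 are my simplifying assumptions for illustration, not a formula from Gilovich and Belsky.

```python
LOSS_AVERSION = 2.0  # losses weighted roughly twice as heavily as gains

def subjective_value(dollars):
    """Illustrative loss-averse valuation: gains count at face value,
    losses count double (a deliberately simplified linear form)."""
    return dollars if dollars >= 0 else LOSS_AVERSION * dollars

# A fair coin flip: win $100 or lose $100. In dollars it is a break-even bet...
expected_dollars = 0.5 * 100 + 0.5 * (-100)
# ...but subjectively it feels like a losing one.
expected_feeling = 0.5 * subjective_value(100) + 0.5 * subjective_value(-100)
print(expected_dollars, expected_feeling)  # 0.0 -50.0
```

On this simplified accounting, a break-even coin flip feels like a sure loss of $50, and it becomes subjectively acceptable only when the potential gain is about twice the potential loss.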
Gamblers, for example, are highly sensitive to losses, but not in the way you might think. They tend to follow a losing hand by placing bigger bets and turn conservative after a winning hand by placing smaller bets. One rationale for this strategy is “double up to catch up”—no matter how many losses in a row, if you double the bet each time, you will get back all of your money when you eventually do win. But most gamblers tend to underestimate the number and length of losing streaks. On a $10 minimum table, if you start with the minimum bet and have a losing streak of, say, eight in a row (not as uncommon as you might think), you would have to be prepared to plunk down $2,560 on the ninth hand to stick to your strategy. More important, gamblers also tend to underestimate the number and length of winning streaks and lose out on the reward of placing larger bets during them. Of course, even with an optimal betting strategy that plays to win every hand, and keeping loss aversion in check, if you play long enough you will lose because of the slight edge to the house built into the rules of the game. But casinos make even more money than the house percentage would predict because of our loss aversion.55
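For readers who want to check the arithmetic, here is a short Python sketch of the “double up to catch up” sequence on a $10-minimum table, plus a rough simulation of how often an eight-loss streak appears. The simulation treats each hand as a fair coin flip, which is slightly kinder to the player than any real casino game; the session length and number of trials are arbitrary choices for illustration.

```python
import random

# "Double up to catch up": after each loss, double the bet.
# Bets during a losing streak on a $10-minimum table: 10, 20, 40, 80, ...
bets = [10 * 2**k for k in range(9)]
print(bets)           # the ninth bet, after eight straight losses, is $2,560
print(sum(bets[:8]))  # $2,550 already lost before that ninth bet goes down

# Chance that any particular sequence of eight hands is all losses,
# treating each hand as a coin flip (real games are worse for the player).
p_loss = 0.5
print(p_loss ** 8)    # about 0.004, i.e., roughly 1 in 256

# How often does a session of 200 hands contain at least one streak
# of eight or more consecutive losses?
def has_long_losing_streak(hands=200, streak=8, p=p_loss):
    run = 0
    for _ in range(hands):
        if random.random() < p:  # a loss
            run += 1
            if run >= streak:
                return True
        else:
            run = 0
    return False

trials = 10_000
hits = sum(has_long_losing_streak() for _ in range(trials))
print(hits / trials)  # typically prints a value around 0.3
```

The doubling sequence reproduces the $2,560 ninth bet mentioned above, and in this simplified setup roughly three sessions in ten contain at least one eight-loss streak, which is why the strategy demands a far deeper bankroll than most gamblers expect.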
Yet the cognitive biases and fallacies that afflict us also provide useful insight into the mind of the market. Playing the stock market by buying and selling individual stocks on a regular basis is little different from gambling at a casino, and your odds of coming out ahead, or even of merely matching the market’s overall long-run performance, are about as good. Studies show that even professional investors and market analysts rarely perform as well as an indexed mutual fund over the long run.