Some people think of the glass as half full. Some people think of the glass as half empty. I think of the glass as too big.
—George Carlin
Imagine that the United States is preparing for an outbreak of an unusual Asian disease that's expected to kill six hundred people. Two different medical programs to combat the disease have been proposed, and their consequences are as follows:
Program A: If this program is adopted, two hundred people will be saved.
Program B: If this program is adopted, there is a one-third probability that six hundred people will be saved, and a two-thirds probability that no one will be saved.
Which of these two programs would you favor?1 If you're like most people, you would choose program A. Now consider the following case. Imagine, once again, that an outbreak of an Asian disease is expected to kill six hundred people. Two other programs (C and D) are available to combat the disease.
Program C: If this program is adopted, four hundred people will die.
Program D: If this program is adopted, there is a one-third probability that nobody will die, and a two-thirds probability that six hundred people will die.
Which of these two programs would you favor? Most people choose program D. In fact, psychologists Amos Tversky and Daniel Kahneman found that when these decisions were given to two different groups of people, 72 percent who saw the first context preferred program A, while 78 percent who saw the second context preferred program D. You may not have picked A and D since they were presented side-by-side and you could directly compare them, but if you saw only one condition, there's a strong likelihood you'd pick A in the first case and D in the second. What's wrong with that? Program A is identical to program C, while program B is identical to program D. If two hundred lives will be saved (program A), then four hundred people will die (program C), so to be consistent, if you picked A, you should have picked C.
These two decision scenarios offer the same choices, but we react very differently. Why? The “frame” of the problem has changed. In the first case, we focus on saving lives and are in a gain frame, while in the second scenario we focus on losing lives and are in a loss frame. In essence, our decisions can change if we frame the problem as a gain or a loss. Viewing the proverbial glass as half full or half empty really does affect our judgments!
This framing effect has been found in a variety of personal and professional decision contexts. For example, seventy-one experienced managers responded to a similar decision in a business context. In this case, the managers would either lose $400,000 or save $200,000 if they chose the first alternative. Only 25 percent of the managers chose that alternative when it was framed as losing $400,000, but 63 percent chose it when framed as saving $200,000.2
Framing can even affect life-and-death decisions. One study asked 1,153 patients, doctors, and graduate students whether they would choose radiation therapy or surgery for lung cancer. Some saw the decision framed in terms of living, while others saw it framed in terms of dying. For example, about half were told that, with surgery, there was a 68 percent chance of living for more than one year. The other half were told that, with surgery, there was a 32 percent chance of dying by year's end. Surgery was selected 75 percent of the time in the survival frame and 58 percent in the mortality frame.3 Even for decisions as important as surgery, many people would make a different choice depending upon the language of the frame. You can imagine the power that some people have to manipulate public opinion just because they know how to ask a question to get their desired response.
So the framing of a decision can affect our choices—but why? It turns out that we have a natural tendency to be risk averse for gains and risk taking for losses. To see what I mean, choose between the following two options.
Option A: A sure gain of $1,000
Option B: A 50% chance of gaining $2,000, and a 50% chance of gaining nothing
Most people are risk averse for this decision. They want the guaranteed gain of $1,000, as opposed to a gamble where they could gain $2,000 but may also gain nothing. Now consider the following decision:
Option A: A sure loss of $1,000
Option B: A 50% chance of losing $2,000, and a 50% chance of losing nothing
Most people choose option B in this case because they don't want to experience a certain loss of $1,000. We're willing to take a chance of losing a greater amount if there's also a chance of losing nothing. In essence, we're generally risk averse for gains (we go for the sure thing) and risk taking for losses (we're willing to take a gamble). That's okay—it's just a natural human tendency. But these risk preferences can cause us problems. As we saw in the “lives saved / lives lost” decision, our judgments about identical alternatives can change just because the options are framed as gains or losses. So we have to be aware that our decision frame can affect our choice. What can we do about it? Whenever possible, frame the decision in different ways and see if your judgment changes. If it stays the same, you can be confident in your choice—but if it changes, you need to think a bit more about your preferences.
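To make the arithmetic concrete, here is a small Python sketch (my illustration, not part of the original studies) that computes the expected value of each option above:

```python
# Expected value of each option in the two decisions above.
# A gamble is a list of (probability, payoff) pairs, payoffs in dollars.

def expected_value(gamble):
    """Sum of probability-weighted payoffs."""
    return sum(p * payoff for p, payoff in gamble)

options = {
    "Gain frame, option A (sure $1,000 gain)": [(1.0, 1_000)],
    "Gain frame, option B (50/50: $2,000 or nothing)": [(0.5, 2_000), (0.5, 0)],
    "Loss frame, option A (sure $1,000 loss)": [(1.0, -1_000)],
    "Loss frame, option B (50/50: -$2,000 or nothing)": [(0.5, -2_000), (0.5, 0)],
}

for name, gamble in options.items():
    print(f"{name}: EV = ${expected_value(gamble):+,.0f}")

# Within each frame the two options have identical expected values, yet
# most of us take the sure thing for gains and the gamble for losses.
```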
WE HATE TO LOSE!
Imagine that you just lost $1,000. How would you feel? Now imagine that you just won $1,000. Most of us would love to win $1,000, but would have a stronger reaction to losing $1,000. A loss of $1,000 is felt more than a gain of $1,000. This is a phenomenon that psychologists call loss aversion—losses loom larger than gains for most of us. In essence, we hate to lose! Our loss aversion is one reason we're willing to take more risks in loss contexts—we just don't want to accept a sure loss.
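Behavioral economists often capture this asymmetry with Tversky and Kahneman's value function. The sketch below is only an illustration; the parameters (alpha near 0.88, lambda near 2.25) are the commonly cited estimates from their 1992 paper, not figures from this chapter:

```python
# Illustrative prospect-theory value function (Tversky & Kahneman, 1992).
# ALPHA bends the curve toward diminishing sensitivity; LAM scales losses.
# Parameter values are the commonly cited estimates, used here only to
# illustrate loss aversion.

ALPHA = 0.88
LAM = 2.25

def value(x, alpha=ALPHA, lam=LAM):
    """Subjective value of gaining (x > 0) or losing (x < 0) x dollars."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

print(value(1_000))   # subjective value of winning $1,000 (about +437)
print(value(-1_000))  # subjective value of losing $1,000 (about -982)
# The loss is felt roughly twice as strongly as the equivalent gain.
```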
This desire to avoid losses leads to a number of faulty decisions. For example, investors tend to sell winning investments quickly and hold on to losing investments. Studies show that we're more likely to sell stocks that rise in price rather than those that fall. This is often a bad decision. In fact, one study found that the stocks investors sold outperformed the stocks held by about 3.4 percent over the subsequent twelve months.4 Why do we do it? We want to lock in a sure gain, and we don't want to accept a sure loss. As a result, we sell stocks that go up in price to realize the gain, and hold stocks that drop, hoping they'll recover. Unfortunately, some of those stocks continue to drop and we lose more money than if we had just cut our losses. In our attempt to avoid the pain of a loss, we hold on to losers much too long, causing us greater pain down the road.
Loss aversion also explains an interesting phenomenon known as the endowment effect. Consider the following decision.
Suppose you won a pair of tickets to a sporting event that you want to go to. Someone you don't know finds out that you have the tickets and wants to buy them. What's the minimum price you would be willing to sell the tickets for?
Now assume that you don't have tickets to the sporting event, but you want to go. How much would you be willing to pay someone for the tickets?5
We typically demand about twice as much to sell a ticket we already have, compared to the price we would pay to buy the ticket. Why? We don't want to lose what we have. We therefore overvalue what belongs to us and undervalue what belongs to others.6 As a demonstration, Professor Richard Thaler gave a group of students a coffee mug embossed with their school's logo. When the students were later asked how much they would be willing to sell the mug for, their average price was $5.25. But other students without a mug would only pay an average of $2.75 to buy one.7 Ownership actually increases the value of what we have.
Our propensity to overvalue the things we have is exploited every day by the business community. Marketers understand that when we buy a product and take it home, the endowment effect will take over and we won't want to return it. And so, we see furniture retailers enticing us to take home a new dining room set today—without starting our payments for one full year. In other cases, we can get a free Internet connection, or a reduced rate on cable TV or phone service, for a number of months. Getting a product on a trial basis or for a reduced initial price lures us in, and once we have it, we're reluctant to give it up.8
At some gas stations, fuel is less expensive if purchased with cash instead of credit. Credit card companies encourage the stations to call the difference a cash discount as opposed to a credit card surcharge.9 Why? Surcharges are seen as out-of-pocket losses, while cash discounts are viewed as gains. Although the fee structure is the same for both, we react more to a surcharge “loss,” so we're less likely to use a credit card if a surcharge is involved. Loss aversion may also complicate our negotiations with others, because each party considers its concessions to be losses, and those losses loom larger than the gains received from the other negotiating party. Loss aversion may even explain an incumbent politician's advantage in elections, since the potential loss from an unfavorable change may be viewed to be greater than the potential gain from a favorable change in leadership.10 Our desire to avoid losses can have far-reaching effects.
MENTAL ACCOUNTS
Suppose you bought a ticket for $75 to see your favorite sports team play, but when you get to the ballpark, you realize that you've lost the ticket. Are you going to spend another $75 to see the game? Now suppose that you go to the ballpark expecting to buy a $75 ticket at the window. When you check your wallet, you realize that you have more than enough money to buy the ticket, but that you've just lost $75. Would you buy the ticket?
When people are given decisions like these, most answer no to the first question and yes to the second. One study found, for instance, that only 46 percent of us would buy the ticket in the first case, while 88 percent would buy it in the second.11 Why is that? In the first case, we put both cash outlays of $75 in the same “mental account” because both amounts relate to buying the ticket. We start to think that the game is costing us $150, which is more than we're willing to spend. In the second case, the two amounts are put in different mental accounts because we don't associate the lost money with the ticket price, and so we buy the ticket. But we're in the same position at the end of the day if we buy the ticket—we get to see the game and we're out to the tune of $150. Yet our decisions differ.12
With mental accounting, we pigeonhole our money into different categories or accounts, and then treat that money differently depending upon the account in which it's kept. In fact, we can waste our money because of mental accounts.13 Traditional economics says that all money should be fungible—it shouldn't matter if it comes from our salary, from a gift, or from gambling winnings. The money in each case should have the same value to us, so we should spend it the same way. But that's not how we act. We often spend money that we receive as a gift or from gambling much more freely than money we had to work for. This even applies to our tax refunds. We frequently think of a tax refund as a windfall, and so we're more likely to spend it frivolously. However, a refund is really a deferred payment of our salary, a type of forced savings. If we save money from our paycheck, we typically give considerable thought to how we're going to spend it, but we don't do that with a tax refund. Why? We put the refund into a separate mental account.14
I often travel to Australia to present and discuss research projects at various universities. Any stipend that I receive is quickly, and extravagantly, spent. I buy more expensive meals and spend much more on wine and beer. I'll often buy a $75 bottle of wine to have with dinner down under, while back in the United States I spend only around $25. Why? I don't consider my stipend to be part of my normal salary, and so it goes in a different mental account. While I have a great time, I'm making very different financial decisions than I would if I were at home—all because of my mental accounts.
The size of our mental accounts can also affect our financial decisions. How would you act in the following two situations?15
You're at a store to buy some new computer software that costs $100. The salesperson tells you that the same software is on sale at another store that's a ten-minute drive away for $75. Would you go to the other store?
You're at a store to buy a new computer that costs $1,900. The salesperson tells you that the same computer sells for $1,875 at another store that's a ten-minute drive away. Would you go to the other store?
Most of us would go to the other store in the first case, but not the second. The percentage reduction in price shouldn't matter; we should compare only the dollar savings with the time spent to get them. However, we use mental accounts and compare the savings to the size of the account. Since we want a good deal, we're more than willing to make the drive to save $25 in the first case, but not the second.16
Credit cards are a type of mental account. Somehow, our money gets devalued if we use plastic, which is ironic since credit cards typically cost us more after we factor in their high interest rates. As an example, two professors at MIT conducted a sealed-bid auction for tickets to a Boston Celtics game. Half of the participants were told that if they won the bid they would have to pay for the tickets in cash, while the other half were told they would have to pay by credit card. Amazingly, the average credit card bid was about twice as high as the average cash bid!17 Our credit card mental account can cost us big bucks.
Mental accounts can also affect our risk-taking behavior. Finance professor Richard Thaler asked a group of divisional managers if they would invest in a project that had a 50 percent chance to gain $2 million and a 50 percent chance to lose $1 million. The expected value of the project is a profit of $500,000—not a bad investment—but only three out of twenty-five executives would take the gamble.18 Why? They were using a narrow mental account that included only one investment project, and were not willing to take the chance of losing on the project. But if they expanded their account to include other similar investments, they might be more than willing to take the risk. In fact, when the company CEO was asked if he would invest in twenty-five such projects, he enthusiastically said yes, because in the long run the company is likely to come out ahead. The moral of the story—if you're too risk averse in your business dealings, you should expand your mental account.19
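A quick simulation makes the CEO's logic visible. This is a hypothetical sketch that assumes the twenty-five projects are independent and each pays off exactly as described above:

```python
# Chance of losing money on one project versus a portfolio of 25,
# assuming independent projects that each gain $2M or lose $1M with
# equal probability (the gamble described above).

import random

random.seed(0)  # reproducible results

def portfolio_profit(n_projects):
    """Total profit, in millions, across n independent projects."""
    return sum(2 if random.random() < 0.5 else -1 for _ in range(n_projects))

def prob_of_loss(n_projects, trials=100_000):
    losses = sum(portfolio_profit(n_projects) < 0 for _ in range(trials))
    return losses / trials

print(f"P(overall loss), 1 project:   {prob_of_loss(1):.1%}")   # ~50%
print(f"P(overall loss), 25 projects: {prob_of_loss(25):.1%}")  # ~5%
# Widening the mental account from one project to twenty-five cuts the
# chance of an overall loss from a coin flip to about one in twenty.
```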
Mental accounts also make us evaluate the results of our financial decisions in a faulty manner. Remember my friend Chris's story? Someone he knew made a killing on just a couple of stock investments. When I asked about his friend's other investments, he downplayed their importance—because they were losers. Many investors put their stock gains in one mental account and their losses in another. They then focus on the gains and explain away the losses (e.g., some outside force beyond their control, like an overall downturn in the economy, may have caused the loss). This is a classic case of focusing on the hits and de-emphasizing the misses. If we want an accurate evaluation of our investment performance, we need to expand our mental account to include both gains and losses.
Do mental accounts lead you to make poor financial decisions? Ask yourself the following two questions: (1) Do I have emergency or other money in a savings account that's not for retirement? (2) Do I owe money on my credit cards that's carried over from month to month? If you answered yes to both questions, you're making poor decisions because of mental accounting. Why? You're paying a high interest rate on your debt and receiving a low rate on your savings. It's better to pay off your credit card, and if you need money for an emergency down the road, put it on the card.20 When making your personal financial decisions, it's usually smarter to pay off your debt as soon as possible. If you have a $3,300 balance on your credit card and you're charged 18 percent interest, it would take nineteen years to pay off the debt if you make only the minimum monthly payment. If you paid just $10 more than the minimum each month, you would pay off the debt in only four years and save about $2,800 in interest!21
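You can check this kind of interest arithmetic with a short amortization loop. In the sketch below, the minimum-payment rule (2.5 percent of the balance, with a $20 floor) is my assumption for illustration; card issuers use different formulas, so exact payoff times will differ from the figures quoted above:

```python
# Months needed to pay off a card balance under a given payment rule.
# ASSUMPTION: the minimum payment is 2.5% of the balance with a $20
# floor; real issuers use different formulas, so results will vary.

def months_to_payoff(balance, apr, extra=0.0):
    """Return (months, total interest paid) for a fixed extra payment."""
    monthly_rate = apr / 12
    months, total_interest = 0, 0.0
    while balance > 0.005:  # stop once the balance is under half a cent
        interest = balance * monthly_rate
        minimum = max(20.0, 0.025 * (balance + interest))
        payment = min(balance + interest, minimum + extra)
        balance = balance + interest - payment
        total_interest += interest
        months += 1
    return months, total_interest

for extra in (0, 10):
    months, paid = months_to_payoff(3_300, 0.18, extra)
    print(f"${extra} extra per month: {months} months "
          f"(about {months / 12:.1f} years), ${paid:,.0f} in interest")
```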
So what can we take away from all this? We pigeonhole our money into mental accounts, and that accounting can result in a number of unwise financial decisions. How can we overcome those problems? Treat all your money equally, whether you get it from your salary, savings, gifts, or gambling winnings. One of the best ways to do that is to first put all your money into a savings or investment account before you spend it. This little bit of advice seemed to help a student in my critical thinking course. He went to a casino and won $800, a considerable sum for an undergraduate. On his way to blowing his windfall, he stopped and thought about how his mental accounts were affecting his decision. He was in dire need of cash at the time, so he came home with the money, put it in the bank, and used it to live on for a couple of weeks. It's one thing if you have excess money to blow, but if not, treating all your money equally will reduce reckless spending and result in more informed financial decisions.
20/20 HINDSIGHT
Why do blacks dominate basketball? All sorts of reasons have been proposed, including genetics. Some people believe that blacks are better basketball players because they can jump higher and run faster. By this logic, it's no wonder blacks are superior at the game; in fact, some people could not imagine it otherwise. But when making such inferences, they're being caught up in the hindsight bias. No matter what the event, people can come up with causal explanations that make the event seem as if it had been obvious from the start. It's obvious to many people that blacks dominate professional basketball for genetic reasons. But consider the following facts.
At one time, Jews dominated the game. Basketball was primarily an east-coast, inner-city game from the 1920s to the 1940s, and it was played, for the most part, by the oppressed ethnic group of that time—the Jews. Investigative journalist Jon Entine noted that when Jews dominated basketball, sports writers developed many reasons for their superior play. As he states, “Writers opined that Jews were genetically and culturally built to stand up under the strain and stamina of the hoop game. It was suggested that they had an advantage because short men have better balance and more foot speed. They were also thought to have sharper eyes…and it was said they were clever.”22 Paul Gallico, one of the premier sports writers of the 1930s, said the reason basketball appealed to Jews was that “the game places a premium on an alert, scheming mind, flashy trickiness, artful dodging and general smart aleckness.”23 Notwithstanding the insulting stereotype, I'm amazed how we think we know the cause for something after the fact—even if that presumed cause is quite absurd.
Were World War II, the attack on Pearl Harbor, the Challenger and Columbia space shuttle disasters, and the escalation of the Vietnam War inevitable? With hindsight, people often answer yes. But if these events were so inevitable, why weren't they predicted? There are usually many uncertainties before an event occurs. But when we know the outcome, we forget about those uncertainties and think that the event was likely to happen all along. Psychologist Baruch Fischhoff demonstrated this tendency with a true historical account of a battle between British forces and the Gurkhas of Nepal.24 Fischhoff had people read about the battle, told some of them that the British actually won, and told others nothing about the outcome. Everyone then had to assess the likelihood that the British won, the Gurkhas won, or a stalemate occurred, based only upon the battle's description (i.e., as if they didn't know the outcome). Those who were told the British won thought there was a 57 percent probability of a British victory, while those not told the outcome thought there was only a 34 percent chance that the British won.
Once we know an outcome has occurred, two things happen: (1) The outcome seems inevitable, and (2) We easily see why things happened the way they did. In effect, if we know the outcome of an event, we restructure our memory. We don't remember the uncertainties that were evident before the event occurred; instead, we reconstruct the past given our knowledge of what actually happened.25 It's the curse of knowledge!
Why is hindsight bias important? For one thing, it affects how we judge others. If our company lost market share and put our job in jeopardy, we may think, “Our CEO should have known that the competition was going to market a new innovation—just look at the evidence.” But if we consider all the uncertainties that existed prior to knowing the outcome, we might have made the same decision as the CEO. Hindsight bias also inhibits how we learn from experience, because if we're not surprised by an outcome, we tend not to learn much from that outcome.
So how can we mitigate the problems of hindsight? Just informing people about the bias is typically not enough. As with many other problems discussed here, one of the best ways to reduce the bias is to consider the alternative—consider how an alternative outcome could have occurred. In so doing, we pay attention to information that supports an alternative outcome, which should open up the possibility that the actual outcome may not have been obvious from the start.26
OVERCONFIDENCE
With all the ways our decisions can go wrong, you would think we'd have a little humility about our ability to make accurate judgments—but we don't. Research has consistently demonstrated that we're overconfident in the judgments we make. And these include the judgments of professionals like doctors, lawyers, security analysts, and engineers. One study showed, for instance, that when doctors diagnosed pneumonia, they were 88 percent confident in their diagnoses, even though their patients had pneumonia only 20 percent of the time. Sixty-eight percent of lawyers believe that they will win their case, when only 50 percent can. When people predicted whether stocks were going to rise or fall from market reports, only 47 percent of their predictions were correct, but their average confidence was 65 percent. Over 85 percent of us think we're better drivers than the average person. In most every aspect of life, we consistently overrate our knowledge and abilities.27
Of course, in some cases overconfidence helps us achieve things we normally wouldn't. Few people would start a new business if they thought it wasn't going to succeed, yet over two-thirds of small businesses fail within the first four years. However, overconfidence can also have catastrophic results. Before the space shuttle Challenger exploded, NASA estimated the probability of a catastrophe at one in one hundred thousand launches. That's equivalent to expecting just one failure if the shuttle were launched every day for nearly three centuries! With such confidence, it's no wonder NASA thought they could launch the shuttle under extremely adverse conditions.
Overconfidence also leads to the planning fallacy. Do you normally underestimate the time or expense needed to complete a project? Most of us do. When students estimated how long it would take to write their theses, their average estimate of 33.9 days fell way short of the 55.5 days it actually took.28 Government projects are particularly susceptible to the planning fallacy. When the Australian government decided to build the famous Sydney Opera House in 1957, they thought it could be completed by 1963 at a cost of $7 million. In fact, a scaled-down version opened in 1973 at a cost of $102 million. The city of Boston recently constructed a new underground highway system known as the “Big Dig.” The initial estimates indicated the project would be completed in 1998 and cost $2.6 billion. The majority of the work was completed by 2005, with a price tag of over $14 billion!29
Research consistently reveals little or no relation between our confidence and accuracy. As an example, when clinical psychologists and students repeatedly evaluated patients after receiving increasing amounts of information, confidence in their judgments went up, but accuracy stayed about the same.30 Particularly disconcerting is the conclusion of psychologist Elizabeth Loftus, who studies the relationship between eyewitness court testimony and accuracy in criminal identification: “One should not take high confidence as any absolute guarantee of anything.”31 Even when eyewitnesses are extremely confident in their identifications, they are often wrong. Studies have also found no relation between confidence and accuracy when clinicians diagnose brain damage, or when physicians diagnose cancer or pneumonia.32 In effect, physicians are as confident on the cases they misdiagnose as they are on the cases they diagnose correctly. Just because we think we know something doesn't always mean we do.
One reason we're overconfident is that we remember the hits and forget the misses—we often remember the times we're successful and forget the times we fail. It's a bit more complicated, however, because sometimes our failures are our most vivid memories. It turns out that even when we remember our failures, we interpret them in a way that still bolsters our belief in ourselves. Ellen Langer, a Harvard psychologist, calls it the “heads I win, tails it's chance” phenomenon.33 As we saw with gamblers' behavior, if we're successful, we think the positive outcome was caused by our knowledge and ability. If we're unsuccessful, we think the negative outcome was caused by something we had no control over. As a result, we reinterpret our failures to be consistent with an overall positive belief in our abilities.
So what can we do about overconfidence? Try to think about the reasons why your judgment may be wrong. In a sense, this is similar to considering alternative hypotheses. If we evaluate alternative hypotheses, and the reasons why those alternatives could be correct, we'll implicitly consider evidence contrary to our current belief or judgment, which should keep our overconfidence in check. Considering the alternatives is one of the most effective methods we have to counter many of our problematic judgment biases.
INTUITIVE JUDGMENTS
Since we're often overconfident, we tend to think that our intuitive judgments are quite accurate. When we make intuitive judgments, we collect various pieces of information, evaluate the importance of the information, and then somehow combine the data in a subjective way to arrive at our decision. We like to think that these intuitive judgments are more accurate than just relying on statistical data alone because subjective assessments allow us to use our own personal expertise in the decision process. Of course, these judgments can be pretty good at times. But as you might expect, they can also result in errors and serious consequences. This is especially true when professionals make intuitive judgments that have a significant impact on our lives. Consider, for example, the college admissions decision.
When we apply to college, our fate is in the hands of an admissions committee. While committee members examine hard statistical data like a student's prior grade point average and SAT scores, at some schools they also place considerable importance on interviewing the prospective student. Committee members like to think they can see some intangible quality during an interview that allows them to predict whether the student will be successful in college. They then subjectively assess all the information to arrive at their own intuitive assessment of the applicant.
The problem is, interviews are notoriously unreliable in predicting future success. As psychologist Robyn Dawes points out, it's presumptuous to think that someone can learn more about a student's abilities in a half-hour interview than by examining their grade point average, which describes the student's performance over four years.34 In fact, personal assessments from interviews can be harmful, because they lack both reliability and validity. Dozens of studies have shown that an interviewer's assessment isn't a good indicator of an applicant's future success—and different interviewers often don't even agree with one another's assessment.35 Yet, many colleges employ interviews as a main ingredient in their acceptance decisions.
Why do we continue to believe in the value of interviews? We think that our intuitive judgment is better than relying on statistical data. Part of the problem comes, once again, from remembering the hits and forgetting the misses. An admissions committee member is likely to remember the time he accepted a student with poor grades on a hunch, and the student went on to perform very well in school. Such a memory can only bolster one's confidence in his intuitive judgment. Unfortunately, the committee member is likely to forget the times he accepted a student on a hunch, and the student performed poorly. It's no wonder that we think we have special skills that just can't be replicated by relying on only statistical data. In addition, we think it's just not right to base major decisions on statistics alone—we think it's soulless. Many students would vehemently complain if rejected based solely on their past statistics, arguing that they need to be interviewed to uncover their true potential as a student.
The fact is, however, volumes of research indicate that we would make more accurate decisions if we relied on statistical predictions instead of intuitive predictions. With statistical prediction, we don't use our subjective judgment to assess and combine different bits of information. Instead, we combine the information statistically or mathematically. In the college admissions case, for example, we can just add up a student's grade point average, SAT score, and numerical evaluations of recommendation letters, and then use that sum to predict a student's future success in college.36 The higher the number, the more likely the student will perform well. No overall subjective assessment is needed.
Decades of research have demonstrated that such simple statistical models do a better job than intuitive judgments in many decision contexts. In fact, statistical prediction has been shown to be better than intuitive prediction in over one hundred studies. These include predicting the success of students in college, the suicide attempts of psychiatric patients, the job satisfaction of engineers, the growth of corporations, whether a parolee will violate parole, whether patients are neurotic or psychotic, the amount of psychiatric hospitalization required, and a patient's response to electroshock therapy.37 And in almost all of these cases, experts provided the intuitive predictions.
For example, one study investigated the accuracy of the graduate admissions committee at the University of Oregon. The committee used their professional judgment to predict the future success of students, given information like undergraduate grade point average (GPA), Graduate Record Exam (GRE) scores, and assessments of the quality of the undergraduate institution.38 The committee's judgments were then correlated with student performance in school after a two- to five-year period (based on faculty ratings at that time). The correlation was only 0.19, a very weak relationship. In contrast, just adding up student scores on the relevant variables (e.g., GPA and GRE scores) yielded a correlation of 0.48. We would be more accurate if we just relied on basic statistical data combined in a very simple way, as opposed to relying on the intuitive assessments of the professionals.39
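Robyn Dawes called this add-them-up approach an “improper” or unit-weight linear model: standardize each predictor and simply sum the results. The sketch below shows the recipe on made-up numbers; the applicants and their scores are hypothetical, not data from the Oregon study:

```python
# A unit-weight ("improper") linear model: standardize each predictor,
# sum the z-scores, and rank applicants by the total. The data below
# are made up; only the recipe mirrors the studies described above.

from statistics import mean, stdev

def zscores(xs):
    """Standardize a list of numbers to mean 0 and standard deviation 1."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

# Hypothetical applicants: GPA, GRE score, and school-quality rating.
gpa = [3.9, 3.2, 3.6, 2.8, 3.4]
gre = [165, 150, 158, 145, 155]
school = [4.0, 2.5, 3.0, 2.0, 3.5]

# Unit weighting: every standardized predictor counts equally.
totals = [sum(z) for z in zip(zscores(gpa), zscores(gre), zscores(school))]

for i in sorted(range(len(totals)), key=lambda i: -totals[i]):
    print(f"applicant {i + 1}: score {totals[i]:+.2f}")
# No overall subjective assessment is needed: the applicant with the
# highest total is predicted to perform best.
```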
What about the decision to grant a criminal parole? Parole boards rely heavily on interviews with criminals. One study found that out of 629 criminals who were granted parole, all but one of the decisions were consistent with the recommendation of the interviewer. But was the interviewer's intuitive judgment any good? The parole board itself judged about 25 percent of its decisions to be failures within one year of release, because the parolee committed another crime or violated parole. A model that used only background statistics, like the type of crime originally committed, the number of past crimes, and the number of prison rules violated, was much more accurate than an interviewer in predicting these failures.40
Even the intuitive predictions of medical doctors can be poor when compared to statistical predictions. One study had doctors estimate the life expectancies of 193 patients with Hodgkin's disease. Although the doctors thought they could accurately make the prediction, their judgments were totally unrelated to a patient's survival time, and a statistical model performed considerably better.41 One area in which statistical prediction is used extensively is loan applications. About 90 percent of consumer loans and all credit card issuances are based on statistical models, which is probably a good thing: when experienced bank officers rated the creditworthiness of clients, more of their selections resulted in defaults than did those chosen by a statistical model.42 In effect, considerable research indicates that the intuitive judgments of professionals often don't add much beyond what we would get from just relying on statistics. In fact, for most of the decisions investigated, intuitive judgments are worse. But we are still very confident in our intuitive decision making.
Why are these expert judgments so poor? Some things are difficult to predict because the information we have isn't very good. For example, there may be no reliable test available to determine if a person has a certain psychological or physical disorder (of course, we usually try to make the prediction nonetheless). In other cases, the information we have is useful, but we may misinterpret or misuse that information (e.g., we often overvalue less important information and undervalue more important data). In addition, if we have to make a large number of decisions, as in the admissions committee case, we may not apply our decision strategy consistently. We're not machines—we have our days. Sometimes we're bored, sometimes we're distracted, and sometimes we're tired. As a result, we may make different decisions at different times, and that inconsistency increases our decision errors.43 Statistical models, on the other hand, don't get tired, bored, or distracted—they always apply the same decision rule, time after time.
And so, many of our decisions would be more accurate if we relied on statistical prediction rather than intuitive judgments. Of course, I'm not advocating that we never rely on professional judgment. We obviously need the advice of doctors, lawyers, and other professionals for many of the decisions we face in life. Doctors have expert knowledge of current medical practices that can save our lives. But we have to recognize the limits in our ability to predict. As we've seen, predicting many different types of future events is very difficult, especially if they involve human behavior. The research indicates that intuitive judgments do not provide great insight into these decisions. While many professionals believe they have expert insight that allows them to make these predictions, the fact is, relying on statistical prediction would result in better decision making. As psychologist Stuart Sutherland has said, “Suspect anyone who claims to have good intuition.”44
JUDGMENTS ABOUT INDIVIDUALS VERSUS GROUPS
You may have heard the phrase, “Statistics don't apply to the individual.” We may know, for example, that 70 percent of people with a certain disease will die within a year, but that doesn't tell us whether a specific person with the disease will die. Or we may hear that 60 percent of people coming from a certain socioeconomic background will commit a crime, but, once again, we don't know if a specific individual with that background will turn to crime. But remember, we have an inherent desire to predict things. As a result, many people, including professionals, believe they can use their intuitive insight to make predictions about an individual's behavior.
Take, for instance, the field of clinical psychology. Some clinical psychologists claim that their training gives them unique insight into how an individual will act, beyond what we can get from general statistics. They're routinely brought into our courtrooms to provide expert testimony on an individual's psychological state—and they make their pronouncements with a great deal of confidence.45 The problem is, the field of psychology, and the social sciences in general, don't give us that kind of information. Psychology does not allow us to make definitive predictions about a single individual; instead, it indicates the tendencies that exist in a group of individuals.46 As a result, intuitive judgments about individuals are frequently in error. The best information we have available to make such judgments is, once again, general statistics.
How do we know that clinical prediction is no better than just relying on statistics? There's no evidence that years of experience as a psychotherapist lead to better patient outcomes. Also, studies have found that licensed clinical psychologists do no better than unlicensed practitioners (e.g., social workers).47 In fact, psychologist Robyn Dawes argues that “the effectiveness of therapy is unrelated to the training or credentials of the therapist. We should take seriously the findings that the best predictors of future behavior are past behavior and performance on carefully standardized tests, not responses to inkblot tests or impressions gained in interviews, even though no prediction is as good as we might wish it to be.”48
The bottom line is, we can be reasonably confident only in our aggregate predictions; that is, how a group of people will tend to behave. Any attempt to predict the behavior of a single individual is open to so much error and uncertainty that, either we should not do it at all, or it should be done with strong caveats.49 As Dawes states, “A mental health expert who expresses a confident opinion about the probable future behavior of a single individual (for example, to engage in violent acts) is by definition incompetent, because the research has demonstrated that neither a mental health expert nor anyone else can make such a prediction with accuracy sufficient to warrant much confidence.”50 Yet such opinions are given every day in our courts of law.
Since psychology finds general tendencies in groups of people and doesn't allow us to accurately predict what an individual from the group will do, the conclusions discussed here relate to our general tendencies. When I say we're risk averse for gains and risk taking for losses, that we search out confirming evidence, or that we see associations that are not there, I mean that there's a tendency for us to act in these ways. But we can't predict with certainty how any one of us will act, no matter how hard we try. The best we can do is make probabilistic assessments based upon general statistics.51 While statistics don't apply to the individual, they allow us to say things like, “Based on past statistics, there's a 70 percent chance that a person with this disease will die within a year.” It's not perfect, but it's the best we can do. Anything else, and we're just fooling ourselves.