Simplify, Simplify, Simplify
—Henry David Thoreau
Our decisions can be quite complex. In fact, if we wanted to maximize the accuracy of our judgments, we would have to gather an enormous amount of information. Just consider the decision to get a new job. To maximize the enjoyment, fulfillment, and financial rewards from a new position, we would need to gather data on the type of work involved in a variety of different careers, the educational requirements for those careers, the salaries offered, and on and on. After we picked a career, we would have to investigate all the companies in the field that we could work for. As you can gather, if we did a thorough search to maximize our decision accuracy, we'd spend more time deciding where to work than actually working. We can't live our lives like that. Thus, we use heuristics when we make our decisions.
Heuristics are general rules of thumb that we use to simplify complicated judgments. These simplifying strategies can be quite beneficial: they reduce the time and effort required to make a decision, and they often result in reasonably good decisions. While heuristics give approximate, rather than exact, solutions to our problems, approximate solutions are often good enough. The problem is, heuristics can also lead to systematic biases that result in grossly inaccurate judgments. So let's look at a few of the heuristics we commonly employ, and the biases that arise from their use.1
OF COURSE IT'S THE SAME—IT LOOKS THE SAME, DOESN'T IT?
Imagine that you just met Steve, and after talking with him for a time, you develop a thumbnail sketch of his personality. He seems very helpful, but somewhat shy and withdrawn. It also appears that he likes things orderly and has a passion for detail. What do you think is Steve's occupation? Given a choice between a farmer, a salesperson, an airline pilot, a librarian, or a physician, most people say librarian.2 Why? Steve's characteristics are similar to our stereotypical view of librarians. We often make judgments based on similarity. If A is similar to B, we think A belongs to B. In effect, we think that like goes with like. This strategy is known as the “representativeness” heuristic, because we're making our judgment based on the degree to which A is representative of B.
This heuristic works quite well for many decisions—things that go together are often similar. However, it also causes us to overlook other relevant data, and thus can lead to decision errors. For example, when considering our judgment of Steve's occupation, we overlook the fact that, in any given town, there are many more stores than there are libraries. There are therefore many more salespeople than librarians. Even though you may think that salespeople usually aren't shy and withdrawn, given their much greater number, there are likely to be many who are. In fact, there are likely to be more shy salespeople than there are librarians, so a better answer would be salesperson. But we don't pay attention to that background statistic. Instead, we base our judgment on an ambiguous personality description because we think it's representative of a librarian.
Believing that like goes with like also leads us mistakenly to think that one thing causes another. Why? We think that effects should resemble their causes. This has resulted in some pretty strange medical practices over the years. At one time, ground-up bats were prescribed for vision problems in China because it was mistakenly assumed that bats had good vision. In Europe, fox lungs were used for asthmatics because it was thought that foxes had great stamina. Certain alternative medical practices prescribe raw brains for mental disorders.3 Much of psychoanalysis follows a similar line of thinking. For example, psychoanalysts maintain that a fixation at the oral stage (the breast) when one is young will lead to a preoccupation in adult life with the mouth, resulting in smoking, kissing, and talking too much.4 The idea that like causes like is also a fundamental feature of astrology, where people born under a specific sign are believed to have certain personality characteristics. If you're born under Taurus (the bull), you're thought to be strong-willed; under Virgo (the virgin), you're thought to be shy. There is no physical evidence for these beliefs, but the causes and effects have similar characteristics.
And so, basing judgments on similarity can result in a number of bizarre beliefs. Why? When we use the representativeness heuristic we typically ignore other potentially relevant information that should influence our decision. Here are some important decision errors that we fall prey to because of this simplifying strategy.
Neglecting Base Rates
Do you remember the virus test mentioned earlier? Your doctor gave you a screening test for a certain type of virus, and the results came back positive, indicating that you have the virus. How concerned should you be? What if your doctor said that the test correctly identifies the virus 100 percent of the time in people who actually have it, that it incorrectly indicates the virus in 5 percent of the people who don't have it, and that about one in five hundred people carry the virus?
So what's the probability that you actually have the virus? Many people say it's around 95 percent. Recall that the right answer is only around 4 percent! How can that be? Let's use a little logic and number crunching. If one out of five hundred people has the virus, the other 499 don't. However, if the test indicates the virus in 5 percent of the people who don't have it, it will flag about twenty-five of the 499 virus-free individuals as infected (0.05 times 499). This 5 percent is called the false positive rate, because the test positively identifies a person as having the virus when, in fact, she doesn't. As a result, the test indicates that twenty-six people (twenty-five wrong and one right) out of five hundred have the virus when only one actually has it. One in twenty-six is about 4 percent. So even though the test says you have the virus, there's only about a 4 percent chance you do.5
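If you want to check the arithmetic yourself, here is a minimal sketch in Python using only the numbers given above (one infected person per five hundred, a 100 percent true positive rate, and a 5 percent false positive rate); the variable names are just for illustration.

```python
# Counting argument for the virus test, using the numbers from the text.
population = 500
infected = 1                      # base rate: 1 in 500
healthy = population - infected   # 499 people without the virus

true_positive_rate = 1.00         # the test always flags a real infection
false_positive_rate = 0.05        # it also wrongly flags 5% of healthy people

true_positives = infected * true_positive_rate       # 1 correct positive
false_positives = healthy * false_positive_rate      # about 25 incorrect positives

p_virus_given_positive = true_positives / (true_positives + false_positives)
print(round(p_virus_given_positive, 3))              # 0.039, roughly 4 percent
```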
Don't feel foolish if you thought the answer was close to 95 percent. When a similar problem was given to sixty doctors, medical students, and house officers at four Harvard Medical School teaching hospitals, the answer given most frequently was 95 percent. About half of the medical practitioners said 95 percent, while only eleven gave the right answer.6 Even medical professionals fall prey to judgment errors that relate to their work. As it turns out, intelligent people are usually not trained to think about issues like these in the right way.
Most predictive tests involve some error. While the virus test indicated a person has the virus when she actually has it 100 percent of the time (the true positive rate), it also indicated a person has the virus when she doesn't 5 percent of the time (the false positive rate). It would be one thing if a test were perfect in prediction, but the overall accuracy is almost never 100 percent. Thus, we first have to consider the base rate—the background statistic—which indicates how often the event occurs. We normally don't think about this background statistic, but it's crucial information. In our example, one in five hundred has the virus, which is a base rate of only 0.2 percent. Next, that base rate should be adjusted given the result and “diagnosticity” of the test. To evaluate a test's diagnosticity, we have to compare the true positive and false positive rates. In the virus example, the true positive was 100 percent while the false positive was 5 percent, so we should adjust the base rate by a factor of twenty (100 percent ÷ 5 percent). This number indicates how much information we get from a test—the higher the number, the more the test results should influence our judgment.7
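The same answer falls out of Bayes' rule in its odds form, where the base rate, expressed as odds, is multiplied by the diagnosticity factor of twenty. Here is a brief sketch using the numbers above:

```python
# Odds-form version: posterior odds = prior odds x diagnosticity (the likelihood ratio).
prior_odds = 1 / 499                 # 1 infected person for every 499 healthy people
diagnosticity = 1.00 / 0.05          # true positive rate / false positive rate = 20

posterior_odds = prior_odds * diagnosticity            # about 0.04
posterior_probability = posterior_odds / (1 + posterior_odds)
print(round(posterior_probability, 3))                 # 0.039, the same 4 percent as before
```

A diagnosticity of twenty sounds impressive, but multiplied by very long prior odds it still leaves only a small probability of infection.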
The diagnosticity of a test is extremely important when we make decisions based on test information. For example, many people rely on lie detector tests. Police and lawyers use them in criminal investigations, and the FBI uses them to screen employees.8 However, the diagnostic value of a lie detector test has been estimated to be as low as two to one.9 As we saw in the medical example, a twenty to one diagnosticity yielded only a 4 percent probability of having a virus, given that the base rate of infection was very low. A lie detector test is much less reliable, indicating that we get little useful information from polygraphs. And yet, lawyers, police, and federal agencies place great emphasis on their results (thankfully, they're not admissible in a court of law). In fact, since the base rate of being a criminal is usually quite low, some argue that the only time you should take a lie detector test is when you're guilty. Why? When the base rate is low, and there's a significant false positive rate, there can be many more cases where the test says guilty for an innocent person than for a guilty person. In effect, there's a chance you may beat the test if you're guilty, while there's a significant chance of being found guilty when you're innocent.
Corporate leaders are also not immune to base rate neglect. For example, auditors use bankruptcy prediction models when they decide the type of audit opinion to report. One study told auditors that a bankruptcy model had a 90 percent true positive and a 5 percent false positive rate, and that about 2 percent of all firms fail. Given this information, there would be a 27 percent probability that a firm will go bankrupt if the model predicts bankruptcy. However, the average probability estimated by audit partners was 66 percent, and their most common response was 80 percent. While these experts seem to perform a bit better than novices, they still do not appreciate the full significance of base rates when making probability decisions.10
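The 27 percent figure comes from the same kind of calculation; here is a quick sketch with the study's numbers (a 2 percent failure base rate, a 90 percent true positive rate, and a 5 percent false positive rate).

```python
# Probability a firm actually fails, given that the model predicts bankruptcy.
base_rate = 0.02                 # about 2 percent of all firms fail
true_positive_rate = 0.90
false_positive_rate = 0.05

fails_and_flagged = base_rate * true_positive_rate              # 0.018
healthy_and_flagged = (1 - base_rate) * false_positive_rate     # 0.049

p_fail_given_flagged = fails_and_flagged / (fails_and_flagged + healthy_and_flagged)
print(round(p_fail_given_flagged, 2))   # 0.27, well below the partners' average estimate of 66 percent
```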
If base rates are so significant, why do we ignore them? Representativeness is one reason. The test tells us that we have characteristics similar to people with the virus, or that a firm is similar to other firms that have failed, and so we focus on that information. But we may ignore base rates for other reasons as well. Because we are storytellers, not statisticians, we think that background statistics aren't very important. But they are! So what should we do? For crucial decisions, we may want to formally calculate the probabilities. Even if we don't go through formal calculations, however, just knowing that we should pay attention to the background statistics should help us arrive at more informed judgments.
Disregarding Regression to the Mean
At the 2000 British Open, Tiger Woods started the last day with a six-stroke lead—but David Duval was closing fast. He made four birdies in five holes, and was within three strokes of Tiger when the announcers exclaimed, “Duval is on fire—it looks like he's going to catch Woods.” But was it realistic to think that Duval could keep up his blistering pace? If he had kept playing at that level, he would have ended the day with a score of 59, unheard of in the world of professional golf. So it's extremely unlikely Duval could have done it—but the announcers didn't consider that fact. In effect, they didn't take into account a statistical concept known as regression to the mean.
Extreme values of any measurement are typically followed by less extreme values. While very tall parents are likely to have tall children, those children are usually not as tall as their parents; instead, they're closer to the average height of people in general (i.e., they “regress” to the mean of the population).11 Similarly, if Duval is making more birdies than normal now, it's likely he will regress to his average and not make them later on in the game.12 But we often don't consider that fact; rather, we think that he's on a hot streak (or he's got a “hot club”).
So what happened at the British Open? Woods won by eight strokes. Interestingly, Woods started the day ahead by six strokes after playing three rounds, which means he was better than the other players by an average of two strokes a day. After the fourth round, he added another two strokes and won by eight. While the numbers don't always work out this simply (e.g., players can have a bad day, as Greg Norman found out when he blew a large lead at the final round of the Masters in 1996), it's very unrealistic to assume that a player's performance on a short sequence of holes (e.g., Duval's four birdies in five holes) will continue throughout the match. It makes much more sense to assume that his performance will regress back to his average performance.
Our failure to understand regression to the mean can be detrimental to learning. For example, flight instructors in one study noticed that when they praised a pilot for an exceptionally smooth landing, the pilot usually had a poorer landing on the next flight. On the other hand, criticism after a rough landing was usually followed by an improvement on the next try. The instructors concluded that verbal rewards are detrimental to learning, while verbal punishments are beneficial. But are punishments really better than rewards for learning? It's more likely that we would get such a sequence of events because of regression to the mean.13
An often-used management practice, management by exception, is also subject to this bias. With this procedure, managers intervene when very high or low employee performance occurs. Management may therefore attribute any subsequent change in performance to their intervention, when the change may simply be due to employees regressing back to their average performance. Or consider the Sports Illustrated jinx. A sports figure often makes the cover of Sports Illustrated when he has an outstanding year. In the following year, his performance typically drops off, leading many to believe that making the cover is a curse. But it's just regression to the mean—any outstanding year will likely be followed by one that is not so stellar.
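Regression to the mean is easy to see in a simulation. Here is a minimal sketch, assuming (purely for illustration) that each performance is a person's stable skill plus random luck; the specific numbers are arbitrary.

```python
# Simulate two attempts for many individuals, where each attempt = skill + luck.
import random

random.seed(1)
n_people = 10_000
skill = [random.gauss(0, 1) for _ in range(n_people)]

def attempt(s):
    return s + random.gauss(0, 1)     # stable skill plus random luck

first = [attempt(s) for s in skill]
second = [attempt(s) for s in skill]

# Select only those who did exceptionally well on the first attempt.
top = [i for i in range(n_people) if first[i] > 1.5]
avg_first = sum(first[i] for i in top) / len(top)
avg_second = sum(second[i] for i in top) / len(top)

print(round(avg_first, 2), round(avg_second, 2))
# The second attempt is, on average, noticeably closer to the mean,
# with no praise, criticism, jinx, or intervention involved.
```

The exceptional first scores are partly luck, and luck does not repeat, so the second scores drift back toward the average on their own.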
Disregarding Sample Size
Suppose your town has two hospitals: about forty-five babies are born each day in the larger hospital, while fifteen are born in the smaller one. As you know, around 50 percent of all babies are boys, but the exact percentage varies from day to day; sometimes it may be higher, and sometimes lower. Over the last year, each hospital recorded the days on which more than 60 percent of the babies born were boys. Which hospital do you think recorded more such days—the larger hospital, the smaller hospital, or would they be about the same (i.e., within 5 percent of each other)?14
When asked this question, the majority of people think that both hospitals would have about the same number of days. However, we should expect more 60 percent days in the smaller hospital. Why? There's a greater variability of outcomes in small samples, so there's more chance that seemingly unrepresentative events will occur. But we don't recognize the importance of sample size when making our judgments. Instead, we erroneously believe that small samples are as representative as large samples.
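You can verify the hospital intuition directly. Here is a short sketch that treats each birth as an independent 50/50 event, so the number of boys on a given day follows a binomial distribution:

```python
# Probability that strictly more than 60 percent of a day's babies are boys.
from math import comb

def prob_more_than_60_percent_boys(births):
    favorable = sum(comb(births, k) for k in range(births + 1) if k / births > 0.6)
    return favorable / 2 ** births

print(round(prob_more_than_60_percent_boys(15), 2))   # about 0.15 for the small hospital
print(round(prob_more_than_60_percent_boys(45), 2))   # about 0.07 for the large hospital
```

Over a year, then, the smaller hospital should record roughly twice as many such days.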
If you flip an unbiased coin six times, which of the following sequences do you think is more likely to occur?
(A) H T H T T H
(B) H H H T T T
Is it A, B, or do they have the same probability? Most people say A, when, in fact, A and B are equally likely. Why is that? Since each coin flip is independent of the next, each has a 1/2 probability of being a head or a tail. To get the probability of each specific sequence, we have to multiply 1/2 by itself six times (the number of times we flip). The result is 1/64, or 1.5 percent, for either sequence. However, we tend to believe that even short sequences of a random process will be representative of that process. Thus, we think that every part of the sequence must appear random, and since random processes switch between heads and tails, option A seems more likely.15
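If you doubt that the two sequences are equally likely, a brute-force check is easy: list all sixty-four possible outcomes of six fair flips and count how often each specific sequence appears.

```python
# Enumerate every possible outcome of six coin flips; each is equally likely.
from itertools import product

outcomes = list(product("HT", repeat=6))
print(len(outcomes))                               # 64 possible sequences
print(outcomes.count(tuple("HTHTTH")) / 64)        # 0.015625, i.e., 1/64
print(outcomes.count(tuple("HHHTTT")) / 64)        # 0.015625, exactly the same
```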
As you can see, we have a mistaken belief that small samples will mimic the population more closely than they actually do. We consequently think that small samples are as reliable as large samples when making our judgments. This can lead to all sorts of decision errors. One study showed, for example, that students often select their courses by relying on the recommendations of a couple of students, rather than on the formal written evaluations of dozens of students. Why? Students focus on a few personal accounts, and ignore the unreliability of that small sample size.16 But small samples are less likely to be representative of the population—a couple of students in class may have very different views than the class as a whole. Realizing that small samples are not as representative as large samples will go a long way in helping us form better beliefs and decisions.
Conjunction Fallacy
You just met Linda, who is thirty-one years old, single, outspoken, and very bright. As a student, she majored in philosophy, was deeply concerned with issues of discrimination and social justice, and participated in antinuclear demonstrations. Which do you think is more likely? (A) Linda is a bank teller, or (B) Linda is a bank teller and is active in the feminist movement.17 Or consider this decision: What's more likely? (A) an all-out nuclear war between the United States and Russia, or (B) an all-out nuclear war between the United States and Russia in which neither country intends to use nuclear weapons, but both sides are drawn into the conflict by the actions of a country such as Iraq, Libya, Israel, or Pakistan.18
If you're like most people, you said B in both cases. In fact, nearly nine out of ten people believe it's more likely that Linda is a feminist bank teller than only a bank teller. Likewise, more people believe the war drawn in by a third country (option B) is more likely than the simple all-out war (option A). However, these decisions violate a fundamental rule of probability. That is, the conjunction, or co-occurrence, of two events (e.g., bank teller and feminist) cannot be more likely than either event alone. There have to be more bank tellers than bank tellers who are feminists, because some tellers aren't feminists.19 But we think the description of Linda is representative of a feminist, and so we rely on that similarity information when making our decision. We have to keep in mind, however, that as the amount of detail in a scenario increases, its probability can only decrease. If not, we'll fall for the conjunction fallacy, which can result in costly and misguided decisions. As psychologist Scott Plous indicates, the Pentagon has spent considerable time and money developing war plans based upon highly detailed, but extremely improbable, scenarios.20
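The conjunction rule is worth seeing with concrete numbers. In the sketch below the counts are entirely made up; the only thing that matters is that feminist bank tellers are, by definition, a subset of bank tellers.

```python
# Hypothetical counts: a conjunction can never be more probable than either event alone.
population = 100_000
bank_tellers = 200                  # made-up number of bank tellers
feminist_bank_tellers = 50          # a subset of the bank tellers

p_teller = bank_tellers / population
p_feminist_teller = feminist_bank_tellers / population

print(p_teller, p_feminist_teller)          # 0.002 versus 0.0005
print(p_feminist_teller <= p_teller)        # True, no matter what counts you plug in
```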
Stereotyping
Many, if not most, people judge others by using stereotypes. A stereotype is a type of simplifying strategy, because when we use stereotypes we don't spend much time thinking about a person to decide how she will act. We just pigeonhole the person as a certain type, and immediately attribute a variety of characteristics to her.21 Stereotypes are then perpetuated because our confirming bias causes us to notice things that support the stereotype. So, if we buy in to the view that blonds are dumb, or that the Irish love to drink, we are more likely to notice those people who conform to our preconception and ignore those who don't. Our stereotypes are also reinforced because we typically label different groups, and by using labels we see them as being more different than ourselves. One study found, for example, that if we simply label short lines as A and longer lines as B, people think there is a bigger difference in the length of the lines than if no labels are given at all.22 Imagine what labels do to our subjective judgments of others.
While stereotypes are simple to use, they can lead to many decision errors. People are very complex creatures. Remember the bell curve? There's a distribution around most things. Within a certain group of people, there will be very intelligent individuals and some that are not very bright, some that like to drink and some that don't. In effect, there is often a much bigger difference in the traits and characteristics between two individuals within a group than there is between groups. Remember, the smaller the sample size, the greater the variability. Pick any one individual from a group, and you can get someone with characteristics that are very different from your preconceived notions of the group. As a result, we need to pay particular attention to our use of stereotypes—they can lead to a number of erroneous judgments concerning the attributes of others.
IT'S WHAT COMES TO MIND
Which do you think is a more likely cause of death in the United States: being killed by falling airplane parts or being eaten by a shark? Most people say shark, but you're actually thirty times more likely to die from falling airplane parts!23 Or consider the following pairs of potential causes of death: (1) poisoning or tuberculosis, (2) leukemia or emphysema, (3) homicide or suicide, (4) all accidents or stroke. Which cause do you think is more likely in each pair? The second cause is more common, but most people choose the first.24 In fact, we think we're twice as likely to die from an accident as from a stroke, when we're actually forty times more likely to die from a stroke.25
Our errors in judging these frequencies are due to a heuristic called availability. When using this heuristic, the estimated frequency or probability of an event is judged by the ease with which similar events can be brought to mind. For example, is it more likely that a word starts with the letter k, or that a word has k as its third letter? Most of us think that k appears more often as the first letter, even though there are twice as many words where k is the third letter. Why do we make the error? It's easier to search for words that begin with k, while it's tougher to bring to mind words with k in the third position.26 The availability heuristic often serves us well, because common events are usually easier to remember or imagine than uncommon events. However, sensational or vivid events are also easily remembered, and so availability can cause us to overestimate those events.
Suppose you're taking a seven hundred fifty-mile plane trip and your friend drives you twenty miles to the airport. When he drops you off at the terminal, he says, “Have a safe trip.” Rarely do you tell him to have a safe trip back home, but, ironically, your friend is three times more likely to die in a car crash on his return trip than you are on your plane trip.27 Driving a car is more dangerous than flying, yet phobias about driving are rare while flying phobias are ubiquitous. Images of plane crashes more easily come to mind given the attention that the media gives to them. In 1986 the number of Americans traveling to Europe dropped sharply because of a few publicized plane hijackings. However, Americans living in cities were in greater danger by staying home. Just consider the effects that the hijackings on 9/11 had on the travel industry in the United States. People stayed closer to home, driving instead of flying, which actually increased their risk of death.
When parents were asked what worried them most about their children, high on the list was abduction, an event that has only a one in seven hundred thousand chance of occurring! Parents were much less worried about their children dying in a car crash, which is well over one hundred times more likely than abduction.28 Why? Abduction cases are given considerable attention by the media, while car crashes are not. In the mid 1980s, rumors were spread that seventy thousand children had been abducted around the country. It turned out that the figure referred to runaways and children taken by parents involved in custody battles. In fact, the FBI recorded only seven abduction cases by strangers nationwide at that time.29 But sensational stories get airtime, distorting our evaluation of the risk.
Availability and the Media
As we saw earlier, the beliefs we form are often linked to media coverage. A major reason for this influence is availability. Consider, for example, what happened when George Bush (senior) became president and declared in his first televised address that “the gravest domestic threat facing our nation today is drugs.” In the next few weeks the number of drug stories on network newscasts tripled. A survey conducted by the New York Times and CBS two months into the media barrage indicated that 64 percent of people thought that drugs were the country's greatest problem. The number was only 20 percent five months earlier.30
Research demonstrates that public opinion is linked to media coverage. In one study, the number of stories that included the words drug crisis was analyzed, along with changes in public opinion, over a ten-year period. Drugs were sometimes ranked as the country's most important problem by only one in twenty Americans, while at other times nearly two out of three thought it was our most pressing issue. It turned out that the variations in public opinion could be explained by changes in the media coverage.31 Why is that? When the media plays up drug stories, they more readily come to mind—they're more available to us. And so, our beliefs can easily be manipulated by politicians, or any other special interest group the media decides to cover, with far-reaching effects.
In an effort to show how pervasive crack cocaine had become in our cities, Bush held up a plastic bag marked evidence during his television address and said, “This is crack cocaine seized a few days ago by drug enforcement agents in a park across the street from the White House.” The country was aghast to learn that drugs were being sold right next to the White House. However, the Washington Post later learned that Bush asked DEA agents to find crack in Lafayette Park. When they couldn't find a dealer there, they recruited a young crack dealer from another part of town to make a delivery across from the White House. Unfamiliar with the area, the dealer even needed directions to find the park.32 The push was on to make the public view crack and other drugs as a serious national problem, when, in fact, drug use in the United States had actually declined over the previous decade. The media highlighted crack as “the most addictive drug known to man,” even though the Surgeon General reported that less than 33 percent of the people who try crack become addicted, while 80 percent of people who smoke cigarettes for a certain length of time get addicted.33
What was the result of such political and media sensationalism? By the end of the 1980s, Congress mandated much harsher prison sentences for the possession of crack cocaine than for the possession of cocaine powder. Since a greater proportion of African Americans used crack (whites used more cocaine powder), by the mid 1990s, three out of four people in prison for drug-related crimes were African American, even though many more whites used cocaine.34 Consider, also, the media coverage of the anthrax scare after 9/11. Millions of dollars were spent to combat anthrax, which affected only a small number of people. At the same time, a large number of deaths were caused by other infections.35 And so, while the availability heuristic can provide fairly accurate probability estimates, it can also lead to judgment biases that affect many aspects of our lives. The moral of the story—whenever possible, pay attention to the statistics—not the story!
ANCHORS AWAY
It is well known that many cases of management fraud go undetected. How prevalent do you think executive-level management fraud is in public companies? Do you think the incidence of significant fraud is more than ten in one thousand firms (i.e., 1 percent)? First, answer yes or no. After you answer, estimate the number of firms per one thousand that you think have significant executive-level management fraud.36
What if I first asked whether you thought there were more than two hundred fraud cases in one thousand firms? Would that change your overall estimate of the number of firms with fraud? Most people would say, “Of course not—it won't have any effect. I'm just indicating whether my estimate is above or below that number.” But, in fact, changing that arbitrary number does affect judgments. For example, when auditors responded to these two conditions, the average number of frauds was sixteen in the first condition (i.e., ten out of one thousand), and forty-three in the second condition (i.e., two hundred out of one thousand). While ten and two hundred should be irrelevant, professionals' judgments of significant executive-level fraud almost tripled when the higher number was given. Why? They were using a heuristic called anchoring and adjustment. When using this heuristic, we select an initial estimate or anchor, and then adjust that anchor as new information is received. Problems arise when we use an irrelevant anchor, or when we make an insufficient adjustment from the anchor.
Now you may say, maybe the researcher was giving the auditors hints about the level of fraud with the initial question, so their estimates should be affected. But that's not the case. Anchoring and adjustment is such a powerful phenomenon that it affects our judgments even when we know that the anchor is totally meaningless. For example, psychologists Amos Tversky and Daniel Kahneman asked people to estimate the percentage of African nations in the United Nations.37 Before answering, a wheel with numbers one through one hundred was spun and the participants were asked whether their answer was higher or lower than the number on the wheel. Subjects were influenced by that number even though they knew it was determined totally by chance. For example, the median estimates were twenty-five and forty-five for those groups who spun the numbers ten and sixty-five, respectively.
As another example, consider the following case. Without actually performing any calculations, give a quick (five second) estimate of the following product:
8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 = ?
What number did you get? When people responded to this problem, their average answer was 2,250. However, another group of individuals was asked to respond to:
1 × 2 × 3 × 4 × 5 × 6 × 7 × 8 = ?
Their average was 512, even though the numbers are the same. Why? We anchor on the initial numbers, and since those numbers are higher in the first case, our response is much higher.38
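For reference, the actual product is 40,320, so both groups underestimated badly: anchoring on the first few numbers and adjusting insufficiently leaves the estimate far too low. A one-line check:

```python
# The true value of 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1.
from math import factorial

print(factorial(8))    # 40320, versus average estimates of 2,250 and 512
```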
Anchoring simplifies our decisions. It allows us to focus on a small amount of information at one time, as opposed to simultaneously considering all the information relevant to a decision. We first pay attention to some initial data, and then adjust our initial impression for any new information we receive. While this approach may be appropriate for many decisions, it can lead to errors when we anchor on initial data that is totally irrelevant to the decision at hand. And even if the initial data is relevant, we often pay too much attention to that data, and thereby fail to make sufficient adjustments when new information becomes available.
Anchoring can affect judgments in many aspects of our personal and professional lives. The prices that we negotiate in our financial decisions are extremely susceptible to anchoring effects. For example, one study found that when retailers and manufacturers negotiated the price of auto parts, an irrelevant initial anchor of $12 resulted in a price of $20.60, while an irrelevant anchor of $32 yielded a price of $33.60.39
How much are you going to pay for your new house? One study investigated the appraisal values that real estate agents attach to houses on the market.40 The agents were given tours of a house and a ten-page packet of information that included all the information normally used to value a house. They were also given different initial listing prices (which should be irrelevant), and were asked to judge what they thought the house was worth. The initial listing prices ranged from $119,900 to $149,900, and the agents' appraisal values rose correspondingly, from $114,204 to $128,754. In effect, you could pay $14,550 more for a house just because of an irrelevant initial anchor used by a real estate agent.
Anchoring can also affect your stock decisions. Remember my colleague who thought he could use fundamental analysis to beat the market? He once told me that if a company was selling at $25, and then drops to $3 a share, it's a good investment. This is an anchoring problem. The price we pay for a stock often becomes our anchor when evaluating that stock in the future. In fact, we don't even have to buy the stock, we just have to know what it was selling for at a certain point in time. Just consider how people reacted to the stock price changes of Enron or WorldCom. Enron's stock was selling at close to $90 a share in the year 2000. The price dropped to around $55 early in 2001, and when compared to its high, the stock looked cheap. Many people rushed in to buy at that price, and they looked pretty smart when the price rebounded to over $60. But we all know what happened. By 2002, Enron's stock was selling for twelve cents a share!41 Using anchoring in our decision processes can be quite costly.
Anchoring can have even more severe consequences. Remember the study that investigated the verdicts handed down by juries when told to consider the harshest or most lenient verdict first? Considering the harshest verdict first (standard practice in murder trials) resulted in harsher verdicts than when juries considered the most lenient verdict first.42 Our simplifying strategies can lead to a number of disastrous judgments.
SIMPLIFYING ISN'T ALL BAD
As you can see, we simplify our decisions, and simplifying can get us into trouble. But the picture isn't all gloomy. We obviously make many correct decisions and hold many correct beliefs. If we didn't, we wouldn't have survived very long. In fact, simplifying strategies serve us very well in many instances. When we use availability, we don't conduct an exhaustive search of all the relevant information, we just retrieve data from our memory that's the easiest to remember. This usually works well because we often retrieve common things, and common things are more likely to occur. We think the probability of getting a cold is greater than getting cancer because we see more people with colds, which leads to a correct judgment. However, we also easily retrieve sensational items that are not common, and so we judge their likelihood to be greater than they actually are. As a result, we overestimate the danger from anthrax, drugs, crime, and a host of other risks because the media emphasize these threats, making them foremost in our minds.
Judging by representativeness also can work well, because similar things often go together. However, if we focus only on similarity, we ignore other relevant information that should affect our decision, such as base rates and the reliability of the data. And so, these simplifying strategies can lead us astray. This can happen in the personal decisions we make every day of our lives, as well as in our professional decisions that have severe consequences for large numbers of people. One encouraging point: research suggests that when professionals perform job-related tasks, the biases evident in their judgments are usually not as great as when novices perform abstract tasks.43 Expert knowledge in decision-making tasks appears to reduce, but not eliminate, biased judgments. The bottom line is, we need to be aware that we use simplifying strategies when making our decisions, and that they can lead to problems, if we're not careful. Recognizing that fact is the first step to correcting a number of our decision errors.