16 Why Good People Make Poor Decisions

Poor outcomes are different from poor decisions. The best decision possible given the knowledge available can still turn out unhappily. I am interested only in the cases where we regret the way we made the decision, not the outcome. I define a poor decision—where we regret the process we used—in the following way: A person will consider a decision to be poor if the knowledge gained would lead to a different decision if a similar situation arose. Simply knowing that the outcome was unfavorable should not matter. Knowing what you failed to consider would matter.

Those who favor analytical approaches to decision making believe that poor decisions are caused by biases in the way we think. Naturalistic decision-making researchers disagree. We tend to reject the idea of faulty reasoning and try to show that poor decisions are caused by factors such as lack of experience. The decision bias explanation is the more widespread and popular view, so we will examine it first.

Are Poor Decisions Caused by Biased Thinking?

Kahneman, Slovic, and Tversky (1982) present a range of studies showing that decision makers use a variety of heuristics, simple procedures that usually produce an answer but are not foolproof.1 The studies showed that in making judgments, we rely on information that is more readily available and appears more representative of the situation. We usually start analyses with known facts and make adjustments from these.

Kahneman, Slovic, and Tversky designed their studies so that the heuristics they were trying to demonstrate would lead to worse performance. The rationale was that the heuristics were so powerful that their subjects would use them even if it meant more errors. The studies used tasks in which probabilities could be calculated in advance, allowing the researchers to set up objective standards for performance. The research strategy was not to demonstrate how poorly we make judgments but to use these findings to uncover the cognitive processes underlying judgments of likelihood.

Because of this strategy, many professionals have interpreted this research as demonstrating biases, not just heuristics. The research strategy has become known as the “heuristics and biases” paradigm. To date, more than two dozen decision biases have been identified. Many researchers interpret this research as showing that people are inherently biased and will misconstrue evidence. Therefore, decision errors must be caused by these biases. Decision training programs, such as the one presented by Jay Russo and Paul Schoemaker in their book Decision Traps (1989), were designed to reduce the influence of the decision biases. The heuristics and biases school remains active and influential, particularly in the United States and Great Britain, but it has also come under attack.

Lola Lopes (1991) has shown that the original studies did not demonstrate biases, in the common use of the term. For example, Kahneman and Tversky (1973) used questions such as this: “Consider the letter R. Is R more likely to appear in the first position of a word or the third position of a word?” The example taps into our heuristic of availability. We have an easier time recalling words that begin with R than words with R in the third position. Most people answer that R is more likely to occur in the first position. This is incorrect. It shows how we rely on availability.

Lopes points out that examples such as the one using the letter R were carefully chosen. Of the twenty possible consonants, twelve are more common in the first position. Kahneman and Tversky (1973) used the eight that are more common in the third position. They used stimuli only where the availability heuristic would result in a wrong answer. Several studies found that decision biases are reduced if the study includes contextual factors and that the heuristics and biases do not occur in experienced decision makers working in natural settings.2

There is an irony here. One of the primary “biases” is confirmation bias—the search for information that confirms your hypothesis even though you would learn more by searching for evidence that might disconfirm it. The confirmation bias has been shown in many laboratory studies (and has not been found in a number of studies conducted in natural settings). Yet one of the most common strategies of scientific research is to derive a prediction from a favorite theory and test it to show that it is accurate, thereby strengthening the reputation of that theory. Scientists search for confirmation all the time, even though philosophers of science, such as Karl Popper (1959), have urged scientists to try instead to disconfirm their favorite theories. Researchers working in the heuristics and biases paradigm condemn this sort of bias in their subjects, even as those same researchers perform more laboratory studies confirming their theories.

What Accounts for Errors in Natural Decision Settings?

Naturalistic decision-making researchers are coming to doubt that errors can be neatly identified and attributed to faulty reasoning. Jim Reason, at the University of Manchester, finds that the operator of a system who is blamed for an error is often the victim of a series of problems stemming from faulty design and practice (1990). Reason coined the term latent pathogens to refer to all the problems, such as poor design, poor training, and poor procedures, that may go undetected until the operator falls into the trap. It is easy to blame the operator for the mistake, yet all of the earlier problems made the mistake virtually inevitable. David Woods and his colleagues at Ohio State University (1993) assert that decision errors do not exist. If we try to understand the information available to a person, the goals the person is pursuing, and the person's level of experience, we will stop blaming people for making decision errors. No one is arguing that we should not look at poor outcomes. The reverse is true. The discovery of an error is the beginning of the inquiry rather than the end. The real work is to find out the range of factors that resulted in the undesirable outcome.3

To get a better sense of what leads to poor decisions, I reviewed the data my colleagues and I had collected in different domains, covering more than six hundred decision points (Klein, 1993).4 I identified every decision point that resulted in a poor outcome where the decision maker could have known better. I relied primarily on the comments of the decision makers, in the incidents where they admitted that they had done the wrong thing. I categorized a set of twenty-five decisions as errors. This is a small number, so the conclusions are only speculative.

I was able to place the errors into three categories. Of the twenty-five, sixteen were due to lack of experience. For example, a fireground commander failed to call in a second alarm because the fire did not seem large. He did not realize that the balloon construction of the building made it vulnerable to damage to the supporting framework. Now that he knows better, he doubts that he will make the mistake again.

A second cause of poor decisions was lack of information. For example, a flight crew failed to obtain full weather reports prior to takeoff and failed to identify alternate landing sites. When the simulated flight ran into difficulties due to malfunctions thrown at them by the people controlling the exercise, the weather profile was inadequate for selecting an alternate landing site. The third source of poor decisions was mental simulation: the de minimus error. Decision makers noticed the signs of a problem but explained them away. They found a reason not to take seriously each piece of evidence that warned them of an anomaly. As a result, they did not detect the anomaly in time to prevent a problem.5

Example 16.1
The Missed Diagnosis

In the neonatal intensive care unit, the nurse is assigned to a baby who is not in her regular care. She notices that the baby has a distended stomach, blood in the stool, and a 3 cc aspirate. All of these are signs of necrotizing enterocolitis, a condition afflicting premature babies in which the bowel becomes infected. The nurse does nothing, and by the next day the baby is critically ill.

The nurse fails to act because she explains away each symptom. The distended belly reminds her of an earlier case in which the neonatal intensive care unit treated this baby’s sister. The sister also had an unusually large belly, so the nurse classifies the feature as a family trait. The blood in the stool is only a small amount and can be attributed to the baby’s nasogastric tube. Finally, the 3 cc aspirate is small enough that, by itself, it could not be considered unusual.

The weakness is that decision makers can easily dismiss inconvenient evidence, explaining away the early warning signs. These limitations of experience could prevent decision makers from detecting the early signs of a problem, because they might not recognize the anomalies or sense the urgency of the situation. The limitations could lead decision makers to misrepresent the situation, perhaps by explaining away key pieces of information, by failing to consider alternate explanations and diagnoses, or by becoming confused by complexity. Finally, the limitations in experience could make it too hard for decision makers to notice weaknesses in their planned courses of action.

The Effect of Stress on Decision Making

We often blame poor decisions on stress. This is too simplistic. The evidence that supposedly shows that stress results in decision errors is not convincing (Klein, 1996). Reflect back on the fireground commanders, nurses, pilots, and others we have examined who perform so well under extreme time pressure, high stakes, ambiguity, and other features of naturalistic decision making. Remember the study described in chapter 10, showing that chess masters continued to make strong moves even under extreme time pressure, an average of six seconds per move. We should not be so willing to accept the premise that stress results in decision errors.

I am not arguing that stressors have no effect. My claim is that stress does affect the way we process information, but it does not cause us to make bad decisions based on the information at hand. It does not warp our minds into making poor choices. Stressors such as time pressure, noise, and ambiguity result in the following effects:

Under time pressure, we obviously will not be able to sample as many cues. But if our decisions get worse, it is not because a state of stress clouded our minds but because we did not have the chance to gather all the facts. Incidentally, the data show that experienced decision makers adapt to time pressure very well by focusing on the most relevant cues and ignoring the others.

Stressors such as noise may interfere with the ability to rehearse things in working memory. This factor might be particularly disruptive for tasks requiring a lot of mental simulation. The concentration needed to construct a mental simulation of how a plan will be carried out may sometimes require inner speech. However, I do not know of any studies showing that noise and other distractors interfere with mental simulation.

A third effect of stressors is to capture our attention. If we have to adapt to noise, pain, or fear, for instance, then we may need to monitor ourselves. We have to manage our reactions (e.g., hyperventilation) to the stressor. Now we have two things to do: make the decision and cope with the stressor. The more tasks we have to juggle, the worse we generally do.

Stressors should disrupt decision making the most if people use strategies such as a rational choice analysis. Time pressure and ambiguity alone would prevent anyone from carrying out that type of strategy. If we believed that people ordinarily generated option sets and contrasted the options, then we would expect stress to degrade decision making. However, if people rely on recognitional decision strategies, then we would not expect to see much disruption, particularly when the decision makers were reasonably experienced.

The Problem of Uncertainty

One definition of uncertainty (paraphrasing Lipshitz and Shaul, 1997) is “doubt that threatens to block action.” Key pieces of information are missing, unreliable, ambiguous, inconsistent, or too complex to interpret, and as a result a decision maker will be reluctant to act. In many cases, the action will be delayed or will be overtaken by events as windows of opportunity close. Because it is impossible to achieve 100 percent certainty, decision makers must be able to proceed without having a full understanding of events. Some decision makers may be too impetuous, chasing after rumors. Others may require too much information, and as a result they may wait too long to take action.

Uncertainty is the reverse side of the ability to size up situations quickly. To see how doubt can block action, contrast the RPD model with the model of uncertainty in figure 16.1. Everywhere that experience generates recognition of familiarity, uncertainty generates confusion and lack of understanding. Where experience enables decision makers to take action rapidly, uncertainty results in doubt.

Figure 16.1 How uncertainty leads to doubt

In talking about uncertainty, people tend to mix many things together. A review of the literature shows that people discuss uncertainty in terms of risks, probabilities, confidence, ambiguity, inconsistency, instability, confusion, and complexity. They refer to uncertainty about future states, the nature of the situation, the consequences of actions, and preferences. Because so many concepts are packed into the term uncertainty, it is difficult to say how to help people handle uncertainty without first becoming more precise.

Schmitt and Klein (1996) identified four sources of uncertainty:

  1. Missing information. Information is unavailable. It has not been received or has been received but cannot be located when needed.
  2. Unreliable information. The credibility of the source is low, or is perceived to be low even if the information is highly accurate.
  3. Ambiguous or conflicting information. There is more than one reasonable way to interpret the information.
  4. Complex information. It is difficult to integrate the different facets of the data.

We can also identify several different levels of uncertainty: the level of data; the level of knowledge, in which inferences are drawn about the data; and the level of understanding, in which the inferences are synthesized into projections of the future, into diagnoses and explanations of events.

Is uncertainty inevitable? Clearly the technology of the future will dramatically increase the information available, yet we cannot be optimistic that more information will necessarily reduce uncertainty. It is more likely that the information age will change the challenges posed by uncertainty. For one thing, decision makers will still be plagued with missing information.

Previously, information was missing because no one had collected it; in the future, information will be missing because no one can find it. Moreover, improved data collection will likely translate into faster decision cycles. By way of analogy, when radar was introduced into commercial shipping, the intent was to improve safety so that ships could avoid collisions when visibility was poor. The actual impact was that ships increased their speed, and accident rates stayed constant. On the decision front, we expect to find the same thing. Planning cycles will be expedited, and the plans will be made with the same level of uncertainty as before. Moreover, communication technology means that clients expect faster decisions, without the time allowed in the past for thoughtful reflection.

Sometimes it is tempting to believe that we can use information technology to eliminate certain types of uncertainty. For example, an intelligent system could screen all messages to detect inconsistencies and weed these out. This dream is unrealistic. The next generation of computers will not eliminate uncertainty caused by inconsistencies.

Great commanders are able to overcome the problem of uncertainty. Analysis of historical data shows that effective commanders, such as Grant and Rommel, accepted the inevitability of uncertainty. Rather than being paralyzed, their actions blocked by doubt, they possessed the ability to shape the battlefield, acting decisively and prudently at the same time. They were able to force the adversary onto the defensive, shifting the burden of uncertainty. They were able to maintain flexibility without planning out various contingencies (which were sure to become obsolete). On the battlefield, plans are vulnerable to the cascading probability of disruption. If a plan depends on six steps and each has a 90 percent probability of being carried out successfully, many decision makers will feel confident, whereas the actual probability of carrying out the entire plan is just over 50 percent, since the probabilities multiply. Highly successful commanders seem to appreciate the vagaries of chance and do not waste time worrying about details that will not matter. The inference we draw is that although uncertainty is and will be inevitable, it is possible to maintain effective decision making in the face of it.
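
To make the arithmetic behind that figure concrete, here is the calculation, assuming the six steps succeed or fail independently:

\[
P(\text{plan succeeds}) = 0.9 \times 0.9 \times 0.9 \times 0.9 \times 0.9 \times 0.9 = 0.9^{6} \approx 0.53
\]

So a plan whose individual steps each look safe is, taken as a whole, barely better than a coin flip.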

Because uncertainty is inevitable, decisions can never be perfect. Often we believe that we can improve the decision by collecting more information, but in the process we lose opportunities. Skilled decision makers appear to know when to wait and when to act. Most important, they accept the need to act despite uncertainty.

Expertise versus Superstition

I have been arguing that expertise can provide important sources of power other than rational analysis. People with greater expertise can see the world differently. They have a larger storehouse of procedures to apply. They notice problems more quickly. They have richer mental simulations to use in diagnosing problems and in evaluating courses of action. They have more analogies to draw upon.

Expertise can also get us in trouble. It can lead us to view problems in stereotyped ways. The sense of typicality can be so strong that we miss subtle signs of trouble. Or we may know so much that we can explain away those signs, as in the missed diagnosis in example 16.1. In general, these shortcomings seem a small price to pay; however, there may be times when a fresh set of eyes proves helpful.

I am more troubled by the difficulty of learning from experience. We often cannot see a clear link between cause and effect. Too many variables intervene, and time delays create their own complications.6 If managers find themselves having success (getting projects completed on schedule and under budget), does that success stem from their own skills, the skills of their subordinates, temporary good luck, interventions of higher-level administrators, a blend of these factors, or some other causes altogether? There is no easy way to tell. We can learn the wrong lessons from experience. Each time we compile a story about an experience, we run the risk of getting it wrong and stamping in the wrong strategy.

An analogue here is the way historians debate the causes of famous events—for example, the Great Depression. Franklin Delano Roosevelt was elected president in 1932 to help the nation climb back to prosperity, and he took many strong actions. Some historians and economists argue that these actions did the trick; others claim that they made the problem worse. The Great Depression was an event of major importance in our history that has received enormous scrutiny, and we still do not know if Roosevelt’s actions were effective in bringing about economic recovery.

Because of the difficulty of interpreting cause-and-effect relationships, lawmakers cannot achieve high levels of expertise. They can certainly master the procedures of being politicians, for example, getting on the most influential committees, forging ties with lobbyists, doing favors for the right people. Nevertheless, they cannot learn the long-term impacts of the legislation that they consider. They cannot learn the causal dynamics between a piece of legislation and eventual social changes. Their mental models are not flexible or rich. When politicians ask to be reelected because of their experience, they are referring to the efficiency with which they do their job, not their growing wisdom in judging which laws to propose and support.

This brings us to the question of superstition. Many of us consider superstitions as characteristics of primitive cultures that have not been able to figure out cause-and-effect relationships. We hear stories of cultures that never learned the linkage between conception and the birth that occurs nine months later. We hear stories about magical thinking, such as performing rituals to ensure that the crops will be good. Citizens of a rational society should be beyond superstitions.

Yet we follow rituals all the time without any evidence that they work. We pass laws on all sorts of topics without any evidence that they will change behaviors. We encourage unhappy people to seek counseling for a great many problems for which there is no evidence that counseling will provide benefit. Corporations adopt fads to build morale or increase motivation without any evidence that these will work; they reorganize to improve efficiency without any definitive evidence that the new structure will be any better. We read the latest scare reports about foods that are linked to cancers and try to adjust our diets accordingly, even though for most of these reports there is no indication that these linkages will have any practical impacts on life span.

In short, our lives are just as governed by superstitions as those of less advanced cultures. The content of the superstitions has changed but not the degree to which they control us. The reason is that for many important aspects of our lives, we cannot pin down the causal relationships. We must act on faith, rumor, and precedent.

In a domain such as fighting fires, caring for hospitalized infants, or flying an airplane, expertise can be gained. In other domains, such as selecting stocks, making public policy, or raising a child, the time delays are long and the feedback is uncertain. Jim Shanteau (1992) has suggested that we will not build up real expertise when:

  1. The domain is dynamic and keeps changing.
  2. We have to predict human behavior.
  3. There is little chance for feedback.
  4. The task does not have enough repetition to build a sense of typicality.

Under these conditions, we should be cautious about assuming that experience translates into expertise. In these sorts of domains, experience would give us smooth routines, showing that we had been doing the job for a while. Yet our expertise might not go much beyond these surface routines; we would not have a chance to develop reliable expertise.

Lia Di Bello has studied the way people in organizations learn different kinds of complex skills.7 She found that she could distinguish competent workers, who had mastered the routines, from the real experts. If she gave people a task that violated the rules they had been using, the experts would quickly notice the violation and find a way to work around it. They could improvise to achieve the desired goal.

Where does this leave us regarding the growth of expertise? At one extreme is the work of Ericsson and Charness (1994), suggesting that almost anyone can become an expert at almost anything, given enough practice. At the other extreme is the work of people like Russo and Schoemaker, suggesting that all of us are inherently biased and unreliable as decision makers. In between is the suggestion by Shanteau that expertise is more easily acquired in some domains than in others. In short, we do not have answers, but we may have a basis for asking better questions about the way expertise develops.

Applications

One way to improve performance is to be more careful in considering alternate explanations and diagnoses for a situation. The de minimus error may arise from using mental simulation to explain away cues that are early warnings of a problem. One exercise to correct this tendency is the crystal ball technique discussed in chapter 5. The idea is that you look at the situation, pretend that a crystal ball has shown that your explanation is wrong, and try to come up with a different explanation. Each time you stretch for a new explanation, you are likely to consider more factors and more nuances. This should reduce fixation on a single explanation. The crystal ball method is not well suited to time-pressured conditions, but by practicing with it when we have the time, we may learn what it feels like to fixate on a hypothesis. That awareness may help us when we are under time pressure.

A second application is to accept errors as inevitable. In complex situations, no amount of effort is going to prevent all errors. Jens Rasmussen (1974) came to this conclusion in his work with nuclear power plants, one of the industries most preoccupied with safety. He pointed out that the typical method for handling error is to erect defenses that make the errors less and less likely: add warnings, safeguards, automatic shut-offs, and all kinds of other protections. These defenses do reduce the number of errors, but at a cost: errors will continue to be made, and accidents will continue to happen. In a massively defended system, if an accident sneaks through all the defenses, the operators will find it far more difficult to diagnose and correct, because they must deal with all of the defenses along with the accident itself. Recall example 13.7, the flight mismanagement system. A unit designed to reduce small errors helped to create a large one.

Since defenses in depth do not seem to work, Rasmussen suggests a different approach: instead of erecting defenses, accept malfunctions and errors, and make their existence more visible. We can try to design better human-system interfaces that let the operators quickly notice that something is going wrong and form diagnoses and reactions. Instead of trusting the systems (and, by extension, the cleverness of the design engineers), we can trust the competence of the operators and make sure they have the tools to maintain situation awareness throughout the incident.8

Key Points

Notes