9 Nonlinear Aspects of Problem Solving

People become problem solvers when they have to find a way to create a new course of action, improvise, notice difficulties way in advance, or figure out what is causing a difficulty. The concept of leverage points opens the way to think about problem solving as a constructive process. It is constructive in the sense that solutions can be built up from the leverage points and that the very nature of the goal can be clarified while the problem solver is trying to develop a solution. In rock climbing, there is no correct solution. The climber looks at the available holds and figures out what direction makes good sense.

This approach to problem solving can be traced back to the German research psychologist Karl Duncker, one of the central figures in the Gestalt psychology school in Europe. The Gestalt school emphasized perceptual approaches to thought. Rather than treating thought as calculating ways of manipulating symbols, the Gestaltists viewed thought as learning to see better, using skills such as pattern recognition.

Duncker (1935/1945) asked people to think out loud while they solved well-defined and ill-defined problems, so that he could gain insight into their thought processes. One of the tasks he used was the X-ray problem: you are a physician needing to treat a patient with a tumor. You can use X-rays to destroy the tumor, but the radiation will also damage healthy tissue. What can you do?

There are a few acceptable approaches (as befits an ill-defined problem). One of the more satisfactory solutions is to use several X-ray sources. Each could transmit low-level radiation that would not be harmful to healthy tissue yet could be aimed to converge on the tumor and destroy it. To find this solution, the subjects had to elaborate the goal of destroying the tumor. The goal included the property of not injuring healthy tissue. Eventually the problem solver identified a new goal of exposing healthy tissue to only small amounts of radiation.

Duncker found that as his subjects worked on these problems, they simultaneously changed their understanding of the goal and assessed solutions. A subject might think of a solution, try it out, realize it would not work, realize what was missing, and then add to the definition of the goal. This new definition would suggest new approaches, and when these approaches failed, they helped to clarify the goal even further.

To solve ill-defined problems, we need to add to our understanding of the goal at the same time we generate and evaluate courses of action to accomplish it. When we use mental simulation to evaluate the course of action and find it inadequate, we learn more about the goal we are pursuing. Failures, when properly analyzed, are sources of new understanding about the goal.

The leverage point account of problem solving requires a nonlinear rather than a linear approach.1 We can think of problem solving as consisting of four processes: problem detection, problem representation, option generation, and evaluation (see figure 9.1).


Figure 9.1 Nonlinear account of problem solving

The account shown in figure 9.1 does not have an output stage, because each of the components can lead to different types of outputs. Problem detection is itself an output, as in the warning offices set up by governmental agencies whose job is to provide early detection of problems. Problem representation is another output, sometimes the necessary and sufficient output for determining how to proceed. There are medical diagnosticians whose responsibility is primarily to provide skillful problem representations. Generating forecasts is itself a professional specialty in many fields.

Constructing a course of action is the component most people think of as the output of problem solving: generating a plan for achieving a goal. Regardless of how the option is generated, it will need to be evaluated, often using mental simulation. The evaluation process can lead to adoption of the option, result in selecting between options, or identify new barriers and opportunities, thereby triggering additional problem solving.

The problem-solving process contains different types of outputs, depending on what is needed in the situation.

Figure 9.1 shows why this is an interactive account. The goals affect the way we evaluate courses of action, and the evaluation can help us learn to set better goals. The goals determine how we assess the situation, and the things we learn about the situation change the nature of the goals. Goals define the barriers and leverage points we search for, and the discovery of barriers and leverage points alters the goals themselves. The way we diagnose the causes leading to the situation also affects the types of goals adopted. Further, the leverage points (fragmentary action sequences) we notice grow out of our own experiences and abilities—another layer of interaction.

To review this nonlinear account, we will examine each of the components, starting with the problem detection process.2 Something happens, something anomalous, and we may notice it. Perhaps a rock climber notices that the route has more debris on it than usual. Or an expectation is violated or something expected did not occur. In example 8.4, the pilot flying to Philadelphia did not seem to notice how much the margin of safety was being compromised.

The problem representation function covers the way a person identifies and represents the problem.3 Is there a gap between what we have and what we want? Is there an opportunity to achieve more than we expected? Both gaps and opportunities can trigger problem-solving efforts, either to remove the gap (and obtain what you want) or to harvest the opportunity. Say that the rock climber gets a little higher and notices that there has been a small avalanche. The pitons she counted on using are covered up, so she may have to find another route, maybe retracing her steps. Or she can find better holds and keep going.

Leverage points are part of the problem representation. We try to detect the leverage points that can turn into solutions, as well as the choke points that can spell trouble ahead. In some cases, identifying the leverage points can count as the critical type of problem representation, emphasizing the lines of reasoning that may be most important.

Not all gaps or opportunities lead to problem solving. They must be important enough, and the problem solver must judge that the gap or opportunity will not be resolved without special effort. Then comes a difficult judgment: the solvability of the problem.4 Somehow we use our experience to make this judgment even before we start working on a solution. Part of our situation awareness is a sense of whether this is a problem we should be able to solve without too much work, or one that could take days or weeks and get us nowhere.

Therefore, the function of problem representation includes goal setting because the problem solver must judge whether to try to come up with a solution or turn to other needs. For ill-defined goals, we can expect to see a lot of goal modification during the problem-solving effort.5

When a gap or opportunity is identified, often we will try to diagnose it.6 This is the mental simulation function in which we try to weave together the causes that might have led to the current situation. Because diagnosis is not always required, it is portrayed as an elaboration of problem representation in figure 9.1 and is shaded. We may also want to project trends based on the diagnosis, to see how the situation may change. This is the forecasting process. In many situations, the primary need is to build a reliable forecast in order to determine whether the difficulty will disappear on its own, or will get worse and require action. The problem representation and diagnosis processes are linked to forecasting, which usually requires mental simulation.

The next function is directed at generating a new course of action, in many cases, a straightforward process: we recognize what to do, and do it. At other times, we do not recognize what to do and must rely on leverage points in order to construct a new course of action. If we blindly press forward trying to reach goals and remove barriers, we may miss these leverage points. Experience lets us detect them and provides a chance to improvise in order to take advantage of them. We also have to be careful not to pursue opportunities too enthusiastically since they might distract us from our more important goals. We have to balance between looking for ways to reach goals and looking for opportunities that will reshape the goals.

The fourth function is to evaluate plans and actions, to play out a scenario to see what will happen. If the evaluation is favorable, we carry out the action. We also can learn from the evaluation, perhaps discovering new gaps or opportunities, resulting in problem detection or in a new way to represent the problem, as when we would modify our goals.

The concept of problem solving appears incomplete without the aspect of discovering opportunities. There is an infinite set of instances in which problem solving could be initiated. We do not have all the money we want or all the vacation time. We are not driving cars that are as luxurious or sporty as we want, and so forth. So the potential for problem solving exists on a massive level. Yet we do not define each of these as problems and waste time worrying about the discrepancies. De Groot (1945) and Isenberg (1984) have suggested that what triggers active problem solving is the ability to recognize when a goal is reachable. From the standard view this seems paradoxical: the course of action is supposed to be generated only after the goal is defined, yet here a judgment about whether a plausible course of action exists determines whether the goal is pursued in the first place. There must be an experiential ability to judge the solvability of problems prior to working on them.

Experience lets us recognize the existence of opportunities. When the opportunity is recognized, the problem solver working out its implications is looking for a way to make good use of it, trying to shape it into a reasonable goal. At the same time, the opportunity is shaping the goal by raising the level of aspiration and identifying additional goal properties.

Example 9.1 shows how an organization shifted its goals because of the way it evaluated a business plan. In evaluating a course of action, officers of a company discovered an opportunity and leverage point. This information caused them to revise the goal and led to the synthesis of an expanded course of action.

Example 9.1
Learning to Love Telemarketing

The parent company organizes franchises for services. Each of the individual franchisees needs to use telemarketing to obtain customers, and each finds the task of hiring, training, and managing telemarketers to be burdensome. The marketing director for the parent company identifies this as a problem, but with an obvious solution: the parent company can centralize the telemarketing across the United States. The president of the parent company is lukewarm to this idea, since it requires a large investment. Then he realizes that with a centralized group of telemarketers, he can pursue his ideas about direct order sales. At that point he becomes even more enthusiastic about the project than the marketing director.

As the marketing director and the president mentally simulated the proposed telemarketing center, the president noticed a new and different possibility. This idea, to use telemarketers for catalog sales, increased the president’s level of aspiration and changed the nature of the goal he wanted to pursue. The opportunity to use the telemarketers for direct order sales also suggested some fragmentary action sequences that could be easily blended with the original goal of helping the franchisees.

Traditional Models of Problem Solving

To get a better appreciation of the concept of leverage points and how they fit into a nonlinear account of problem solving, it is useful to contrast them with some more traditional models. Problem solving is a different phenomenon when we engage in it in natural settings than when we study it under laboratory conditions. Aspects of everyday cognition such as problem finding are usually not studied in the laboratory. We do not investigate problem-finding skills in studies where we present subjects with a ready-made problem or puzzle. Certainly experimenters must restrict a phenomenon in order to study it. We run into trouble when we do not realize what our studies have been excluding.

Traditional approaches to problem solving treat the process as open to decomposition into component elements that are relatively independent of each other. Traditional approaches also view the process of problem solving as proceeding in a mechanistic fashion, from one stage to the next, using a set of operators as transformation rules.

First, we will examine the approach that relies on stage models, the most common treatment of problem solving. After that, we will look at the artificial intelligence approach, the most sophisticated treatment.

Stage Models

Stage models are the most common description of problem-solving activities. Raanan Lipshitz and Orna Bar-Ilan (1996) have found examples of researchers and theoreticians who have presented various models:

  • Two-stage models: idea getting and idea evaluation.
  • Three-stage models: problem finding, design of alternatives, and choice.
  • Four-stage models: understanding the problem, devising a plan, carrying out the plan, and evaluating the results.
  • Five-stage models: identifying the problem, defining it, evaluating possible solutions, acting, and evaluating success.
  • Six-stage models: identifying the problem, obtaining necessary information to diagnose causes, generating possible solutions, evaluating the various solutions, selecting a strategy for performance, and performing and revising the solution.
  • Seven-stage models: problem sensing, problem definition, alternative generation, alternative evaluation, choice, action planning, and implementation.
  • An eight-stage model.

To simplify these variations, we can propose a generic four-stage model that comes reasonably close to these varieties:

  1. Define the problem.
  2. Generate a course of action.
  3. Evaluate the course of action.
  4. Carry out the course of action.

This description generally makes sense. On logical grounds alone, it is hard to evaluate a course of action (step 3) before generating it (step 2).

We can run into trouble with this model by following the linear sequence of steps too strictly. For example, you would not want to start generating courses of action until you had a fairly good idea of what the problem was; however, for many common problems we will not be able to reach a good definition because they are ill defined. We cannot begin with a definition since there is none.

Think about a fireground commander racing to put out a fire. The goal seems well defined: extinguish the fire so it does not start again. However, as we saw in chapter 2, the commander has to judge whether to attack the fire or begin search and rescue, or do both. If the building is deserted and in poor shape, the commander may choose to let it burn rather than use resources to prevent its spread. Commanders are judged by whether they choose the right action in a situation, and sometimes their superiors and peers will disagree about what that right thing was.

The criterion for claiming that a goal is ill defined is that experienced people might not agree about how to satisfy the goal. Writing a good story has an ill-defined goal. Different English teachers could disagree in their judgments of whether a specific story satisfies the goal of being good. Solving a university’s parking problem is ill defined; experts are certain to disagree about what a good outcome is.7 One person might favor building large parking structures in the middle of campus to preserve space. Another would put all parking at the edges of campus and serve commuters with a bus system, to make the campus car free. Architects will have trouble agreeing on acceptable designs, in part because of the disagreement over goals.

In contrast, a mathematical equation (e.g., x + 7 = 12; solve for x) is an example of a problem with a well-defined goal. No one would disagree with the answer (x = 5) or with the intent of the problem. For many people, solving well-defined puzzles is satisfying because there is a right answer. There is pleasure in making the different strands tumble into place.

Most natural goals seem to be ill defined. Some are ill defined to a minor extent, like firefighting. Some are more strongly ill defined, like solving a parking problem. Some are extremely ill defined, like writing a good short story.

Most of the studies of problem solving and decision making have centered on well-defined goals—solving mathematical equations, physics problems, or syllogisms in deductive logic, for example. The appeal of these well-defined problems is that researchers can perform carefully designed experiments manipulating different variables to see if the manipulations change the proportion of correct answers. Because all the problems have correct answers, there is no ambiguity in these studies. Therefore, the field of problem solving has concentrated on tasks with well-defined goals.

Because it is so systematic, we are attracted to the standard advice of following the stages from problem definition to option generation and evaluation. Yet in dealing with an ill-defined goal, the advice is sure to fail. The first step, define the goal, can never be completed if the goal is ill defined, and that means the problem solver is not supposed to go further. Instead, the problem solver is trapped at that first step. The standard method for solving problems is worse than useless; it can interfere when people try to solve ill-defined problems (Klein & Weitzenfeld, 1978).

Consider the following quotation: “What do you want to do with your life? Where do you want to be in five years, ten, or even twenty? What kind of lifestyle do you want to have? All of these questions need to be answered BEFORE you start pursuing a career focus and even before you decide on an education.” This advice came from a pamphlet designed for high school students (Federal Jobs for College Graduates). I wonder how many readers with advanced degrees could answer the questions.

Problems can be unstructured in several ways, not just through vague goals. According to Reitman (1965), a problem can be ill structured if the initial state is not defined, the terminal state is undefined, or the procedure for transforming the initial state into the terminal state is undefined. For some problems, clarifying the initial state is the most important outcome. For example, diagnosing the disease causing a mysterious set of symptoms will usually enable a physician to determine the appropriate course of treatment. For other problems, such as rescuing people from a burning building, the diagnosis of how they got into the building is irrelevant. Next, let us consider the case in which the terminal state is ill defined. For some problems, clarifying the terminal state is critical. For example, a teenager searching for a job may not have a clear idea of the essential characteristics of a good job. For other problems, such as trying to cut the crime rate by half in a large city, the goal is well defined, and what is needed is to diagnose the prime causes of crime (e.g., unemployment, lenient judges, inconsistencies in legal codes, and so forth) and to find procedures for eliminating those causes. Finally, the transformation of initial into terminal state may be critical, as in constructing a complicated plan, or it may be trivial, as in the case of a physician who knows just what to do once an accurate diagnosis is made. Therefore, the focus of problem solving can be very different depending on the nature of the problem.

The four-stage model is incomplete and misleading. It does not tell us much that is useful about defining goals or how to generate courses of action. It merely states that these steps must be taken. It can include stages such as diagnosis that may be irrelevant for certain types of problems. It misleads by implying that the steps should follow a linear sequence. Most behavioral scientists studying problem solving now agree about the shortcomings of stage models (e.g., Mintzberg, Raisinghani, & Theoret, 1976; Smith, 1994; and Weick, 1983). The difficulty is not in the components of the stage models, which are themselves reasonable, but in the assumption of linearity. The account in figure 9.1 relies on similar components but organizes them to make them more applicable to ill-structured problems.

Artificial Intelligence Approach

Artificial intelligence researchers try to use computers to perform complex judgment and reasoning tasks. In the 1950s, Herbert Simon and others realized that computers could be used to manipulate symbols as well as numbers. By coding knowledge as symbols, Simon and his colleagues were able to make computers learn, draw inferences, and solve problems. In doing so, they made the study of thinking a respectable scientific endeavor. Success in programming a computer to copy the thinking of human subjects could be tested by comparing the computer's performance with that of the human subjects it was supposed to imitate. Previously, psychologists in the United States had rejected the study of thinking as too unscientific. The dominant research paradigm was to study the learning behavior of lower organisms, such as rats and pigeons. Simon and his colleagues helped to change all that.

In 1972, Allen Newell and Herbert Simon published Human Problem Solving, describing their success in writing computer programs that could emulate human thought processes for tasks such as chess and puzzles. Example 9.2 shows a cryptarithmetic task that they used in their research.

Example 9.2
DONALD + GERALD = ROBERT

The cryptarithmetic task is to solve the problem DONALD + GERALD = ROBERT, given only the clue that D = 5. Each letter stands for a different digit, and the task is to figure out the numerical value of each letter.

 DONALD
+GERALD
-------
 ROBERT

Starting with the fact that D = 5, we can figure out that T is 0 (D + D = 10, so we write 0 and carry 1). Also, in the far left column we see that D (5) plus G produces R, so R is at least 6 and at most 9. We also know R is an odd number, since in column 5, L + L plus a carryover of one gives R. So R is either 7 or 9. And so forth.8
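
To see how mechanical this task becomes once it is treated as a well-defined search, consider the brute-force sketch below, written in Python. It is purely illustrative and is not drawn from Newell and Simon's work: it ignores the carryover inferences just described and simply tries digit assignments until the sum comes out right.

    # Brute-force sketch of DONALD + GERALD = ROBERT, given the clue D = 5.
    from itertools import permutations

    LETTERS = "ONALGERBT"                    # D is fixed at 5
    DIGITS = [0, 1, 2, 3, 4, 6, 7, 8, 9]     # the nine remaining digits

    def value(word, assign):
        # Translate a word into its number under a letter-to-digit assignment.
        n = 0
        for ch in word:
            n = n * 10 + assign[ch]
        return n

    for perm in permutations(DIGITS):
        assign = dict(zip(LETTERS, perm))
        assign["D"] = 5
        if assign["G"] == 0 or assign["R"] == 0:   # no leading zeros
            continue
        if value("DONALD", assign) + value("GERALD", assign) == value("ROBERT", assign):
            print(assign)                          # finds T = 0, R = 7, and the rest
            break

The contrast is the point: the program grinds through hundreds of thousands of assignments, while Newell and Simon's subjects pruned the space with inferences about carryovers and parity.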

The field of artificial intelligence has resulted in a number of important findings and applications but has not had the full impact that its original developers expected. The potential for this work is limited because artificial intelligence is primarily about well-defined problems. Puzzles such as the cryptarithmetic task in example 9.2 are well defined. In studying problems like this, Newell and Simon found that people use heuristics such as finding intermediate goals and solving them as a way to break up the whole problem.

The focus on well-defined problems is not the only limitation of the artificial intelligence approach. Although it claims to be describing how people solve problems, the approach is limited to the processes at which digital computers excel, such as setting up and searching through lists.

Now we are in a position to see what is missing from the artificial intelligence approach to problem solving. Here are some of the basic claims of the artificial intelligence approach:

  1. The problem is represented as a closed problem space generated from a finite set of objects, relations, and properties.
  2. Problem solving is the search through the problem space until the desired knowledge state is reached.
  3. The search can be heuristic (means-ends analysis to set up subgoals and reduce differences).
  4. Reformulating a goal means removing unnecessary constraints.

Artificial intelligence programs establish a problem space, consisting of all the descriptions and implications of the objects, their properties, and their relationships.9 The job of the program is to discover at least one pathway that will connect the initial state with the desired end state. The program can conduct a heuristic search so that it does not have to examine every alternative. For example, if the program hits a gap, that is, a difference between where it is and where it needs to be, then it can make reducing this gap a subgoal and focus its search on ways of reducing that difference. This is termed means-ends analysis: the strategy of identifying a barrier to reaching an end state, making the new goal to overcome that barrier, and so on.
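
To make these claims concrete, here is a minimal, self-contained sketch of means-ends analysis in Python. The facts, operators, and the little travel domain are hypothetical, invented for illustration; the sketch reproduces the difference-reduction loop, not any particular AI program.

    # A minimal sketch of means-ends analysis. States are sets of facts;
    # the operators and the travel domain are hypothetical illustrations.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Operator:
        name: str
        pre: frozenset        # facts required before applying
        add: frozenset        # facts the operator makes true
        delete: frozenset     # facts the operator makes false

    OPERATORS = [
        Operator("rent-car", frozenset({"have-money"}),
                 frozenset({"have-car"}), frozenset()),
        Operator("drive-to-airport", frozenset({"have-car"}),
                 frozenset({"at-airport"}), frozenset()),
        Operator("fly", frozenset({"at-airport", "have-ticket"}),
                 frozenset({"at-destination"}), frozenset({"at-airport"})),
    ]

    def apply_op(state, op):
        return (state - op.delete) | op.add

    def means_ends(state, goal, depth=0):
        # Pick one difference between state and goal, find an operator that
        # removes it, and make that operator's preconditions the new subgoal.
        if goal <= state:
            return []
        if depth > 10:                       # give up rather than loop forever
            return None
        diff = next(iter(goal - state))      # the difference to reduce
        for op in OPERATORS:
            if diff in op.add:
                prefix = means_ends(state, op.pre, depth + 1)   # new subgoal
                if prefix is None:
                    continue
                mid = state
                for step in prefix:
                    mid = apply_op(mid, step)
                rest = means_ends(apply_op(mid, op), goal, depth + 1)
                if rest is not None:
                    return prefix + [op] + rest
        return None

    plan = means_ends(frozenset({"have-money", "have-ticket"}),
                      frozenset({"at-destination"}))
    print([op.name for op in plan])    # ['rent-car', 'drive-to-airport', 'fly']

Notice what the sketch presupposes: a closed set of facts and operators fixed in advance. That presupposition is exactly what the objections below call into question.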

The artificial intelligence community can point to many impressive accomplishments, but we have to be wary of its claims. In fact, each of the individual claims runs into trouble, and the entire approach rests on questionable assumptions.

First, the idea of a problem space does not match anything we know from experience about human problem solving. I do not know of any evidence that shows that people generate problem spaces except in combinatorial, well-defined problems such as figuring out the probability of getting three heads if you throw four coins simultaneously. If you did not know the formula to use, you might just write out all the permutations and count up the frequencies. For situations that are more complex or less precise, we would usually not try to lay out the problem space of objects, relations, and properties.
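
For a problem of that combinatorial kind, writing out the space really is trivial, as a few illustrative lines of Python show:

    # Enumerate the whole problem space: four coins, sixteen outcomes.
    from itertools import product

    outcomes = list(product("HT", repeat=4))
    hits = [o for o in outcomes if o.count("H") == 3]
    print(len(hits), "/", len(outcomes))       # 4 / 16, i.e., 0.25

Nothing comparable is available when the objects, relations, and properties of a situation cannot be listed in advance.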

Second, the idea of searching through a problem space misses the experience we have of noticing things we had not considered before, and discovering or synthesizing a new approach. If we have already set up the problem space, we are unlikely to make these discoveries as we work on the task.

Third, there are other strategies in addition to the means-ends analysis. Using means-ends analysis to reduce differences is not the same as noticing opportunities. When we solve problems, we are alert to opportunities that let us make progress, even if those opportunities do not correspond to the obstacles we are trying to eliminate.10 In addition, Voss, Greene, Post, and Penner (1983) studied ill-structured social science problems and found little evidence for straightforward means-ends analysis.

Fourth, in solving problems we do not reformulate goals merely by removing constraints. We sometimes make radical shifts. Think again about the car rescue in example 5.1. The commander of the rescue team was not eliminating unnecessary constraints as much as he was changing the nature of the goal: lifting the victim through the roof instead of getting him out of one of the doors.11

The artificial intelligence program is not a technique for generating options. Instead, it is a procedure for setting up a search space and then using heuristics to achieve more efficient searches to find a good option. Conducting rapid searches is what digital computers do best. A computer does not have to do anything constructive to search through a space. It does not have to generate anything new. If the search space can be structured well enough, it can turn up findings that are novel. For example, if you give a computer a thousand different ice cream flavors, ten types of cones, and five hundred toppings, it will identify a range of options no one ever considered before.12

One of the primary mechanisms of artificial intelligence is to spread out the alternatives exhaustively and filter through them efficiently. This is the same strategy used in the analytical approaches to decision making. These approaches urge us to generate large numbers of options in order to be reasonably sure that the set will include a good option. Then we are supposed to search through these options, filtering out the inadequate ones, to find a successful one. Computational approaches try to reduce thinking to searching. Thus they show their greatest successes with tasks that can be transformed into searches.

To examine the ideas about problem solving represented in figure 9.1 more critically, it may be useful to consider an incident that placed high demands on both problem-solving and decision-making skills. The most interesting aspects of this incident were about detecting problems, representing the problems, and generating new courses of action.

The Apollo 13 Mission: A Case Study of Problem Solving

The incident in question is the explosion of an oxygen tank following the launch of the spacecraft and the difficulties of bringing the astronauts safely back to earth. On the surface, it is a story about adventures in space, but at another level it is about problem solving and decision making: detecting problems, understanding the nature of the problems, improvising solutions, and selecting courses of action. Lovell and Kluger (1994) have described the Apollo 13 mission, particularly the time-pressured workarounds to unexpected failures, under conditions of high uncertainty. I counted seventy-three problem-solving and decision-making episodes in the book.

This type of retrospective review can be helpful in tying issues together. It can also be misleading, since the Apollo 13 mission is not representative of other types of problem-solving incidents and because the recollections of astronaut Jim Lovell may be inaccurate regarding the processes used to solve the various problems. One thing that makes me suspicious is that no one in the book ever makes a stupid mistake. Perhaps Lovell is blessed with gifted colleagues, perhaps he is a pure soul who is attuned to the good in people, or perhaps his memoirs have been sanitized to reduce friction. Therefore, we need to be careful in accepting Lovell and Kluger’s account as reality. But that should not stop us from seeing what we can learn from it. (Cooper, 1973, has provided an accessible account of the accident that is consistent with the one offered by Lovell and Kluger.)

One easy thing to do is look for examples of the categories of figure 9.1: problem detection, problem representation, generation of a course of action, and evaluation.

Problem detection was usually not important because the problems were clear-cut; expertise was not required to notice them. For example, the initial loss of oxygen triggered all kinds of alarms. It would have taken a dedicated effort not to notice that a problem had arisen. The instruments showing a loss of oxygen also pointed to an obvious impending problem of keeping the astronauts alive. Other problems, however, did require expertise to detect. I estimated that problem detection played a key role in twenty of the seventy-three episodes. For example, using the lunar excursion module (LEM) as a lifeboat created a problem of eliminating carbon dioxide, since the LEM was not equipped to handle three astronauts for extended periods of time. This difficulty could have been missed, with critical consequences.

Problem representation was a primary process, playing a key role in thirty-one of the seventy-three episodes. In thirteen of the episodes, the problem representation directly led to the strategy or plan adopted. By knowing what type of problem they were facing, the mission controllers recognized how to react. From the outset, the mission controllers needed to make sense of all the unexpected instrument readings they were getting. They needed to get a big picture of what had suddenly happened to their spacecraft. Once they built their situation awareness, the nature of the problem shifted from performing the planned mission to the new mission of finding a way to save the astronauts. Problem representation was central in making sense of all the sudden demands: keeping antennas aligned, preventing the spaceship from developing thermal imbalance because it was no longer rotating systematically to protect it from the direct exposure to the sun, adjusting to losses of power and oxygen, and so forth.

There were about five instances of goal revision. The most dramatic was the shift in goals from trying to continue the mission while repairing the problem, to calling the mission off and concentrating on returning the astronauts safely. In hindsight, this seems like an obvious shift, but the mission controllers and the astronauts resisted it. In industrial settings, such as running a manufacturing or production process, supervisors have trouble with this type of breakpoint, where they have to abandon business as usual and move to an emergency mode. Sometimes they wait too long to make this shift.

The incident also required many subgoals, such as figuring out how to reduce power soon after the accident, or devising a plan to power up the command module in two hours, rather than the full day (using thousands of amp hours) that was typical. There were many examples of these, since each new challenge carried with it a new set of subgoals. These are the basis for means-ends analysis: finding a difference (e.g., it takes a day to power up the spacecraft and I have to do it in two hours) and making it the new goal to pursue. This incident demanded a great deal of means-ends analysis in the sense of identifying new subgoals.

I did not see the type of goal shift we would find with ill-defined goals. The Apollo 13 mission may not have shown much goal revision because the goals were fairly well defined to begin with. Most of the changes in goals involved shifts in priorities, such as the uses of water. (Water was needed by the cooling system, to protect the equipment, and also by the crew.) Therefore, this may have been the wrong type of incident to study goal redefinition. Another possibility is that I am mistaken in my emphasis on goal redefinition.

The Apollo 13 incident did not require as much diagnosis as I had expected. I found only ten examples of diagnosis plus another two after the mission was over (to find out why the explosion had occurred and to find out why the trajectory of the spacecraft was shifting during its return to Earth). For problems such as finding ways to eliminate the buildup of carbon dioxide, reduce electrical power consumption, and conserve water use, diagnosis was irrelevant. The problem representation was clear, and the mission controllers needed to find a novel course of action. This illustrates why the process of diagnosis is portrayed as optional in figure 9.1.

For those episodes where diagnosis was needed, it played a critical role. For instance, during the return to Earth, battery 2 started to falter. There were only four batteries available. If the spacecraft was to develop battery problems on top of all the other failures, the chances of success would go even lower, and more workarounds would be needed. One of the mission controllers tried to diagnose the cause of the problem and determined that it was a low-probability malfunction that had existed in all the batteries of every LEM. The odds of other batteries failing were very low. In other words, the malfunction was not part of the unique crises afflicting Apollo 13 or part of the other system failures. Based on this diagnosis, the mission controllers were able to ignore the problem with battery 2 and proceed with their plans.

The level of detail for carrying out the diagnosis is important. The astronauts and the mission controllers would have liked to have a diagnosis of the cause of the original malfunction (which turned out to be an explosion of one of the two oxygen tanks). However, it was more important to find out the nature of the damage than to learn why it occurred. The mission controllers performed an initial diagnosis to learn what was causing the bizarre sensor readings. The diagnosis was that the spacecraft had lost one of its two oxygen tanks and was quickly losing the oxygen in the second tank. That explained the sensor readouts and provided a clear problem representation. They did not have a diagnosis of what caused the loss of the oxygen tanks. Only at the end of the mission did the astronauts see the extent of the destruction to the oxygen tanks. And only after several months did the investigation team learn how the maintenance and design process had created the hazard in the first place, leading to a condition where an unshielded wire in the oxygen tank generated a spark after a routine action to turn on the fan to stir the contents of the oxygen tanks. This level of diagnosis would have had no value during the emergency.

Some of the most compelling episodes in this incident were about constructing new courses of action. These stood out because so much of the mission was designed to be carefully scripted, yet when the script had to be abandoned, the mission controllers showed themselves capable of wonderful improvisation. I counted eighteen instances in which new courses of action were invented, along with many additional instances in which a course of action was recognized but without a need to do anything inventive.

For the cases where actions had to be constructed, the accounts did not have enough detail for me to judge the extent to which means-ends analyses and leverage points were used, or other strategies or combinations of strategies. Several episodes suggested means-ends analyses, such as when the controllers discovered that the reentry battery was 20 amp-hours below what was needed. The problem was a difference between what existed and what was needed. The controllers searched for a way to reduce this difference and found that the LEM had excess capacity, so the new course of action was to transfer LEM power up to the command module.

Other episodes suggested the use of leverage points, and in many of the instances in which new courses of action were generated, both means-ends analyses and leverage points appeared to be used. The means-ends analyses identified new subgoals, and the leverage points identified the promising starting points for constructing the courses of action. For example, the mission controllers needed a way to align the reentry of the ship. The typical procedure was to use the horizon of the earth as it moved across a window, sweeping past hash marks etched on the window for this purpose. However, Apollo 13 was returning from the nighttime side of earth; the astronauts could not check their alignment against a horizon they could not see. One of the team leaders realized that the moon would be visible as it set against the earth’s horizon during the critical reentry time. He planned a course of action to develop calculations about when the moon should disappear, so the crew could determine if they had the correct entry mark.

One of the most unexpected things I learned from reviewing the Apollo 13 episodes was the importance of forecasting. In my tally, fifteen of the episodes required forecasting. In four of these, the forecasts resulted in a revised problem representation, and in another three episodes the forecasting produced problem detection. Forecasting was needed to calculate when the spacecraft would run out of oxygen and water. The forecasts showed that the mission controllers could not retain the planned course, because the oxygen would not last long enough. They would have to work out a new course. This surfaced a new problem to be solved. Forecasting also came into play at the end of the mission, when the observed trajectory began to deviate from the expected trajectory, which meant that a new problem was detected. The mission controllers had to figure out whether the deviation was going to get worse and, if so, how to handle it. They were not able to diagnose the cause of the deviation until after the mission.

The mission controllers did perform evaluations of the different courses of action that were proposed, to assure themselves that the various workarounds would succeed. Thus, when a new procedure for changing the course of the crippled ship was recommended, the suggested action was studied in a simulator to verify that it would work and that the timing parameters were accurate. The mission controllers were using actual simulations along with mental simulations to test the actions.

I counted only four instances of decision making that involved selecting between alternative courses of action. One was the decision to shut the reactant valves. The second was to use a direct abort (i.e., turn the ship around) or let it continue around the moon. The third was whether to have the crew sleep before attempting the difficult task of powering down the spacecraft (letting the crew sleep would reduce the chances of error, but it also would waste power while they were sleeping). The fourth was selecting the type of burn strategy for returning the spacecraft to earth.

The first decision, to shut the reactant valves, was an attempt to stop the oxygen loss, because no one knew where the leak was. The decision meant that the mission to land on the moon would have to be abandoned, because the reactant valves could not be reopened by the crew. The mission controllers and the crew resisted this course of action; they did not want to terminate the mission. However, the action was clearly necessary, so there was no formal comparison between options.

The second decision was to select a course of action for returning the ship. The choices were to use a direct abort (turn the ship around) or an indirect abort (continue to fly the orbit around the moon but eliminate the landing). Since the explosion had possibly damaged the main engine and had caused the loss of the electricity needed to fire the engine, the direct abort option was judged not to be feasible. What appeared to be a decision turned out to be straightforward.

The third decision was about letting the crew sleep before powering down the spacecraft. The mission director chose to have the crew power the ship down before sleeping. Based on the account by Cooper, this decision was made by imagining the consequences of losing power during the six hours of sleep time, versus the consequences of having a sleepy crew do the complicated task.13 The mission director did not appear to compare the two options on the same set of criteria but mentally simulated the outcomes for each and chose the outcome that made him less uncomfortable.

The fourth decision was to select the type of burn for returning the spacecraft after it orbited the moon. Option 1 was a super-fast burn that would return the ship to earth thirty-six hours later. One penalty was that the return would be to a part of the Atlantic where the U.S. Navy did not have any ships. A second penalty was that this option required the crew to jettison the service module that would normally protect the heat shield, and the mission controllers worried that the heat shield might have been damaged by the explosion. They also worried that even a normal heat shield might not survive a sudden shift from deep freeze to reentry, since no one had ever conducted that type of test. Option 2 was to use a slower burn that would add a few hours but land the ship in the Pacific. The disadvantage was the same need to jettison the service module. Option 3 was to use a short burn, land in the Pacific, but keep the service module. This option would have the ship land more than twenty-four hours later than the super-fast burn. The crew was short on consumables such as oxygen and water. This decision was hotly debated, and option 3 was selected. According to Lovell and Kluger, the decision was made on the basis of perceived worst case. The difficulties with consumables were large, but the mission controllers believed they were manageable. The risks of jettisoning the service module were unknown and could be catastrophic. Framed in this way, the comparison was between a course of action that was painful but manageable and a course of action with a risk that was plausible and catastrophic.

The problem solving that went on during the Apollo 13 mission served a number of different purposes.

In reviewing the problem-solving activities that went on during the rescue, we can see that they did not require a standard sequence of steps, and most of the problem solving did not require either diagnoses or the generation of a novel course of action. Forming an accurate problem representation was the most common activity for this incident. Other incidents will show different patterns.

Problem Solving and Decision Making

Many researchers agree that the distinctions between problem solving and decision making blur in natural settings. Some prefer to treat problem solving as a subclass of decision making (called upon when the person needs to formulate a new course of action). Some prefer to see decision making as a subclass of problem solving (called upon when the person has to compare several courses of action). There is more overlap than difference.

Consider the case of an undergraduate finishing her first year of college. Because she misses her friends, she may be tempted to transfer to a school that is closer to home. It appears that she has a decision to make: stay at the original college or transfer. Nevertheless, in many cases the student will not make a choice. Instead, she will shift into a problem-solving mode. She will check on how many credits will be lost in the transfer, gather more information about the quality of the professors in her major field of study, reconsider whether she should join a sorority, imagine how her grades might fall if she were living closer to home, and check on the availability of rides to make it easier to return home. She may plan to use her earnings for the summer to buy a car so she feels less trapped at her current university. These are as much problem-solving as decision-making activities.

In order to define problems and generate novel courses of action, we need to draw on our experience to make judgments about:

  • what a reasonable goal would look like, and what its attributes are;
  • whether an anomaly is serious enough to matter;
  • where the leverage points and opportunities lie;
  • which analogues are relevant;
  • how solvable the problem is;
  • what caused the situation, and how it is likely to develop;
  • how well a proposed course of action is likely to play out.

Each of these judgments is its own source of power. The sources of power in this list overlap considerably with the ones covered in the discussion of decision making. It seems as if there are two primary sources of power for individual decision making and problem solving:

  • pattern matching
  • mental simulation

Pattern matching provides us with a sense of reasonable goals and their attributes. It gives a basis for detecting anomalies and treating them with appropriate seriousness. It helps us to notice opportunities and leverage points, discover relevant analogues, and get a sense of how solvable a problem is. The judgment of solvability is also responsible for letting us recognize when we are unlikely to make more progress and that it is time to stop.

Mental simulation is the engine for diagnosing the causes of the problem, along with their trends. It plays a role in coalescing fragmentary actions to find a way to put them together. And it is the basis for evaluating courses of action. The themes covered thus far in reviewing problem solving and decision making are the core components for my perspective on naturalistic decision making. The development and use of these sources of power are elaborated in the following chapters.

Applications

One application is to be less enthusiastic about rational planning approaches. Certainly there is value in trying to envision goals more clearly in planning and preparation. Nevertheless, we must accept the limitations of our ability to make plans for complex situations. We can prepare to improvise as we redefine the goals midway through a project.

Most so-called rational methods of problem solving are variants of the stage models presented earlier. These approaches are taught in many different settings: business schools, engineering departments, organizational development courses, and in special seminars and workshops. The simplicity of the methods makes them attractive and easy to remember. For example, Kepner and Tregoe (1965, 1981) presented a systematic and general method for problem solving in their book The Rational Manager. According to Kepner and Tregoe, you first need to determine what the values of different parameters ought to be. Then you determine what the values are. Then you figure out when the values changed. Then you find out what else changed around that time, and, presto, you have uncovered the cause of the problem. This approach works as long as you are dealing with well-defined goals and fairly static work settings.

Of course, it is a good idea to try to define the goal as clearly as possible before proceeding. Robert Mager (1972) has described several useful methods for clarifying goals. I agree with him that goal clarification is important, especially at the beginning of a task. My skepticism about rational problem-solving methods is that they do not prepare you to improvise, act without all of the relevant information, or cope with unreliable data or shifting conditions. They do not prepare you to learn about the goals throughout the problem-solving process.

A second application is to be less enthusiastic about creativity programs. A range of different creativity methods has been proposed: brainstorming, synectics, and permutations of elements. The permutation of elements, for example, involves specifying all the different possibilities for each variable and then combining them to create an assortment of alternatives. Let us return to the illustration of making new types of ice cream sundaes. You can have the flavors themselves (coffee, pistachio, licorice, peach, etc.), the added ingredients (cookie crumbs, berries, walnuts, etc.), the toppings (whipped cream, bananas, fudge, tiny meatballs), and so forth. By combining these, you have a large number of possibilities. Most of these possibilities are new (pineapple ice cream with coconut chunks, topped with bacon flakes), and many are truly awful (pistachio ice cream, mini meatballs, and caramel topping). These procedures seem like desperate attempts to use systematic procedures as a substitute for imagination. In most domains, we need not off-the-wall creative options but a clear understanding of the goals. The creativity methods may sometimes look promising for identifying new possibilities, but the cost is having to plow through all the poor ideas. Even brainstorming, a method that has been around for decades, seems primarily a social activity. If the participants generate their ideas individually, the resulting set of suggestions is usually longer and more varied than when everyone works together. Mullen, Johnson, and Salas (1991) have documented the finding that brainstorming reduces productivity.
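
Mechanically, the permutation-of-elements method amounts to nothing more than a cartesian product, as the illustrative Python sketch below shows (the ingredient lists are the ones used as examples above):

    # The permutation-of-elements creativity method: combine everything.
    from itertools import product

    flavors = ["coffee", "pistachio", "licorice", "peach"]
    ingredients = ["cookie crumbs", "berries", "walnuts"]
    toppings = ["whipped cream", "bananas", "fudge", "tiny meatballs"]

    for sundae in product(flavors, ingredients, toppings):
        print(" + ".join(sundae))        # 4 x 3 x 4 = 48 combinations

The program generates every combination with perfect reliability and no judgment at all, which is precisely the limitation: someone still has to plow through the output to find the rare good idea.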

Another application of the concepts of leverage points and nonlinear problem solving is to gain a better understanding of planning. Tom Miller and I (1997) have studied planning teams in several different settings and have formulated a number of conclusions about planning, teamwork, and problem solving. (This effort was sponsored by the Office of Naval Research, in conjunction with the Naval Research Laboratory.) Our primary data source was a series of three field observations of planning exercises for combined (army, navy, air force, and marines) and integrated planning for the use of aircraft. The first exercise, Roving Sands, was conducted in New Mexico and Texas, and our observer was Tom Miller. The second exercise was conducted in the Pacific, and our observers, Tom Miller and Laura Militello, were stationed on the USS Kitty Hawk. The third exercise was conducted in the Atlantic, with Tom Miller stationed on the USS Mt. Whitney. We collected observations about strains in the planning process, about procedures that were formalized, procedures that were informal, and procedures that were ignored. We collected data about planning strategies that had to be abandoned and strategies that were improvised out of necessity. We also reviewed data from studies we had previously performed in other domains (Patriot missile batteries, U.S. Marine Corps regimental command posts, medical evacuation planning teams). These studies covered seven different observations and more than one hundred interviews.

We learned that planning is not a simple, unified activity. We need to distinguish the functions of the plans and the types of environments in which the planning and execution will take place.

Plans can differ with regard to the functions they serve. These functions include:

This last function, to promote individual and team learning, can overshadow the others. Sometimes planners engage in lengthy, detailed preparations that quickly become obsolete, yet they continue with the same process, again and again; it appears that the function is to help them all learn more about the situation and to calibrate their understanding, rather than to produce plans that will be carried out more successfully.

In our sample of planning environments, we found that plans differed along some key dimensions: how precisely the plan was specified, whether the plan was modular (elements that were relatively independent) or integrated (coordinating all the elements), and the level of complexity. Sometimes precision can be useful; at other times it reaches unnecessary levels. There are times for using modular elements (that are loosely coupled with each other) and times to build integrated strategies that are more efficient but less robust. Complexity can be a sign of sophistication, or a sign that the plan is likely to break down.

We also learned that forcing functions in the environment played a major role in determining the type of plan adopted. A stable environment permitted more precise and complex plans. A rapidly changing environment favored modular plans because these permitted rapid improvisation. A resource-limited environment favored integrated plans that were more efficient. Time pressure and uncertainty made it more difficult to construct integrated plans.

Our work on nonlinear problem solving helped us to notice events that were not occurring. In one high-level command and control setting, we realized that goals were not being widely disseminated, leverage points were not being identified, and evaluations were not being conducted. We were able to study how the forcing functions were contributing to the omissions of these processes. Goals were not being disseminated because the planning team was distributed, consisting of experienced members who shaped the priorities, and less experienced members who compiled the detailed orders. The experienced planning cell did not want to communicate the rationale for priorities because they did not want the compilers to interpret the goals. As a result, leverage points were not being identified. This might have reduced efficiency, but the planners were not concerned with efficiency. Evaluations were not being conducted because the compiled plans were so modular that the planners had difficulty differentiating good from poor plans. This system resulted in modular rather than integrated plans. One advantage of this system was that if a plan needed to be changed at the last minute, it was fairly easy to make the shift without disrupting the rest of the units. We have seen in domains where the plans were highly integrated that changes in just one portion resulted in a ripple effect, and therefore discouraged improvisation.

By viewing planning as a type of problem solving and taking into account nonlinear aspects of problem solving, it should be possible to gain a richer appreciation of the planning process. Cognitive scientists have not given as much attention to planning as to problem solving, and there seems to be a good opportunity for research here. In particular, the functioning of distributed planning cells would appear to be worth examining in more depth.

Another topic to consider is strategic planning. Mintzberg (1994) has written a comprehensive account of the failures of strategic planning and of its inherent limitations. These limitations are consistent with the naturalistic decision-making perspective: support expertise, and be wary of the way decomposing tasks and performing context-free analyses can degrade intuition.

Key Points

Notes