5 The Power of Mental Simulation

During a visit to the National Fire Academy we met with one of the senior developers of training programs. In the middle of the meeting, the man stood up, walked over to the door, and closed it. Then in a hushed voice he said, “To be a good fireground commander, you need to have a rich fantasy life.”

He was referring to the ability to use the imagination, to imagine how the fire got started, how it was going to continue spreading, or what would happen using a new procedure. A commander who cannot imagine these things is in trouble.

Why did the developer close the door before he revealed this ability? Because the idea of using fantasy as a source of power is as embarrassing as the idea of using intuition as a source of power. He was using the term fantasy to refer to a heuristic strategy that decision researchers call mental simulation, that is, the ability to imagine people and objects consciously and to transform those people and objects through several transitions, finally picturing them in a different way than at the start. This process is not just building a static snapshot. Rather, it is building a sequence of snapshots to play out and to observe what occurs.

Here is a simple exercise that calls for mental simulation. Figure 5.1 shows two pictures: a truck on the ground—the initial state—and a truck in the air supported by a column of concrete blocks—the target state. Can you think of a way that you, working alone, can get that truck up in the air on a column of blocks? You need to do this all by yourself, using an inexhaustible pile of concrete blocks and a jack, the only machine you are permitted.


Figure 5.1 Truck levitation

One way to imagine the task is shown in figure 5.2. You begin with the initial state. Then you jack up the right rear tire, slip a set of blocks underneath that tire, remove the jack, and use it on the left rear tire. You repeat the process until all the tires have been raised. You build a pile of blocks underneath the center of the truck. Then you (carefully) remove all of the blocks supporting each tire so that you wind up with the target state.


Figure 5.2 Transition sequence

This is an action sequence because you are changing from one state to another, bridging the gap between the initial state and the target state. Some people do this with visual imagery, perhaps what is shown in figure 5.2. Others claim that they figure it out logically without ever using visual images.
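The idea of an action sequence can also be made concrete in code. The sketch below is purely illustrative (the state labels and function names are mine, not from the text): it represents the truck exercise as a start state transformed step by step into the target state, playing the sequence of snapshots forward as in figure 5.2.

```python
# Illustrative sketch: the truck exercise as an explicit action
# sequence -- an initial state transformed through transitions
# until it matches the target state.

TIRES = ("right rear", "left rear", "right front", "left front")

def jack_up_and_block(state, tire):
    """Jack up one tire and slip a set of blocks underneath it."""
    new = dict(state)
    new[tire] = "on blocks"
    return new

def build_center_pile(state):
    """Build a pile of blocks underneath the center of the truck."""
    new = dict(state)
    new["center"] = "pile of blocks"
    return new

def remove_tire_blocks(state):
    """Carefully remove the blocks under each tire; the truck now
    rests on the center pile, tires in the air."""
    new = dict(state)
    for tire in TIRES:
        new[tire] = "in air"
    return new

# Initial state: truck on the ground, nothing under its center.
state = {tire: "on ground" for tire in TIRES}
state["center"] = "empty"

# Play the sequence of snapshots forward.
for tire in TIRES:
    state = jack_up_and_block(state, tire)
state = build_center_pile(state)
state = remove_tire_blocks(state)

assert state["center"] == "pile of blocks"  # target state reached
```

Running the transitions in order bridges the gap between the initial and target states, which is all an action sequence is.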

Example 5.1
The Car Rescue

The leader of the emergency rescue team is called out to save a person who crashed his car into a concrete pillar supporting an overpass. Other firefighters are already on the scene when he arrives, and they are bringing out the Jaws of Life, a hydraulic machine that can be inserted into a narrow place to exert great force that widens the opening. It is designed to force open things like car doors that have become crumpled in an accident, trapping the victim.

The head of the rescue team goes over to the car to investigate. The driver is the only person in the car, and he is unconscious. The commander walks around the car to test each door, but each is badly damaged and will not open. Using the Jaws of Life to pry open the doors will be difficult.

During his investigation, the commander has noticed that the impact has severed most of the posts holding up the roof of the car. He begins to wonder if they can lift off the roof and then slide the driver out rather than fighting their way through the doors. He tries to imagine how that might be done. He imagines the roof being removed. Then he visualizes how they will slide the driver, where rescue workers will stand to support the driver’s neck, how they will turn the driver to maneuver around the steering column, and how they will lift him out. It seems to work. He runs through the sequence again to try to identify any problems but can’t find any. He had heard that rescues could be made this way, but he had never seen it.

He explains to his crew what they need to do, and the rescue works out as he had imagined. The only problem is that the driver’s legs become wedged underneath the steering wheel, and additional firefighters have to reach in to unlock his knees.

The car rescue is an example of a mental simulation that worked out well. The earlier example of the harness rescue (example 3.2) showed a case where the mental simulation was incomplete; it neglected the fact that the ladder belt could not be pulled tightly enough.

Mental simulation does not always work, as the next example shows.1

Example 5.2
The Libyan Airliner

It is almost 2:00 P.M., February 21, 1973, and a Libyan airliner that took off from Bengazi airport is heading toward its destination, Cairo. At least, that is what its crew believes. In fact, the plane is just passing over Port Touafic, at the southern end of the Suez Canal. It is about to cross over to the Sinai Peninsula, occupied by Israel at that time. The plane is seriously off course.

Israeli radar has spotted the plane. The Israeli Defense Forces are on alert because of warnings of a terrorist attack—a plan, it is said, to hijack an airplane and explode it over a populated area. Possible targets include the city of Tel Aviv, the nuclear installation at Dimona, and other civilian or military targets. The Israelis note that this airplane is deviating from the typical air corridors and violating Egyptian airspace. It has flown above the most sensitive spots of the Egyptian war zone, yet no Egyptian MiGs are being scrambled to investigate. No Egyptian ground-to-air missiles are fired, although the Egyptians are supposed to be on full alert. The Egyptian communications do not refer to the intruding plane. The Israelis know that the Egyptians have a very sensitive early warning system. A month earlier, they had shot down an Ethiopian plane that had flown over an Egyptian war zone by mistake.

At 1:54 P.M., the aircraft penetrates into the Israeli war zone in the Sinai Desert, cruising at 20,000 feet. The plane is flying a route referred to by the Israelis as “hostile,” one used by Egyptian fighters in their intrusions.

At 1:56 P.M., two Israeli F-4 Phantom fighters are sent to investigate. At 1:59 P.M., the F-4s intercept the plane. They do not see any passengers since all the window shades are down. The F-4 pilots identify the plane as an airliner with Libyan markings. They can see the Libyan crew in the cockpit. They are certain the Libyan crew recognizes them by the Shield of King David markings on their planes. One of the F-4 pilots reports that the airplane’s copilot looked directly into his eyes.

Using the international practice of signaling by radio and rocking their wings, the F-4s signal for the airliner to land at Refidim air base. The intercepted plane is supposed to respond by following the instructions, notifying the appropriate air traffic services unit, and establishing radio communications with the interceptor. The Libyan airplane performs none of these actions. The air crew does seem to convey by hand signals that they understand the request and intend to obey; nevertheless, the airliner continues to head northeast.

At 2:01 P.M., the F-4s fire some tracer shells in front of the airliner’s nose. The airliner turns toward Refidim air base and descends to 5,000 feet. The pilot lowers the landing gear. Suddenly he turns back toward the west and increases altitude, as if he is trying to escape. The F-4s fire warning shots in front of him, but he continues. The Israeli generals monitoring the situation decide that the airplane is indeed on a terrorist mission, and they are determined to prevent its escape. They direct the F-4s to force it to land.

At 2:08 P.M., the F-4s shoot at the airliner’s wing tips. Even after its right wing tip is hit, the airliner continues west. The F-4s begin firing at the wing bases, and finally the airliner attempts a crash landing. The pilot is almost successful but slides into a sand dune. Of more than a hundred passengers and crew, only one person survives.

The Israeli Perspective

The airliner is not in direct communication with Cairo airport. It receives no attention as it flies over sensitive Egyptian locations, and for these and other reasons, it is identified as a hostile intrusion.

The Israelis try to imagine how they can be dealing with a legitimate civilian airliner. A captain responsible for the safety of passengers avoids even the slightest risk. That is why pilots obey hijackers so quickly. Given this mind-set, any legitimate crew would land when clear signals are given.

This plane does not land. Here is what Israeli Air Force General “Motti” Hod said: “The captain sees our Israeli insignia on our F-4s. He sees Refidim airfield ahead. He knows we want him to land, since he lowers his landing gear. Then he retracts the landing gear and flies off! At first he doesn't fly directly west but turns to circle the air base. We interpret this as an attempt to make a better approach. Then he turns and starts to fly west. That is when we order our F-4s to start firing tracer bullets, which the crew must see. Still they keep going west. No genuine civilian captain would behave in such a manner. In contrast, a crew on a terrorist mission would show such behavior. Therefore, we must prevent their escape and force them to land.

“Moreover, recent history just confirmed our beliefs. A few months earlier, an Ethiopian airliner had strayed into the Egyptian ground-to-air missile system and was shot down. An American private airplane also was shot down by missiles above the delta. Several other planes were fired on when they penetrated areas that maps warn pilots to avoid. Pilots have learned to become familiar with these free-fire zones and to stay far away from them. Yet here was a plane flying right through them!”

The Israelis try to imagine how a civilian airline captain would behave in the ways they were observing. They cannot fit the pieces together. In contrast, they can fit the pieces together if they imagine that the plane is on a terrorist mission. The Israelis’ diagnosis of the situation is not very difficult, given the clear-cut nature of the evidence.

The Airliner Perspective

During the afternoon, a sandstorm covers the sky of Egypt. The airliner is on a routine flight. The captain and the flight engineer are French, the copilot Libyan. The black box from the airplane is later recovered and shows that the pilot and flight engineer are conversing in French; the copilot is not sufficiently fluent to participate in the discussion. The captain and flight engineer are drinking wine and do not realize they have deviated more than seventy miles from their planned route.

At 1:44 P.M., the captain begins to have his doubts about their course. He talks to the flight engineer, but not the copilot, about these doubts. He does not report his worries to Cairo airport. Instead, at 1:52 he receives permission from Cairo airport to start his descent.

At 1:56 P.M., the captain is still uncertain about his actual position. He tries to receive beacon signals from Cairo airport, and those he receives are contrary to the ones he expects according to his flight plan. Nevertheless, he continues flying as scheduled.

Between 1:59 and 2:02 P.M., the crew achieves radio contact with Cairo airport and explains its difficulties in receiving the radio beacon and its inability to receive the Cairo nondirectional beacon. Cairo airport believes that they are close and directs them to descend to 4,000 feet.

When the Israeli Phantoms come up, the crew identifies them as Egyptian fighters. The Libyan copilot reports, “Four MiGs behind us.” The Egyptians fly Soviet MiGs; the Israelis do not.

When the Israeli pilot approaches the airliner and gives hand signals to land, the Libyan copilot reports this to his colleagues. The captain and the flight engineer gesticulate angrily about the rudeness of the MiGs. These may have been the hand signals the pilot reported. The captain and the flight engineer continue to speak in French.

There are two airfields in the Cairo area: Cairo West, an international airport, and Cairo East, a military base. The crew interprets the fighters’ actions as warning them that they have overshot Cairo West and are over the military base. The air crew interprets the fighters as a military escort. When they begin to descend at Refidim air base, they see that it is a military base, so they realize they must be at Cairo East. They decide it would be a mistake to land at Cairo East, so they turn toward Cairo West.

At 2:09 P.M., the captain reports to Cairo control: “We are now shot by your fighter.” This is unthinkable, since at the time Egypt and Libya were on excellent terms. When they are fired on again, they think the Egyptian fighters are crazy. Why would an Egyptian fighter shoot a Libyan civilian airliner? In their story, the fighters are friendly escorts, making sure they do not land in the wrong place, and they have changed course obediently. Now they are fired on unexpectedly and are unable to build this into their story.

They are still trying to figure this out when, just before they crash, the Libyan copilot finally identifies the fighters as Israeli planes. The black box does not reveal how they incorporate that fact into their diagnosis of the situation.

What happened here? The Israeli generals ran into a situation that exceeded their wildest fantasies. Similarly, the crew of the airliner was not able to imagine what was happening to them until the very end. In this example, mental simulation is used differently than in the car rescue. In that case, mental simulation was used to imagine how a course of action would be played out into the future. Here, the Israeli generals were using mental simulation to imagine what could have happened in the past to account for the strange goings-on. Mental simulation about the past can be used to explain the present. It can also be used for predicting the future from the present.

Beth Crandall and I have been interested in mental simulation for several years because the process seems so central to decision making and because we keep finding it in expert performance.

We found that as early as 1946 Adriaan de Groot had studied the mental simulation of chess masters. Two decision researchers, Kahneman and Tversky (1982), had written a paper on the simulation heuristic, based on laboratory studies. They described how a person might build a simulation to explain how something might happen; if the simulation required too many unlikely events, the person would judge it to be implausible.2

With funding from the Army Research Institute, Beth and I did an exploratory study on mental simulation to learn more about its nature. Our idea was to gather and examine a set of cases to see if there were any regularities. For the most part, the cases came from our own records (the harness rescue case, the car rescue), and we also included cases from other sources, such as the Libyan airliner incident, as well as examples from a book by Charles Perrow, Normal Accidents (1984), which gives the details behind a variety of disasters and breakdowns. In addition, we performed some informal interviews and asked the people in our company to volunteer examples.

After we had collected all the cases, we reviewed them for commonalities and differences. Then we tried to code them for features such as time pressure, experience level of the person, use of visual versus nonvisual simulations, and so on. We dropped about 20 percent of the cases because the description did not clearly show how the mental simulation was being used. We could imagine how the person might have been using mental simulation, but when it was a stretch, we were not sure if we were studying the person’s fantasies or our own. That left us with seventy-nine cases.

These cases showed the same patterns (Klein & Crandall, 1995). The people were constructing mental simulations almost the way you build a machine: “Here is the starting point. Then this kicks in, which changes that, and then this other thing happens, and you wind up there.” It's like designing a watch or a mousetrap. For the overpass harness rescue, the fireground commander imagined how they would lower the ladder belt down, then lift the woman up by an inch, slide the belt under her, buckle it, then lift it up. For the car rescue, the commander imagined how they would lift off the roof, climb in the car, support the man's neck, slide him away from the steering column, grab hold of his arms and legs, and swivel him up and out of the car.

We noticed something else: the mental simulations were not very elaborate. Each seemed to rely on just a few factors—rarely more than three. It would be like designing a machine that had only three moving parts. Perhaps the limits of our working memory had to be taken into account. Also, there was another regularity: the mental simulations seemed to play out for around six different transition states, usually not much more than that. Perhaps this was also a result of limited working memory. If you cannot keep track of endless transitions, it is better to make sure the mental simulation can be completed in approximately six steps.

This is the “parts requirement” for building a mental simulation: a maximum of three moving parts. The design specification is that the mental simulation has to do its job in six steps. Those are the constraints we work under when we construct mental simulations for solving problems and making decisions. We have to assemble the simulation within these constraints.
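The parts requirement and design specification can be stated as a small budget check. This sketch is my own illustration of the constraint, not anything from the study; the limits and the example variables (drawn from the Polish-economy case) are the only grounded parts.

```python
# Illustrative sketch of the "design specification" described above:
# a mental simulation built from at most three moving parts and
# played out in roughly six transitions.

MAX_PARTS = 3        # working memory can track only a few factors
MAX_TRANSITIONS = 6  # and play out only a handful of steps

def build_simulation(moving_parts, transitions):
    """Reject a simulation that exceeds the working-memory budget."""
    if len(moving_parts) > MAX_PARTS:
        raise ValueError("too many moving parts to track")
    if len(transitions) > MAX_TRANSITIONS:
        raise ValueError("too many transitions to play out")
    return {"parts": list(moving_parts), "steps": list(transitions)}

# A simulation with three factors and six transitions fits the budget:
sim = build_simulation(
    moving_parts=["inflation", "unemployment", "foreign exchange"],
    transitions=["rapid inflation", "inflation eases",
                 "unemployment rises", "exchange rate worsens",
                 "exports improve employment", "exchange rate stabilizes"],
)
```

A simulation with a fourth moving part or a seventh transition would be rejected, which is roughly what happens when a mental simulation bogs down: it has to be chunked or abstracted until it fits the budget again.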

Of course, there are ways of avoiding the constraints. If we have a lot of familiarity in the area, we can chunk several transitions into one unit. In addition, we can save memory space by treating a sequence of steps as one unit rather than representing all the steps. We can use our expertise to find the right level of abstraction. For example, in the car rescue, the team leader could count the removal of the car roof as one step without imagining the coordination needed to make sure no one got hurt. If he had included the position where everyone would stand before lifting the roof, how they would step in unison, and so forth, the simulation might have bogged down. If he were worried about this particular step, he might do a separate mental simulation for it, to satisfy himself that the roof could be quickly disposed of. The more of these side trips he had to take, the more he would need to hold in his mind. Another strategy for overcoming memory limits is to write things down and draw diagrams to keep track of the transitions.

If the moving parts interact with each other in each transition, you have to remember a lot more since you have to keep track of the parts themselves at each point. Even diagrams start to fall apart as more and more arrows are drawn to represent interactions. We heard a lot about this from some software programming experts we interviewed. Part of a quality inspection team, they had recently finished inspecting a program with over 900,000 lines of code. Each person on the team inspected around 5,000 lines of code a day. They told us that a software program is like a giant machine, with many different moving parts (all the variables) and fixed parts (the operators that transform the variables). Since the code was written down, they did not have to remember all the moving parts; the variables were listed in front of them. The inspection was to imagine how the machine was going to work when the program started running. If the program was linear, as in the truck example in figures 5.1 and 5.2, the job was not too hard. If the variables interacted with each other, the job of visualizing the program in action became quite difficult. So this is another challenge in building a mental simulation. We search for a way to keep the transitions flowing smoothly by building a simulation that has as few interactions as possible.

Considering all these factors, the job of building a mental simulation no longer seems easy. The person assembling a mental simulation needs to have a lot of familiarity with the task and needs to be able to think at the right level of abstraction. If the simulation is too detailed, it can chew up memory space. If it is too abstract, it does not provide much help.

We came across a few cases where people could not assemble a simulation. For instance, the Israeli generals failed to construct a simulation of how a genuine commercial airline pilot would take so many risks. Because of this failure, the generals concluded that the airplane was not a legitimate airliner. Cases like this gave us some idea about how experience is needed to build a mental simulation.

We wanted to study how people failed to build a mental simulation, so we tried something else. We asked people to generate mental simulations in front of us. This was how we came to study the Polish economy.

The Polish Economy

The Polish economy received little coverage in the news media when it was reformed, yet it is one of the boldest experiments of our time. In 1989 the Polish government, freed from control by the Soviet Union, realized that socialism was leading it nowhere, so the government made preparations to convert to a market economy. On January 1, 1990, the new Polish government decreed the switch to capitalism. From that moment, the government would no longer use state companies to provide meaningless jobs. They allowed zlotys to be traded on the open market. They let inflation and unemployment run uninhibited. It was a dramatic moment in the retreat from communism.

The announcement about Poland’s switching to a market economy came as we were gathering cases of mental simulation. I realized that I could find some experts and ask them to simulate mentally what was going to happen in Poland during the coming year. Would the reform succeed, or would the Poles have to back down?

Fortunately I was able to interview a bona fide expert: Andrzej (pronounced Andrei) Bloch, an associate professor of economics at Antioch University. He is Polish and received all his education up through his B.A. in Poland. (He got his Ph.D. in the United States.) He makes regular trips back to Poland.

Before I tell you about Andrzej, think about what you would have said if I had interviewed you. Could you have set up a mental simulation for the Polish economy during the year 1990? Could you do it now, for the coming year? Where would you start? What would you include in the simulation? Most people shrug their shoulders. If they give any answer at all, it is to parrot back something they have heard on TV: “I hear there is more expansion in Eastern Europe these days.”

Andrzej created wonderful simulations. Without prompting, he boiled it down to three variables: the rate of inflation, the rate of unemployment, and the rate of foreign exchange. I asked Andrzej to imagine how the Polish economy would do on these three variables by quarter for the year 1990. According to Andrzej, since the government was not going to fight inflation artificially, the inflation rate was going to zoom up from its (then) current rate of 80 percent a year to an annual rate of about 1,000 percent for a few months. (This meant prices would increase around 80 percent a month instead of 80 percent a year.) Goods were going to become quite expensive. Prices would rise faster than wages. Quickly, people would not be able to afford to buy very much, so demand would fall, and the prices would stabilize. He estimated that this would take about three months. To put things in perspective for me, he noted that food shortages were the traditional source of unrest in Poland and Russia; people were more likely to protest food shortages than a lack of political freedom. If they could not afford to buy bread, that might cause the government to collapse. Nevertheless, he felt that the euphoria over the Solidarity movement was high enough and that the period of sharp inflation would be short enough so there would not be problems on this score. When I reviewed his predictions with him a year later, we found that this was accurate. He had accurately called the sharp increase up to 1,000 percent for January and February, as well as the downturn to around 20 to 25 percent by April and thereafter.

Next, he considered unemployment. If the government had the courage to drop unproductive industries, many people would lose their jobs. This would start in about six months as the government sorted things out. The unemployment would be small by U.S. standards, rising from less than 1 percent to maybe 10 percent. For Poland, this increase would be shocking. Politically, it might be more than the government could tolerate and might force it to end the experiment with capitalism. When we reviewed his estimates, we found that unemployment had not risen as quickly as he expected, probably, Andrzej believed, because the government was not as ruthless as it said it would be in closing unproductive plants. Even worse, if a plant was productive in areas A, B, and C and was terrible in D and E, then as long as they made a profit, they continued their operations without shutting down areas D and E. So the system faced a built-in resistance to increased unemployment.

Finally, he looked at foreign exchange, which he saw as a balancing force. As the exchange rate got worse, increasing from 700 zlotys per dollar to 1,500 zlotys per dollar, people would find foreign goods too expensive so they would buy more Polish items. Similarly, outsiders would find that Polish-made items were a bargain, so exports would boom, increasing employment and improving economic health. He thought this might take a few years to accomplish, if at all. He expected that during 1990, the exchange rate would continue to increase, eventually to 1,400 zlotys per dollar. He expected that the government would intervene at that point. During the year I noted that the zlotys went up to around 900 per dollar and stayed there. Andrzej had been too pessimistic. In 1991, I discussed this with him, and he felt that the problem again was that the government was softening the blows. Had the full market economy shift been made as advertised, the rate would have increased much faster, and the shift would have been finished much quicker.

This mental simulation depended on three factors and on a few transitions (rapid inflation, reduced level of inflation, gradual rise in unemployment and loss in exchange rate, improved employment, and finally, stabilized exchange rate).

Andrzej was not finished. He estimated the likelihood of success for this market economy experiment at 60 percent. A virtuoso at simulating Polish futures, he generated pessimistic mental simulations and showed how the experiment could fail. He switched to political simulations.

Buoyed by this example, I lined up two more people to interview, neither as expert as Andrzej. The first was one of Andrzej’s best students, who had visited Eastern Europe recently. The other was a professor of political science at another university; this man had spent a sabbatical in Poland many years earlier. Neither could generate any mental simulations at all. They considered only two variables, inflation and unemployment, so there was no balancing factor such as foreign exchange. Worse, they did not know the current rate of inflation or unemployment, and they had no sense for what would count as a high rate for either variable. The student thought that there would be tough times but that the experiment would succeed, which made him happy. The professor was a Marxist who thought the Poles were making a big mistake by retreating from the wave of the future. He thought there would be tough times and that the government would have to dismantle the market economy, which made him happy.

The implications of this minor sideline in an exploratory study are clear: without a sufficient amount of expertise and background knowledge, it may be difficult or impossible to build a mental simulation. The expert, despite his desire to see the market economy experiment work, could imagine different ways for it to fail and to anticipate early warning signs. He told me about several (e.g., if the rate of inflation does not come down below an annual rate of 50 percent by April, start worrying). Incidentally, the Polish economy did fairly well during its first year as a market economy. Inflation stayed at a reasonable level, unemployment did not increase too much, and foreign exchange remained stronger than expected. The experiment seems to be working, and as of this writing Poland has become the first former communist country to enjoy economic growth.

The example of the Polish economy shows how difficult it is to construct a useful mental simulation. But once it is constructed, it is impressive. We do this all the time in areas about which we are knowledgeable. We do it to imagine how a supervisor will react, or how to repair a car, or to explain the way a neighbor has been behaving. We do it without giving much thought to what an important source of power we have in our ability to construct mental simulations on demand. In technical fields, people may spend hundreds of thousands of dollars, and sometimes over a million dollars, to build computer simulations of complex phenomena. The computer is needed to keep track of all the variables and the interactions and all the different pathways that are possible. The programs are not as limited by memory capacity as we are. However, these programs are extremely specialized. They run simulations only in the single area for which they have been programmed. In contrast, you and I carry around a multipurpose mental simulator that adapts to all kinds of different problems and requires virtually no reprogramming time. True, it has a limited memory capacity, but its versatility is astounding.

Models of Mental Simulation

Figure 5.3 shows a generic model of mental simulation. It shows the two types of needs: to explain the past and project the future. It shows that we specify the parameters by pinning down the initial state (if we are projecting the future), the terminal state (if we are explaining the past), both initial and terminal states (if we are trying to figure out how the transformation occurred), and the causal factors that drove the transformation.


Figure 5.3 Generic model of mental simulation

In assembling the action sequence, figure 5.3 reminds us that mental simulations generally move through six transitions, driven by around three causal factors. Once the person tries to assemble the action sequence, he or she evaluates it for coherence (Does it make sense?), applicability (Will it get what I need?), and completeness (Does it include too much or too little?). If everything checks out, the action sequence is run and applied to form an explanation, model, or projection. If the internal evaluation turns up difficulties, the person may reexamine the need and/or the parameters and try again.
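The loop in figure 5.3 can be sketched as a small procedure: assemble an action sequence, evaluate it for coherence, applicability, and completeness, and either run it or go back and try again. This is a minimal sketch under my own assumptions; the check functions are placeholders supplied by the caller, not part of the model.

```python
# A sketch of the figure 5.3 loop: assemble, evaluate, then run
# the sequence or reexamine and revise it. Function names are
# illustrative, not from the source.

def evaluate(sequence, checks):
    """Return the name of the first failed check, or None if all pass.
    Typical checks: coherence (does it make sense?), applicability
    (will it get what I need?), completeness (too much or too little?)."""
    for name, check in checks.items():
        if not check(sequence):
            return name
    return None

def mental_simulation(assemble, checks, revise, max_attempts=3):
    """Build an action sequence and evaluate it; revise on failure."""
    sequence = assemble()
    for _ in range(max_attempts):
        failure = evaluate(sequence, checks)
        if failure is None:
            return sequence              # run and apply the sequence
        sequence = revise(sequence, failure)  # reexamine parameters
    return None                          # could not build a simulation
```

If the internal evaluation keeps turning up difficulties, the procedure gives up, which corresponds to the cases where people could not assemble a simulation at all.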

The cases Beth and I reviewed fit into two major categories: the person was trying to explain what had happened in the past, or trying to imagine what was going to happen in the future. We developed models of each that were variants of the generic model. Incidentally, we coded the seventy-nine incidents of mental simulation independently and found that we could reliably code the incidents into the same categories.

Explaining a Situation

For cases where a person was trying to explain what had happened in the past, the reason was either to make sense of a specific event (such as a juror’s trying to figure out if the evidence showed that the defendant had committed the crime) or to make sense of a general class of events by deriving a model (such as Einstein’s imagining how a beam of light shining through a hole in an elevator might seem to curve if the elevator was moving). Figure 5.4 shows a specific version of the model that describes how we explain a chain of prior events, or a state of the world.

Figure 5.4 Using mental simulation to generate an explanation

Consider this example. Some need arises for building a mental simulation; let us say a coworker has suddenly started acting rudely toward you. The simulation has to let you infer the original situation that led to the events you are observing. You assemble the action sequence: the set of transitions that make up the simulation. Perhaps you recall an incident that same morning when you were chatting with some other people in your office and said something that made them laugh. Perhaps you also recall that earlier that morning, your coworker had confided some embarrassing secret to you. So you construct a sequence in which your coworker trusts you with a confidence, then regrets it immediately afterward and feels a little awkward around you, then sees you possibly entertaining some other people with the secret, and then feels that it is going to be unbearable to live with you in the same setting. Now you can even remember that after you made the other people laugh, you looked up and saw the coworker giving you a look that made you feel uneasy. This set of states and transitions is the action sequence, the mental simulation that explains the rude behavior.

The next step is to evaluate the action sequence at a surface level. Is it coherent (Do the steps follow from each other)? Yes, it is. Does it apply (Does the sequence account for the rudeness)? Yes, it does. How complete is it (Does it leave out any important factors, such as the excellent performance evaluation you have just received)? Yes, there are some more pieces that might belong in the puzzle. But in general, the mental simulation passes the internal evaluation. It is an acceptable explanation. That does not mean it is correct.

Sometimes the mental simulation will not pass the internal evaluation, and that also helps you make sense of things. Example 5.3 illustrates this with a story reported in a newspaper.

Example 5.3
The IRA Terrorist

A well-respected lawyer has agreed to defend a man accused of committing an act of terrorism: planting a bomb for the IRA. The lawyer, asked why he would take the case, answers that he interviewed the accused man, who was shaking and literally overcome with panic. He was surprised to see the man fall apart like that. He tried to imagine the IRA’s recruiting such a person for a dangerous mission and found that he could not. He cannot conjure up a scenario in which the IRA would give a terrorism assignment to a man like this, so his conclusion is that the man is innocent.

This lawyer could not generate an action sequence that passed his internal evaluation—specifically, the requirement that the transition between steps be plausible. His failure to assemble a plausible sequence of steps led him to a different explanation than the prosecutors had formed. That’s why you see a long, curving arc in figure 5.4: the failure to assemble the mental simulation was the basis of the conclusion.

There are also times when you use mental simulation to try to increase your understanding of situations like these. You are trying to build up better models. When you run the action sequence in your mind, you may notice parts that still seem vague. Maybe you can figure out how to set up a better action sequence, or maybe there are some more details about the present state that you should gather. Going back to the example of your coworker, your explanation has not included the fact that you received such a good performance evaluation. What was your coworker’s performance evaluation? Possibly the coworker felt you had gotten recognition for work that someone else had done. Perhaps you can get a general sense of the situation by talking to your boss. That might give you some more data points for building your explanations.

Projecting into the Future

In many cases, decision makers try to project into the future, either to predict what is going to happen and perhaps to prepare for it (manufacturers bidding on a new part who try to imagine how they will make the part and how long that will take) or to watch a potential course of action to find out if it has any flaws (e.g., the car rescue).

Figure 5.5 shows how you can try to build a bridge from your present condition to a future one. You know the initial state and you are trying to imagine the target state. Sometimes you also have a good picture of the target state, as in the truck example, and your job is to figure out how to convert one into the other. What is new here is the way you run and review the action sequence. Recall the car rescue. The team leader put his plan under a microscope, scrutinizing each step to see if there could be a problem. He was trying to find pitfalls in advance. In the end, he evaluated his plan based on the nature and severity of the problems that he found.

Figure 5.5 Using mental simulation to project into the future

Sometimes we form a general impression that a plan is going to work out, without putting the plan under a microscope. We recognize aspects of it that match typical cases that worked for us in the past or led to difficulties. So we use intuition to form an emotional reaction of optimism or worry. In his research with chess masters, Adriaan de Groot found that they frequently formed these global impressions of whether a line of play was going to work, even before they studied the sequence.

Once you have evaluated the action sequence, which is usually a planned course of action, you may try to modify the plan to overcome the pitfalls; or you may decide it cannot be salvaged, so you reject it; or you may carry it out. In some cases, the purpose of the mental simulation is to make a prediction (e.g., how long a process will take), so you run the action sequence through in your mind and form a judgment. Finally, you can use mental simulation to prepare for carrying out a course of action, by rehearsing what you are going to do.

We have left the truck in figure 5.2 dangling in the air long enough. The parameters are that you know the initial state and the target state; your job is to transform one into the other. You assemble the action sequence of what is going to change from one state to the next. Then you evaluate this sequence. Does it make sense? If you are simply trying to make up a plausible plan, then you have succeeded. If you are going to carry this plan out, then your evaluation must be a little more careful. You might notice that the truck can roll forward or backward each time you jack it up. Maybe you can use some of the blocks as wedges around the tires. You might also notice that all the weight of the truck in the target state seems to be resting on a single point. Is there any place on the bottom of a truck that can bear such weight? Perhaps it will be safest to build a very large central platform to spread the weight out more.

The mental simulation study that I have been describing was done for purposes of exploration, and so the proposed model has to be considered tentative until we conduct more research. Nevertheless, since completing this exploratory study, we have examined mental simulation more carefully in other research projects and have not come across anything to contradict the models shown in figures 5.3, 5.4, and 5.5.

Mental simulations, run from the past into the present or the present into the future, can stretch to help us infer a missing cause, a missing effect, or a bridge between the two. They can also mislead us.

How Mental Simulations Can Fail

The biggest danger of using mental simulation is that you can imagine any contradictory evidence away. The power of mental simulation can be used against itself.

Consider the example of the rude coworker. You believe the rudeness comes from paranoia that you have divulged the coworker’s secret. To test this explanation, you ask a mutual friend to ask the coworker the reason for the rude behavior. The friend reports back that your coworker was not aware of any rudeness, harbors no anger toward you, and never thought that you might reveal confidential information. Does that stop you? Not for a second. The coworker was not aware of rudeness, but who ever is? Similarly, not feeling any anger at all sounds suspiciously like denial. That makes you even more convinced of your explanation. You do not really believe the claim that the coworker never thought you might divulge a confidence, since everyone who tells a secret must have these worries; more likely, the coworker did not want to appear paranoid in front of the mutual friend and was faking good. Or maybe the mutual friend is not so trustworthy after all. Maybe this so-called mutual friend is not telling you the truth on purpose, to lull you into a trap, perhaps to get you to tell the precious secret. Maybe they are all in on it!

We can ask what evidence you would need to give up your explanation. Sadly, the answer is that if you are determined enough, you might never give it up. You can continually save the explanation by making it more comprehensive and complicated. In the nineteenth century the British scientist Sir Francis Galton tried an experiment to see if he could experience what it was like to be paranoid. He tried to maintain the belief that everyone he encountered was plotting against him. Two people talking suddenly look up at him? They are part of the plot. A horse in the park shies away when he comes into view? Even the animals are against him. Galton continued this for part of the day but had to give up the exercise before the day was over. His paranoid explanations were becoming so convincing that they were beginning to get out of control, and he became frightened for his own sanity.

Perrow (1984) has described similar cases that have resulted in major accidents. He refers to them as de minimus explanations—explanations that try to minimize the inconsistency.3 The operator forms an explanation and then proceeds to explain away disconfirming evidence. The following example describes an incident that took place on the Mississippi River.4 Figure 5.6 depicts the incident.

Figure 5.6 Track lines of Trademaster and Pisces

Example 5.4
The Trademaster and the Pisces

The plot is simple: a ship in a safe passing situation suddenly turned and was impaled by a cargo ship.

To understand how such a thing could occur, remember all the times you have been walking down a hall, approaching someone coming from the other direction. You try to move aside, only to find that the other person has moved in the same direction; you both zig back at the same time, then zag in unison again. Finally you both come to a full stop, smiling and resorting to hand signals to sort out the passage. In the case of the Trademaster and the Pisces, the ships’ momentum did not allow them to stop.

The Mississippi is sufficiently well mapped for ships as large as these (600 feet long, 24,000 and 33,000 tons) to be able to figure out their passing patterns. Even before they see each other, even as they are rounding the final bend, the two captains radio their arrangement to pass starboard to starboard (with their right sides closest).

Then the complications start. The captain of the Pisces sees that he is going to be overtaken by a tug and will have to steer too closely to rafts of barges on his left. Therefore, he uses his radio to request a port-to-port passage with the Trademaster, at the same time turning to his right to get in position. Unhappily, the captain of the Trademaster never gets the message. He sees the Pisces swing out and figures that the captain will correct his error soon enough. He does not want to turn right, since he expects the Pisces to go in that direction any moment. Instead, he turns farther left himself, to give the Pisces more room to shift back. The captain of the Pisces, wondering why the Trademaster is swerving out so far, turns even more sharply to give the Trademaster more room. That is how they collide.

This simple case shows how disconfirming evidence is explained away. A ship that should be staying to your right swings over to your left? No problem; you have seen it before. There is no reason to think anything might be wrong, until it is too late.

Scientists also fall prey to de minimus explanations. The following example shows how easy it is to invent an explanation that discounts some inconvenient observations.5

Example 5.5
The Disoriented Physicists

Two physicists associated with the Aspen Center for Physics are climbing in the Maroon Bells Wilderness near Aspen, Colorado. While descending, they lose their bearings and come down on the south side of the mountain instead of the north side, near Aspen. They look below them and see what they identify as Crater Lake, which they would have spotted from the trail leading home. One of them remarks that there is a dock on the lake. Crater Lake does not have a dock. The other physicist replies, “They must have built it since we left this morning.”

A couple of days later they reach home.

I do not count it as a weakness of mental simulations that they are sometimes wrong. My estimate is that most of the time they are fairly accurate. Besides, they are a means of generating explanations, not of generating proofs.

I do count it as a weakness of mental simulations that we become too confident in the ones we construct. One reason for problems such as de minimus explanations that discard disconfirming evidence is that once we have built a mental simulation, we tend to fall in love with it. Whether we use it as an explanation or for prediction, once it is completed, we may give it more credibility than it deserves, especially if we are not highly experienced in the area and do not have a good sense of typicality. This “overconfidence” effect has been shown in the laboratory by Hirt and Sherman (1985). They asked subjects to generate cartoon sequences for the big Penn State versus University of Pittsburgh football game. The subjects were then asked to rate their confidence in one of these teams’ actually winning the game. The result was that the confidence rating was affected by the cartoon sequence. Subjects who imagined a sequence in which Penn State won the game rated Penn State’s actual chance as higher than the subjects who imagined a sequence where the University of Pittsburgh won the game.

Mental simulation takes effort. Using it is different from looking at a situation and knowing what is happening. Mental simulation is needed when you are not sure what is happening so you have to puzzle it out. When you are pressed for time, you may not do as careful a job in building or inspecting the mental simulations you have constructed. That shortcoming, however, does not argue for any other approach. If you want to deduce every inference logically, you will still run into time barriers.

A final shortcoming is that we have trouble constructing mental simulations when the pieces of the puzzle get too complicated—there are too many parts, and these parts interact with each other. If we are trying to repair a piece of equipment and keep testing it to find out what is wrong, we have a lot of trouble if more than one thing is broken. Once we find one fault, we may be tempted to attribute all the symptoms to it and miss the other fault, so we fix just the problem we know about, and the result is that the equipment still does not work.

Despite these limitations, mental simulation allows us to make decisions skillfully and solve problems under conditions where traditional decision analytic strategies do not apply.

People can also draw on some self-correcting characteristics of mental simulation, to overcome some of the limitations that were described above. We often have a general sense of whether the simulation is becoming unrealistic.

Marvin Cohen (1997) believes that mental simulation is usually self-correcting through a process he has called snap-back. Mental simulation can explain away disconfirming evidence, but Cohen has concluded that it is often wise to explain away mild discrepancies, since the evidence itself might not be trustworthy. However, there is a point when we have explained away so much that the mental simulation becomes very complicated.6 At this point, we begin to lose faith in the mental simulation and reexamine it. We look at all of the new evidence that had been explained away to see if another simulation might make more sense. Cohen believes that until we have an alternate mental simulation, we will keep patching the original one. We will not be motivated to assemble an alternate simulation until there is too much to be explained away. The strategy makes sense. The problem is that we lose track of how much contrary evidence we have explained away, so the usual alarms do not go off. This has also been called the garden path fallacy: taking one step that seems very straightforward, and then another, and each step makes so much sense that you do not notice how far you are getting from the main road. Cohen is developing training methods that will help people keep track of their thinking and become more aware of how much contrary evidence they have explained away so they can see when to start looking for alternate explanations or predictions.

Here is an example of snap-back. Occasionally I engage in orienteering, which takes place in nature preserves. I am given a map with markers at designated points. I need to find those points, punch the card with the special punches located at each point to show that I was there, and orient my way around the course. For beginners, the course follows paths in the woods. When you move to a higher level, you are on your own, crossing streams and scrambling up hills. Once when I was leaving a designated way point, I absentmindedly turned west when I meant to turn east. Soon none of the terrain features matched my topographic map. Nevertheless, I was able to continue for a surprisingly long time. A small creek appeared where none was marked. Must have been new since the map was made. A curving path was supposed to be straight. It was sort of straight, for a small section. On I plunged, forcing the terrain to fit my map, reinterpreting the cues I was supposed to be seeing. Eventually I came to a section that I could not explain away except by turning the map 180 degrees, and that violated north and south unless my compass had also broken. That was the moment of snap-back; the accumulated strain of pushing away inconvenient evidence caught up with me.

Applications

Cohen, Freeman, and Thompson (1998) have been working on a “crystal ball” method to help people become sensitive to alternate interpretations of a situation. They ask officers to describe an explanation in which they felt high confidence. Then the researchers pretend to peer into a crystal ball and inform the officers that the explanation was wrong. The crystal ball does not show why it was wrong. The officers have to sift through the evidence and come up with another explanation, and perhaps another. In doing so, they see that the same evidence can be interpreted in different ways.

My coworkers and I use a similar method to help people anticipate what will happen when a plan is put into action. We call it the premortem strategy. The idea comes from some work we did on the amount of confidence people place in a plan they have imagined. We hypothesized that people may feel too confident once they have arrived at a plan, especially if they are not highly experienced. You can ask them to review the plan for flaws, but such an inspection may be halfhearted since the planners really want to believe that the plan lacks flaws. We devised an exercise to take them out of the perspective of defending their plan and shielding themselves from flaws. We tried to give them a perspective where they would be actively searching for flaws in their own plan. This is the premortem exercise: the use of mental simulation to find the flaws in a plan.7

Our exercise is to ask planners to imagine that it is months into the future and that their plan has been carried out. And it has failed. That is all they know; they have to explain why they think it failed. They have to look for the causes that would make them say, “Of course, it wasn’t going to work, because....” The idea is that they are breaking the emotional attachment to the plan’s success by taking on the challenge of showing their creativity and competence by identifying likely sources of breakdown.

It takes less than ten minutes to get people to imagine the failure and its most likely causes. The discussion that follows can go on for an hour. We have tested this premortem method, and it seems to reduce confidence in the original plan, as we expected. We have also come to include it as a part of the kickoff meeting for new projects, to help us identify the trouble spots.

We have also suggested that the premortem approach be used during military planning exercises. In a study of team decision making in army helicopter missions, we observed the mission rehearsal. The simulated mission was to cross the battle lines into enemy territory, drop off some troops, and return to home base. The drop zone was being pounded by artillery; a one-minute period was defined during which the artillery would stop and the troops would be delivered. This sounds simple enough, but imagine flying around hills, avoiding enemy anti-air batteries, getting lost, getting found, and still arriving at the drop zone during the one-minute window. Out of twenty crews, only one made it through the window. Yet during the mission rehearsal, none of the crews we observed asked what they should do if they arrived too early or too late. It never occurred to them that the plan might not work and that they should prepare some contingency plans. Our recommendation was that mission rehearsal should include a premortem rehearsal to imagine ways the mission could run into trouble.

Royal Dutch/Shell, a large European petroleum company, has described how it used mental simulation to construct future scenarios of the world economy. During the early 1970s, executives of oil companies expected that the future would be like the past: consistent growth in supply and demand, with a small amount of variability. The Royal Dutch/Shell Group planning department discovered that there was going to be a discontinuity. Supplies would fall as demand increased, so that by 1975 there would be a major increase in the cost of oil. (The forecast was made before the 1973 oil crisis in which political events speeded up the adjustment.) Anticipating the jump in prices was the easy part. The hard part was to convey this change to the executives at Royal Dutch/Shell.

Pierre Wack (1985a, 1985b), the head of the planning department, described their strategy of building decision scenarios. These scenarios are like mental models except that they are written down, charted out, and developed to change the way executives think. The problem with forecasts and conventional scenarios is that they try to provide answers. Decision scenarios, in contrast, were built to describe the forces that were operating so the executives could use their own judgment. “Scenarios,” wrote Wack (1985a), “must help decision makers develop their own feel for the nature of the system, the forces at work within it, the uncertainties that underlie the alternate scenarios, and the concepts useful for interpreting key data” (p. 140). Typically the decision scenarios featured only a few variables, usually around three, and a few transitions, rarely more than five or six.

The planning group presented a set of scenarios so the senior executives would not fixate on one. The ideal number of scenarios in this setting was three. The first one corresponded to the mental model actually held by the higher executives. This scenario was dubbed the “three-miracle scenario,” since it revealed that the model depended on three unlikely assumptions that all had to hold for that scenario to work. The other two scenarios showed different ways of seeing the world. The point of these scenarios was not to get it right but to illustrate the forces at work. Moreover, the two alternate views rested on different plausible assumptions. The planning group found that if the two alternatives differed on a single dimension, managers would just split the difference. This example shows how mental simulations can gain force when made explicit; the executives responded more favorably to decision scenarios than to forecasts based on statistics and error probabilities. They abandoned their incremental strategy, and their response successfully anticipated the sharp price increases.8

The study of mental simulation is also relevant to consumer psychology. In a marketing research project for a large company, we studied how consumers imagined a product in action. We were trying to learn why they adopted different practices for using the product and to anticipate how they might react to a new type of product. Many consumers could not formulate a mental simulation to describe how some common products really worked. If they came up with any answer, it was likely to be a snippet of animation from a commercial. The consumers were like the novices who could not construct simulations of the Polish economy. We should be careful in assuming that consumers know how products work. Some were using the product inappropriately, getting unsatisfactory results, and blaming the product.

Key Points

Notes