4 The Power of Intuition

Intuition depends on the use of experience to recognize key patterns that indicate the dynamics of the situation. Because patterns can be subtle, people often cannot describe what they noticed, or how they judged a situation as typical or atypical. Therefore, intuition has a strange reputation. Skilled decision makers know that they can depend on their intuition, but at the same time they may feel uncomfortable trusting a source of power that seems so accidental.1

Bechara, Damasio, Tranel, and Damasio (1997) found that intuition has a basis in biology. They compared patients with brain damage to a group of normal subjects. The brain-damaged subjects lacked intuition: an emotional reaction to the anticipated consequences of good and bad decisions. In the normal subjects, this system seemed to be activated long before they were consciously aware that they had made a decision.

This chapter will describe the research we have done to understand how intuition is used during decision making. It will describe studies with tank platoon leaders, U.S. Navy officers, and nurses, as well as fireground commanders.

For the first formal interview that I did in our first research project with firefighters, I was trying to find some difficult incident where my interviewee, a fireground commander, had to make a tough decision. He could think of only one case, years ago, where he said his extrasensory perception (ESP) had saved the day. I tried to get him to think of a different incident because the one he had in mind was too old, because he was only a lieutenant then, not a commander, and because I do not have much interest in ESP. But he was determined to describe this case, so I finally gave up and let him tell his story.

Example 4.1
The Sixth Sense

It is a simple house fire in a one-story house in a residential neighborhood. The fire is in the back, in the kitchen area. The lieutenant leads his hose crew into the building, to the back, to spray water on the fire, but the fire just roars back at them.

“Odd,” he thinks. The water should have more of an impact. They try dousing it again, and get the same results. They retreat a few steps to regroup.

Then the lieutenant starts to feel as if something is not right. He doesn’t have any clues; he just doesn’t feel right about being in that house, so he orders his men out of the building—a perfectly standard building with nothing out of the ordinary.

As soon as his men leave the building, the floor where they had been standing collapses. Had they still been inside, they would have plunged into the fire below.

“A sixth sense,” he assured us, and part of the makeup of every skilled commander. Some close questioning revealed the following facts:

The whole pattern did not fit right. His expectations were violated, and he realized he did not quite know what was going on. That was why he ordered his men out of the building. With hindsight, the reasons for the mismatch were clear. Because the fire was burning beneath him rather than in the kitchen, it was unaffected by his crew’s attack; the rising heat was much greater than he had expected; and the floor acted as a baffle, muffling the noise and producing a hot but strangely quiet environment.

This incident helped us understand how commanders make decisions by recognizing when a typical situation is developing. In this case, the events were not typical, and his reaction was to pull back, regroup, and try to get a better sense of what was going on. By showing us what happens when the cues do not fit together, this case clarified how much firefighters rely on a recognition of familiarity and prototypicality. By the end of the interview, the commander could see how he had used the available information to make his judgment. (I think he was proud to realize how his experience had come into play. Even so, he was a little shaken since he had come to depend on his sixth sense to get him through difficult situations, and it was unnerving for him to realize that he might never have had ESP.)

The commander’s experience had provided him with a firm set of patterns. He was accustomed to sizing up a situation by having it match one of these patterns. He may not have been able to articulate the patterns or describe their features, but he was relying on the pattern-matching process to let him feel comfortable that he had the situation scoped out. Nevertheless, he did not seem to be aware of how he was using his experience because he was not doing it consciously or deliberately. He did not realize there were other ways he could have sized the situation up. He could see what was going on in front of his eyes but not what was going on behind them, so he attributed his expertise to ESP.

This is one basis for what we call intuition: recognizing things without knowing how we do the recognizing. In the simple version of the RPD model, we size the situation up and immediately know how to proceed: which goals to pursue, what to expect, how to respond. We are drawn to certain cues and not others because of our situation awareness. (This must happen all the time. Try to imagine going through a day without making these automatic responses.)
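To make the structure of this simple version explicit, here is a minimal sketch in Python. The fields of a pattern (cues, goals, expectancies, actions) follow the description above; the names, the matching rule, and the example pattern are my own illustrative assumptions, not a formal statement of the RPD model.

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    """A recognized situation type, following the simple RPD description."""
    name: str
    cues: set           # what we are drawn to notice
    goals: list         # which goals are plausible to pursue
    expectancies: list  # what should happen next if the match is right
    actions: list       # the typical course of action

def recognize(observed_cues, experience):
    """Return the first stored pattern whose cues fit the situation."""
    for pattern in experience:
        if pattern.cues <= observed_cues:  # all of the pattern's cues are present
            return pattern
    return None  # no match: the situation is atypical

# The lieutenant's kitchen-fire pattern fails to match, so the output is
# not an action but a signal to pull back and reassess (Example 4.1).
experience = [
    Pattern(
        name="kitchen fire",
        cues={"flames in kitchen", "water knocks fire down"},
        goals=["extinguish"],
        expectancies=["fire dies back after attack"],
        actions=["attack with hose"],
    )
]
match = recognize({"flames in kitchen", "fire roars back", "unusual heat"}, experience)
print(match.actions if match else "anomaly: withdraw, regroup, reassess")
```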

There may be other aspects of intuition beyond the one I have been describing. What I do know is that the firefighters’ experience enables them to recognize situations quickly.

Many people think of intuition as an inborn trait. I am not aware of any evidence that some people are blessed with intuition and others are not. My claim in this chapter is that intuition grows out of experience.

We should not be surprised that the commander in this case was not aware of the way he used his experience. Rather than giving him specific facts from memory, the experience affected the way he saw the situation. Another reason that he could not describe his use of experience was that he was reacting to things that were not happening rather than to things that were. A third reason that he was unaware of his use of experience was that he was not drawing on his memory for any specific previous experience. A large set of similar incidents had all blended together.

These are reasons why decision makers have trouble describing their intuition. Even researchers have problems with this concept. For example, in 1978, Lee Beach and Terry Mitchell presented a contingency model of decision making, arguing that the type of strategy a person uses will change depending on the context of the decision task. Sometimes people use rigorous analytical methods; sometimes they rely on nonanalytical methods. Beach and Mitchell had no trouble explaining what they meant by rigorous analytical methods, and they could point to a number of techniques being studied at the time. However, when they wanted to explain what they meant by nonanalytical methods, they came up short. They suggested things like “gut feeling,” tossing a coin, and even going “eeney meeny miney moe.”

Now we can say that at least some aspects of intuition come from the ability to use experience to recognize situations and know how to handle them. Described in this way, intuition does not sound very mysterious.2 In fact, the simple version of the RPD model is a model of intuition.

Intuition is an important source of power for all of us. Nevertheless, we have trouble observing ourselves use experience in this way, and we definitely have trouble explaining the basis of our judgments when someone else asks us to defend them. Therefore, intuition has a bad reputation compared with a judgment that comes from careful analysis of all the relevant factors, shows each inference drawn, and traces the conclusion in a clear line back to the antecedent conditions. In fact, research by Wilson and Schooler (1991) shows that people do worse at some decision tasks when they are asked to perform analyses of the reasons for their preferences or to evaluate all the attributes of the choices.

Intuition is not infallible. Our experience will sometimes mislead us, and we will make mistakes that add to our experience base. Imagine that you are driving around in an unfamiliar city, and you see some landmark, perhaps a gas station, and you say, “Oh, now I know where we are,” and (despite the protests of your spouse, who has the map) make a fateful turn and wind up on an inescapable entrance ramp to the highway you had been trying to avoid. As you face the prospect of being sent miles out of your way, you may lamely offer that the gas station you remembered must have been a different one: “I thought I recognized it, but I guess I was wrong.”

The fireground commanders we studied were aware that they could misperceive a situation. Even the commander who thought he had ESP did not make a habit of counting on it. The commanders rely on their expectancies as one safeguard. If they read a situation correctly, the expectancies should match the events. If they are wrong, they can quickly use their experience to notice anomalies.3 In the example of the vertical shaft fire, the commander walked out of the building as soon as he heard that the fire had spread beyond the second floor. He needed to get another reading about what was happening to the building. The commander who thought he had ESP was so discomfited when his expectancies were violated that he pulled his crew out of the building. These decision makers could formulate clear expectancies based on their experience, so that early in the sequence they could detect that they had gotten it wrong.

Firefighters are not the only ones who confuse intuition and experience with ESP. Naval officers also do it.

Example 4.2
The Mystery of the HMS Gloucester

In February 1992, I heard about a curious incident in which the HMS Gloucester, a British Type 42 destroyer, was attacked by a Silkworm missile near the end of the Persian Gulf War. Seconds after first detection, and before the identification procedure had been carried out, the officer in charge of air defense believed strongly that the radar contact was a hostile missile, not a friendly aircraft—even though the radar blip was indistinguishable from that of an aircraft, and the U.S. Navy had been flying airplanes through the same area. The officer could not explain why he believed this was a Silkworm missile. The experts who looked at the recordings later said there was no way to tell the two apart. Nevertheless, he insisted that he knew. And he shot the object down.

At the time, his captain was not so confident. We watched the videotape of the radar scope and listened to the voices. When the radar blip is destroyed, the captain asks hesitantly, “Whose bird was it?” (that is, who shot the missile that destroyed this unknown track?). The anti-air warfare officer nervously replies, “It was ours, sir.” For the next four hours the crew of the HMS Gloucester sweats out the possibility that they shot down an American plane.

The mystery of the HMS Gloucester was how the officer knew it was a Silkworm missile, not an aircraft.

In July and August 1993, I conducted a workshop on cognitive task analysis interviews for George Brander, a human factors specialist at the Defense Research Agency in the United Kingdom. (The methods of using cognitive task analysis for interviewing are discussed in chapter 11.) Brander arranged to have us practice the methods with actual naval officers. One of them was Lieutenant Commander Michael Riley, the anti-air warfare officer on the Gloucester who spotted the Silkworm.

We expected that Riley would be tired of going over the incident, but we found just the reverse. He was still puzzling it out, and he suggested to us that we focus our session around the Silkworm attack.

The facts were simple. The Gloucester was stationed around twenty miles off the coast of Kuwait, near Kuwait City. The Silkworm missile was fired around 5:00 A.M. As soon as he saw it, Riley believed it was a missile. He watched it closely for around forty seconds until he had gathered enough information to confirm his intuition. Then he fired the Gloucester’s own missiles and brought the Silkworm down. The whole incident lasted only around ninety seconds, and the Gloucester almost did not get its shot off in time. Riley confessed that when he first saw the radar blip, “I believed I had one minute left to live.” The puzzle was how he knew it was a Silkworm instead of an American A-6 aircraft. The Silkworm travels at around 600 to 650 knots, the same speed as the American A-6s returning from bombing runs. Both are around the same size and present the same profile on the radar scopes. They are the same size because of all the explosive the Silkworm carries. It is about as large as a single-decker bus, large enough to devastate a Type 42 destroyer like the Gloucester.
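Before looking at the cues, it is worth running the arithmetic on these figures to appreciate the time pressure. Here is a minimal sketch, assuming a straight twenty-nautical-mile run at a constant 600 knots (the actual engagement geometry was certainly messier):

```python
# Back-of-envelope time to impact, using the figures given in the text.
distance_nm = 20.0   # Gloucester's assumed distance from the launch area
speed_knots = 600.0  # Silkworm cruise speed: 600 nautical miles per hour
time_s = distance_nm / speed_knots * 3600

print(f"about {time_s:.0f} seconds to impact")
# -> about 120 seconds. With roughly forty of those spent confirming the
#    contact, Riley's sense of having "one minute left to live" was about
#    right, and the ninety-second incident left almost no margin.
```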

There are four ways to distinguish an American A-6 airplane from an Iraqi Silkworm missile.

The first way is location. The Allied forces knew the locations of the Iraqi Silkworm sites and of the naval ships. In theory, the airplanes should have returned to their aircraft carriers by preestablished routes, but the American pilots returning from bombing runs were cutting corners and flying over the Silkworm site in question. They had done so all through the previous day. Even worse, the British Navy ships had recently moved closer to shore, and the pilots had not yet taken this changed position into account, so the A-6s were frequently overflying the ships. Riley and others had insisted that the practice of overflying ships be ended, but he had not seen any change. So the first cue, location, was useless for identifying the radar blip.

Radar is the second way to distinguish airplanes from missiles. The A-6s were fitted with identifiable radar, but most of them did not have their radar on when they were returning (the radar would make them more easily detectable by the enemy). Thus, the absence of radar was not conclusive.

The third way is a special system, Identify Friend or Foe (IFF), which allows an aircraft to be electronically interrogated to find out its status. Pilots obviously shut it down as they approach enemy territory, because it would be a homing beacon for hostile missiles. They are supposed to switch it back on when they leave enemy territory so their own forces will know not to shoot them down. Yet after completing a bombing run and avoiding enemy defenses, many A-6 pilots were late in turning their IFF back on. So the absence of IFF did not prove anything either.

The fourth way is altitude. The Silkworm would fly at around 1,000 feet, and the A-6s at around 2,000 to 3,000 feet and climbing. Therefore, altitude was the primary cue for identification (unless an A-6 had damaged flaps and had to fly lower, but none had been seen coming in below 2,000 feet). Unfortunately, the Gloucester’s 992 and 1022 radars do not give altitude information. In fact, the ship had no radar that worked over land, so the first time it picked up a track was after the track went “feet wet” (i.e., flew off the coast and over water). These radars sweep vertically, through 360 degrees, until the radar operator spots a possible target. Only then can the Gloucester turn on the 909 radar, which sweeps in horizontal bands to determine roughly the altitude of the target. It takes about thirty seconds to get altitude information after the 909 radar is turned on. (Maddeningly, the Gloucester’s weapons director failed in his first two attempts to type in the track number: the first time because the track number changed just before he typed it in, and the second because he transposed the digits.) As a result, it was not until around forty-four seconds into the incident that the 909 informed Riley that the target was flying at 1,000 feet. Only then did he issue the order to fire missiles at the track. Yet he had felt it was a Silkworm almost from the instant he saw it, before the 909 radar was even initiated, and long before it gave him altitude information. Because there was no objective basis for his judgment, Riley confessed to us that he had come to believe it had been ESP.

You can see how little information there is here. To make matters worse, clouds of smoke particles from the burning oil fields were adhering to moisture in the air and obscuring the radars. The Gloucester’s mission was to protect a small battle force, including the USS Missouri, whose guns were pounding the Kuwait coast, some minesweepers clearing the way for the ships to get closer, and a few other ships as well. The Missouri wanted to get closer to the coast and on the day of the attack was only twenty miles off. And the closer it got, the less time the Gloucester had to react to a Silkworm attack.

Riley told us about the background to the incident. The war was ending, with American-led forces driving up the coast toward Kuwait City. Soon they would overrun the Silkworm site. The constant shelling from the Missouri was taking its toll. Also, the Allies had just flown a helicopter feint: large numbers of helicopters, launched off carriers, staged a mock attack and then flew back. Riley had earlier run a mental simulation, putting himself in the minds of the Iraqi Silkworm operators. If they did not fire their missiles soon, they would lose any chance; there was nothing to save the missiles for. And they had a nice, fat target, the Missouri. Had Riley been a Silkworm operator, this is when he would have fired his missiles.

The Gloucester’s crew had been working for more than a month on a six-hour-on, six-hour-off schedule: six hours of staring at radar screens, then six hours to eat, perform other tasks, and grab some sleep. Fatigue had been building up the whole time. Riley’s shift had started at midnight, so his crew had been going for five hours. Because Riley had imagined what he would do if he were running the Silkworm site, he believed they were at greater risk than at any earlier time. Perhaps an hour before the attack, he warned his crew to be on their highest alert, because this was when the Iraqis were most likely to fire at them. He repeated the warning at perhaps 4:55 A.M. As a result, the crew was ready when the missile came.

When we pressed Riley about what he was noticing when he first spotted the radar blip, he said that he knew it was a missile within the first five seconds. Since a sweep of the 992 radar takes around four seconds, that means he identified it by the second sweep. Riley said he felt it was accelerating, almost imperceptibly. That was the clue. The A-6s flew at a constant speed, but this track seemed to be accelerating as it came off the coast. Fortunately, there were no other air tracks at the time, so he and his crew could give it their full attention. Otherwise, he doubts he would have noticed the acceleration.

That should have wrapped things up—except that after Riley left the interview, we discovered some inconsistencies in his account. First, he had no way to judge acceleration until he had seen at least three radar sweeps: he needed to compare the distance between sweeps 1 and 2 to the distance between sweeps 2 and 3, to see which was larger. Even more troubling, the distances the track covered between successive sweeps were the same over its entire course. We could not see any signs of acceleration, nor could the experts who analyzed the tape. So, by objective measures, there was no indication of acceleration.

We also wondered about Riley’s sense that he knew it was a missile almost from the first contact. That first blip was recorded a little way off the coast, because the ground clutter had masked the missile until it flew far enough over water. This took one or two sweeps. Then the 992 radar picked up the track. What was there about that track that alerted Riley? We watched the tape again and again, trying to figure it out. Eventually we succeeded. Rob Ellis, from the Defense Research Agency at Farnborough, realized what it was. (Before reading on, you may want to reread the information and see what you come up with. All the relevant information has been presented.)

Ellis tried to figure out why a track would look as if it were accelerating when it really was not, and before all the necessary information was in. He realized that the one difference between an A-6 and a Silkworm was altitude: 1,000 feet versus around 3,000 feet. Until a track came off the coast, it was masked by ground clutter, and the Gloucester was twenty miles away. Ellis reasoned that the 992 radar would pick up a track flying at 3,000 feet earlier than one flying at 1,000 feet; the lower track would be masked by the ground for a longer time. That could mean that the higher tracks, at 3,000 feet, showed up on radar by the second sweep after they went feet wet, whereas the Silkworm, flying at 1,000 feet, would not give a radar return until the third sweep. Perceptually, the Silkworm would first be spotted farther off the coast than the A-6s had been. The Gloucester’s crew, and Riley, were accustomed to A-6s. They knew the location of the Silkworm site, and they were looking for a radar blip coming from that direction, at a certain distance off the coast. Instead, Riley saw a blip farther off the coast than usual. That caught his attention and chilled him. The second radar return showed the usual spacing for a track flying around 600 to 650 knots. Compared with how far that first blip had already come, it must have felt as if the track was moving very fast when it came off the coast and then seemed to slow down. Riley must have had a sense of great acceleration as he confounded altitude with speed.
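A small numerical sketch can make Ellis’s hypothesis concrete. Every number here is an illustrative assumption rather than a measurement from the tape: a four-second sweep, a constant 600-knot ground speed for both tracks, and one extra sweep of ground-clutter masking for the low-flying missile.

```python
# Sketch of the masking illusion Ellis proposed (illustrative numbers only).
SWEEP_S = 4.0                              # seconds per radar sweep (assumed)
SPEED_KN = 600.0                           # ground speed of both tracks
NM_PER_SWEEP = SPEED_KN / 3600 * SWEEP_S   # ~0.67 nm of travel per sweep

def blips(masked_sweeps, n=4):
    """Distance off the coast at each sweep on which the track is visible."""
    first = masked_sweeps + 1
    return [NM_PER_SWEEP * k for k in range(first, first + n)]

a6 = blips(masked_sweeps=0)        # high flyer: visible one sweep after feet wet
silkworm = blips(masked_sweeps=1)  # low flyer: hidden by ground one extra sweep

print("A-6 blips (nm off coast):     ", [round(d, 2) for d in a6])
print("Silkworm blips (nm off coast):", [round(d, 2) for d in silkworm])

# If the observer assumes a track appears one sweep after crossing the coast
# (the familiar A-6 pattern), the Silkworm's first blip implies a huge speed:
apparent_kn = silkworm[0] / SWEEP_S * 3600
print(f"apparent initial speed: {apparent_kn:.0f} knots, "
      f"then {SPEED_KN:.0f} knots on later sweeps")
```

The spacing between later blips is identical for the two tracks, which fits what the tape showed: no objective acceleration. The anomaly lies entirely in where the first blip appears.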

We asked Riley if he wanted to hear our hypothesis, and when we explained it, he agreed that we might have solved the riddle. Although the 992 radar does not scan for altitude, a skilled observer could infer altitude from how far off the coast a blip first appeared once the track went feet wet.

In this example, as in the previous one, it was a mismatch or anomaly that the decision maker noticed. Perhaps such instances are difficult to articulate because they depend on a deviation from a pattern rather than on the recognition of a prototype.

We have seen the use of intuition with firefighters and U.S. Navy officers, and each time the decision makers had trouble describing it. The next section illustrates the point using another domain, nursing.

The Infected Babies

In this project, we studied the way nurses could tell when a very premature infant was developing a life-threatening infection. Beth Crandall, one of my coworkers, had gotten funding from the National Institutes of Health to study decision making and expertise in nurses. She arranged to work with the nurses in the neonatal intensive care unit (NICU) of a large hospital. These nurses cared for newborn infants who were premature or otherwise at risk.

Beth found that one of the difficult decisions the nurses had to make was to judge when a baby was developing a septic condition—in other words, an infection. These infants weighed only a few pounds—some of them, the microbabies, less than two pounds. When babies this small develop an infection, it can spread through their entire body and kill them before the antibiotics can stop it. Detecting sepsis as quickly as possible is vital.

Somehow the nurses in the NICU could do this. They could look at a baby, even a microbaby, and tell the physician when it was time to start the antibiotic (Crandall & Getchell-Reiter, 1993). Sometimes the hospital would do tests, and they would come back negative. Nevertheless, the baby went on antibiotics, and usually the next day the test would come back positive.

This is the type of skilled decision making that interests us the most. Beth began by asking the nurses how they were able to make these judgments. “It's intuition,” she was told, or else “through experience.” And that was that. The nurses had nothing more to say about it. They looked. They knew. End of story.

That was even more interesting: expertise that the person clearly has but cannot describe. Beth brought out the methods we had used with the firefighters. Instead of asking the nurses general questions, such as, “How do you make this judgment?” she probed them on difficult cases in which they had had to exercise these judgment skills. She interviewed nurses one at a time and asked each to relate a specific case where she had noticed an infant developing sepsis. The nurses could recall incidents, and in each case they could remember the details of what had caught their attention. The cues varied from one case to the next, and each nurse had experienced a limited number of incidents. Beth compiled a master list of sepsis cues and patterns of cues in infants and validated it with specialists in neonatology.

Some of the cues were the same as those in the medical literature, but almost half were new, and some cues were the opposite of sepsis cues in adults. For instance, adults with infections tend to become more irritable. Premature babies, however, become less irritable. If a microbaby cried every time it was lifted up to be weighed and then one day it did not cry, that would be a danger signal to the experienced nurse. Moreover, the nurses were not relying on any single cue. They often reacted to a pattern of cues, each one subtle, that together signaled an infant in distress.

Some of the Costs of Field Research

The project with the NICU nurses was draining for our staff. The problem was not solely the minor shocks of inadvertently witnessing distressing medical procedures or of seeing such tiny babies struggling to survive. It was the strain on the nurses themselves. More than half of the nurses interviewed cried at some point as they recalled infants who had not made it, signs they should have seen but missed, and even babies who had close calls. None of our other studies were as emotionally demanding as this one.

Other studies posed their own challenges. Once we had a chance to observe and interview the command staff of a Forest Service team trying to bring under control a forest fire that had spread over six mountains in Idaho. One member of the research team was Marvin Thordsen.

In Idaho, Marvin was attached to a team in charge of the fire on one of the mountains. He dutifully tagged along, listening and looking agreeable. After a few days of this, Marvin was sitting in on one of their planning meetings. The team got to an issue and realized that they had already settled it several days earlier, but no one could remember what they had decided. Marvin could listen only so long before he broke down. Knowing that he was violating the observer’s creed of watching without intervening, he flipped back a few pages in his notebook and read to them what their plan had been. Jaws dropped open as the team discovered how helpful it is to have someone serve as their official memory. By the end of his stint with the Forest Service, he was included as part of the planning team. Before finishing their meetings, they would ask him if he had anything to add.

Marvin was a witness when another member of our research team, Chris Brezovic, tear-gassed himself during our study of tank platoon leaders being trained at Fort Knox, Kentucky. As their training scenario was winding down, Chris was standing with one of the instructors, a sergeant. Part of the exercise was to train the platoon leaders to don their protective gear quickly in case of chemical weapons. As the sergeant tossed out several canisters of tear gas, the wind shifted. It collected the tear gas and lifted it up in a cloud that began to move slowly toward Chris and the sergeant, neither of them wearing gas masks or protective gear. Chris knew about tear gas, and his first impulse was to run away while there was still time. But the sergeant was standing firm, and Chris decided that he did not want to be the first one to run. The sergeant, for his part, decided that he did not want to look like a coward in front of observers. He would stay until this civilian could not take it anymore, and then he would leave. So they both stayed where they were while the cloud got closer. As the cloud floated directly overhead, the wind plunged the tear gas down at them. They tried to run, but it was too late. They collapsed, coughing uncontrollably, their faces covered with tears and mucus.

In terms of pure adrenaline rush, I do not think any of our experiences can exceed the time we were studying airport baggage screeners to understand how they interpreted X-ray images. Their difficult job is to identify weapons that may be rotated in any direction and may be superimposed on other metal objects, and they need to make the judgment in only a few seconds. During this project, sponsored by the Federal Aviation Administration, two of our researchers, Steve Wolf and Dave Klinger, traveled to Cleveland to observe and conduct interviews. Approximately ten minutes after they arrived at the checkpoint, a baggage screener saw something suspicious in the bag that a large, expensively dressed man had sent through the machine. The baggage screener did not know it was a gun (if she had, she would have locked it into the machine), so she asked the man if she could open the bag. Security guards were summoned and quickly confiscated a gun, then led the man off in handcuffs. Our researchers looked on, stunned. The odd part of the story is that the man had not intended to smuggle the weapon at all. He was carrying drugs and had forgotten to remove his .38 when he arrived at the airport.

Applications

The part of intuition that involves pattern matching and recognition of familiar and typical cases can be trained. If you want people to size up situations quickly and accurately, you need to expand their experience base. One way is to arrange for a person to receive more difficult cases. A fireground commander in a small, rural community may get little direct experience. Recall the case of the Christmas fire where the commanders acted like novices compared to the two consultants. In contrast, firefighters in a large city with many old buildings can get a tremendous amount of experience in a short time.

Another approach is to develop a training program, perhaps with exercises and realistic scenarios, so the person has a chance to size up numerous situations very quickly. A good simulation can sometimes provide more training value than direct experience: it lets you stop the action, back up to see what went on, and cram many trials together so a person can develop a sense of typicality. Another training strategy is to compile stories of difficult cases and make these the training materials.

In the NICU project, when Beth Crandall showed her findings to the nurse manager of the unit, the manager asked if Beth would present the cues to the unit’s nurses in a training program. Beth objected that all the items on the list of critical cues had come from the nurses themselves. The nurse manager was not deterred: because this type of perceptual expertise was not being shared or compiled, it was particularly hard for new nurses on the unit to master. Beth eventually developed training materials to illustrate all of the critical cues that the nurses could use to diagnose when an infant was in the early stages of an infection. There were different ways to present these materials, such as simple lists of cues, but Beth embedded the cues in the stories themselves so that the nurses could see how each cue appeared in context.

Another project of Beth’s involved myocardial infarction, commonly known as heart attack. We had been working in the area of cardiopulmonary resuscitation, investigating how paramedics develop perceptual skills. The paramedics we interviewed said they could judge whether a person was actually having a heart attack or just suffering from indigestion. They also said they could tell when a person was going to have a heart attack, days or even months ahead. At first we did not give this much thought; it sounded like the commanders who insisted they had ESP. But after we heard it from more than one source, we paid attention. For instance, a paramedic described a family gathering where she saw her father-in-law for the first time in many months.

“I don't like the way you look,” she said.

“Well, you don't look so great yourself,” was his answer.

“No, I really don't like the way you look,” she continued. “We're going to the hospital.”

He grudgingly agreed to go the next day, but she insisted they go right then. An examination showed a blockage of a major artery. By the next day, he was having surgery to clear the blockage.

Many of us think of the heart as a balloon. A person is walking along just fine and then, ping, something snags the balloon and the person goes down with a heart attack. This metaphor is not accurate. The heart is a pump, with thick, muscular walls. It does not burst, like a balloon. Instead, it clogs up, like a pump. Sometimes it clogs quickly, as when a clot lodges somewhere (here the balloon metaphor may apply). When it clogs slowly, as in congestive heart failure, there are signs. Areas of the body that are less important get less blood. By knowing what these areas are and by being alert to patterns across several of them, you can detect a problem in advance. The skin gets less blood and turns grayish; that is one of the best signs. The wrists and ankles show swelling. The mouth can look greenish. Our interviews with physicians, paramedics, and others turned up these indicators and several others. We also scanned the literature and verified our list of critical cues with specialists.

Perceptual cues can be defined. This expertise does not have to remain in the realm of the specialist or in the intuition of the nurse or paramedic who has seen similar cases. It should be possible to train ordinary citizens to look at each other and recognize when a friend or coworker is starting to show the signs of impending heart problems.

Hugh Wood (program chair for emergency incident policy and analysis curriculum) and Carol Bouma (instructional design specialist) at the Federal Emergency Management Agency/National Fire Academy have been using the RPD model to revamp the academy’s curriculum in order to give commanders more training in pattern matching and in recognizing situations. In the U.S. Marine Corps, Lieutenant General Paul Van Riper has guided organizations such as the Marine Corps Combat Development Command at Quantico, Virginia, to support intuitive decision making. The marines are beginning to use rapid pattern-matching exercises developed by Major John Schmitt (U.S. Marine Corps Reserves) and other officers. In both places, the emphasis on pattern matching has seemed more useful than lessons on the formal analysis of alternative options.

Notes