After the attack on Pearl Harbor in 1941, the Japanese commander, Mitsuo Fuchida, was surprised by its success. He asked, “Had these Americans never heard of Port Arthur?” That event, which was famous in Japan, had preceded the Russo-Japanese War of 1904. The Japanese tactic was to destroy the Russian Pacific fleet at anchor at Port Arthur in a surprise attack. The tactic worked, and to the amazement of the world, the Japanese won the war (Wohlstetter, 1962).
People use analogues and metaphors to perform a variety of difficult tasks: understanding situations, generating predictions, solving problems, anticipating events, designing equipment, and making plans. An analogue is an event or example drawn from the same or a related domain as the task at hand; a metaphor comes from a markedly different domain. Each experience we have, whether it is our own or one we have heard about from someone else, can serve as an analogue or a metaphor. Each time we take on a task, we can draw on this vast knowledge base, this bank of experiences and stories and images. We may overlook an analogue, select a misleading one, or fail to interpret one correctly. Usually our experience bank works smoothly, providing us with structure and interpretation even for tasks we have not been faced with before.
In the early 1980s, engineers given the task of estimating how long the auxiliary power unit of a B-1 aircraft would last before it had to be repaired looked up the data for the same unit on a C-5A airplane to get a ballpark figure. The C-5A auxiliary power unit was an analogue they used for prediction.
Around the same time, Steve Jobs and Steve Wozniak were designing the Macintosh computer, using ideas they had seen at the Xerox Palo Alto Research Center, Hewlett-Packard, and other places. They adopted the idea of making the interface of the Macintosh work like a desktop. Users could see folders, maneuver a mouse to pick up one folder and put it in another, and so forth. The concept of a desktop was a metaphor for using the Macintosh interface.1
Once, metaphors were treated simply as adornments to language. An English teacher would cite a favorite example, perhaps Burns’s “My love is like a red, red rose,” and we were all supposed to lean back in our seats, giddy with the fragrance of the metaphor. (This is actually a simile because it uses like, but we need not worry here about such details.) However, researchers have been trying to understand the way metaphors affect our thinking, as well as our emotional reactions.2 They have found that metaphors affect what we see and how we interpret it. Lakoff and Johnson (1980), in their book Metaphors We Live By, showed that metaphors govern the way we think about issues. The metaphor ARGUMENTS ARE LIKE WAR tells us we should be attacking each other’s positions, particularly the weak points, and defending our own weaknesses. If we used the metaphor ARGUMENTS ARE LIKE MUSIC PRACTICE, we would be using arguments as an opportunity to find out how we are each contributing to disharmonies.
Lakoff (1986) has shown that a woman who complains to her boyfriend that “this relationship isn’t going anywhere” is using the metaphor of a journey for assessing the relationship. A journey has a beginning, a clear ending, and an expectation of continual progress. If progress ceases, then it is no longer a journey—or in this case, a worthwhile relationship. A different metaphor, that a relationship is an arch, would convey that the pieces achieve more together than they could individually. They transcend themselves, and success is seen as stability rather than change and progress.
Lakoff and Johnson and other researchers have made us aware of the metaphors that guide our thinking. A metaphor uses a base domain, a familiar domain, to achieve situation awareness, that is, to interpret and understand a new domain. Political debate can be seen as a struggle for the defining metaphors. In the United States, when there is an opportunity or need to help a small country in the midst of a civil war, the metaphor of a schoolyard may be introduced. One of the stronger students steps between two smaller children having a scrap, letting them cool off before they can do too much damage. This is a heroic image. The countermetaphor is usually the tar baby image from the Uncle Remus stories—an image of trying to help but getting stuck further and further in the tar, until it is impossible to get free. This is a frightening and vivid metaphor, usually tied to the Vietnam experience.
Metaphor does more than adorn our thinking. It structures our thinking. It conditions our sympathies and emotional reactions. It helps us achieve situation awareness. It governs the evidence we consider salient and the outcomes we elect to pursue.
If this framework makes sense, then it should produce some useful implications. To find out, Dent-Read, Klein, and Eggleston (1994) conducted a study to understand how we might use metaphor to design equipment.3 We began by interviewing the leading metaphor researchers to see what suggestions they might make. None of them had any helpful ideas, and some were discouraging, arguing that metaphors cannot be put to work in this way.
Then we shifted to designers themselves, visiting and interviewing people identified as producing innovative systems and interfaces. Here we got the opposite answer: that metaphor was at the core of these designers’ work. Some designers explained that every time they considered an interface, they had to consider the metaphors that would be uppermost in the users’ minds. For the first word processors, the metaphor was the typewriter, and the job of the interface designer was to build on that metaphor while avoiding dissimilarities. The keyboard was the same, but users did not have to hit the carriage return at the end of each line, and that was going to take some getting used to. They also had to issue warnings about periodically saving a file so it would not be lost in case the system crashed. This was not a problem with typewriters.
One designer, John Reising of the U.S. Air Force’s Flight Dynamics Laboratory, showed us different schemes for helping pilots navigate a safe path through a dangerous airspace. The planners could often determine an effective route in advance, so the task was to help the pilots follow it. One approach Reising showed us was to use the metaphor of a highway in the sky. His displays portrayed a highway on the screen, and the pilots guided their airplanes to fly on this highway. A U.S. Navy approach was to present a phantom airplane on the screen, flying the selected route. Here the pilots flew in formation with this phantom, following it wherever it went. If they were going to have to start speeding up, the phantom’s afterburners would light up. If the pilot was going too fast, the phantom would deploy its drag chute, ordinarily used on landing to slow down.
We learned that effective metaphors were the ones that helped to organize action. They traded on well-learned behaviors, such as flying formation, driving on a highway, or moving folders on a desktop, so that the new task could be performed smoothly using coordination skills that had already been developed.
We also found metaphors that did not help very much, such as a hospital metaphor to show pilots which subsystem of their aircraft was sick. For these ineffective metaphors, the organization was conceptual; it did not coordinate actions. By noting that the hydraulic system was “sick,” the interface was not offering anything useful to pilots. Pilots do not have the well-learned reactions to sickness that they have to flying in formation.

We can also see that metaphor can be used for training. On an airplane trip, I was once sitting next to a woman studying a tennis book. The book specified the proper form for each stroke. For hitting the ball at the net, the legs were supposed to be so many inches apart, shown in an illustration as the length of a racket. The arms should be at the proper height, elbows flexed at the proper angle (the illustration showed a protractor overlaid on the elbow), racket head facing the proper direction, and so forth. I could not imagine how anyone could ever swing at the ball while trying to remember so many rules. I compared that approach to the advice I once heard from an experienced instructor that “hitting a ball at the net is like pushing a pie into someone’s face.” In short, it was not like stroking the ball, as in hitting a forehand. It was a choppy motion. Although few of the students had actually pushed a pie into anyone’s face, the metaphor was compelling. It achieved its effect of coordinating arms, legs, body, and timing for a difficult shot that most of the students had previously been hitting poorly. The instructor also taught the students that to hit a backhand in tennis they should imagine they were throwing a Frisbee. Again, the metaphor provided an overall image for body flow in performing the action. When an action is decomposed into its elements, the chore of coordinating those elements comes at the end. With a metaphor, the overall coordination is the starting point.
In trying to understand how people solve ill-defined problems, one strategy is to try to reach a goal while simultaneously trying to define the goal, using failures to specify the goal more clearly. There is a second strategy: to find an analogy that suggests features of the goal. For instance, if my car will not start, my goal is to get the engine running. If I recall a time when I left the lights on by mistake and killed the battery, I might figure out that the lights run directly off the battery. Therefore, I could test whether the battery is okay by turning on the lights to see if they work. If they do not, then my problem is not the vague one of getting the engine started but the specific one of overcoming a dead battery. Analogical reasoning can also suggest options. If I drive a car with a manual transmission, perhaps I recall a time when I saw someone start such a car by letting it roll downhill, to get the engine started even without a working battery. But there are no hills near me. Perhaps I can have my passengers push the car to create the necessary force.
There are several hypotheses about how analogical reasoning works. One approach was the work of Robert Sternberg (1977), who studied the components of solving analogy problems of the form a:b::c:d—for example, dog is to flea as shark is to (whale, remora, eel, squid). The correct answer would be remora, since it attaches itself to the shark, rides along with it, and depends on it for food. Sternberg’s research was carefully done and interesting, but this model of the components is not very helpful, for two reasons. One is that he was working on fairly artificial types of problems, with limited context. Even worse, Sternberg gave his subjects the second term of the analogue. He told them what analogue to use. In contrast, real problem solvers have to find an appropriate analogy themselves. In most cases, finding a good analogy is the hardest part of the job.
Another approach was offered by Amos Tversky (1977), who had proposed a method for defining similarity in terms of the number of elements two items shared in common. This method could explain how people judged degree of similarity and satisfied themselves that a similar case was worth using. The difficulty is that every pair of items shares an infinite number of features in common. Take this book and your left shoe. They are both closer to the moon than to the sun, both closer to the sun than to the center of the Milky Way, both lighter than your car, both larger than your mouth, and so on, forever. Merely counting common features would not work either. The features in common had to be important ones. They had to be features that had the same cause-and-effect relationships.
Julian Weitzenfeld (1984) determined that similarity does not make sense without purpose.4 If your purpose is to start a car with a dead battery, letting it roll downhill and pushing it on a level stretch of highway are similar actions. If your purpose is to start a car that is out of gasoline, letting it roll down a hill to a nearby telephone booth is not the same as pushing it on a level stretch of highway. You cannot just look at the two actions—letting the car roll and actively pushing it—and determine if they are similar. It depends on what you want to do. A coffee can and an inner tube are both similar as containers for transporting gasoline. They are dissimilar as ways of maintaining an even pressure on the tires of a bicycle.
Weitzenfeld and I studied the way people actually used analogies.5 We had discovered a community of engineers at Wright-Patterson Air Force Base who make extensive use of analogies to solve difficult problems. While Julian and I were thinking about how people apply analogues, this community was doing it every day.
The engineers we studied were faced with an important and difficult task: predicting the repair rates of the component parts of new airplanes before the airplanes were ever built. If the engineers overestimated the reliability, the air force would not stock up on enough spare parts; the airplanes might have to stop flying while manufacturers made more spares. If the engineers underestimated reliability, then the air force would have warehouses filled with unnecessary spare parts and waste taxpayers’ money. It was important to make accurate predictions.
The engineers would have preferred to use an analytical method to figure out the reliability, but there were no good analytical methods for making the predictions. In 1971, two people figured out a strategy for the engineers to use. One was Major Don Tetmeyer, working in the U.S. Air Force’s Directorate of Engineering at Wright-Patterson Air Force Base. The other was Frank Maher, a psychologist working for a private company near the base. The strategy they came up with was to use data on similar pieces of equipment. They called their method comparability analysis. It works in this way: to predict the reliability of a component in a new aircraft, find a similar component already in service, retrieve its field data, and adjust the numbers for any known differences.
Since comparability analysis was developed, it has been applied in the air force for a variety of new aircraft, and it is also widely used in the navy and the army.
Julian and I realized this was a good example of analogical reasoning. In studying the way comparability analysis had been used in predicting the reliability of the subsystems on a bomber, the B-1 aircraft, we found straightforward applications and some interesting twists. Here is an example of a straightforward case.
The B-1 duct system is similar to the duct system of the FB-111 aircraft, except that the B-1 is larger and needs more ducting. Therefore the FB-111 numbers are adjusted upward by the right proportion. (In this example, “FB” stands for “fighter-bomber.”)
Here is a slightly more complex case.
To estimate the reliability of the hydraulic system on the B-1 airplane, an engineer chooses as his comparison case the hydraulic system on the B-52, the airplane the B-1 is supposed to replace. However, the B-1 hydraulic system is going to use 4,000 pounds per square inch of pressure, versus 3,000 psi for the B-52. The engineer recognizes that the higher pressure will cause increased wear and lower reliability, so he takes the reliability data for the B-52 hydraulic system and reduces it by a third. He does not think the B-1 hydraulic system will be as reliable as the B-52’s.
Others might disagree with his estimate. They might claim that the new materials used in the B-1 would help things out. Even if they disagreed, they could see the rationale for his prediction, and they could add their own adjustments.
These predictions work in the following way. A set of factors affects the reliability of a hydraulic system. We know many of them, but not all of them, and we do not know how they interact with each other. If we can find an analogy with which we feel comfortable, we can use it because it reflects the full set of causal factors, even the ones we cannot yet identify. The analogy also reflects the interactions between causal factors, interactions we cannot specify, so it lets us make a prediction that reflects factors whose existence we do not know and whose properties we do not know. That is the power of analogical reasoning.
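The adjustment logic in these cases can be sketched as a small calculation. This is a minimal sketch only: the function name and the sample numbers are mine, and the simple proportional scaling is an assumption for illustration, not the engineers' actual procedure.

```python
def comparability_estimate(analogue_mtbf, scale_factor=1.0):
    """Predict mean time between failures (MTBF) for a new component
    from an analogue's field data, adjusted by a proportional factor.

    scale_factor > 1 means the new system is expected to be more
    reliable than the analogue; < 1 means less reliable.
    """
    return analogue_mtbf * scale_factor

# The hydraulic example from the text: take the B-52 figure and
# reduce it by a third because of the higher operating pressure
# (4,000 psi versus 3,000 psi). The 300-hour figure is invented.
b52_mtbf = 300.0
b1_estimate = comparability_estimate(b52_mtbf, scale_factor=2 / 3)
# roughly 200 hours
```

The point of the sketch is that the analogue carries all the unidentified causal factors; the engineer only supplies the correction for the one difference he can name.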
If we did not want to use analogical reasoning for tasks like these, we would be stuck. We would not know enough to construct formulas, and we would not have enough hard information to proceed without them. By using analogues, we are tapping into the same source of power as stories. We are running an informal experiment, using a prior case with a known outcome and a semi-known set of causes to make predictions about a new case.
An engineer finds that the auxiliary power unit on the B-1 will be identical to the auxiliary power unit used on a cargo plane, the C-5A. The engineer judges that the demands on an auxiliary power unit from a bomber like the B-1 are going to be so different from, and so much greater than, those on a cargo plane that he throws out the C-5A data.
Bombers may have to scramble, which means starting up and taking off in a hurry without waiting for everything to warm up gradually. Unlike in a cargo plane, all sorts of systems are started up at once, placing great burdens on the auxiliary power unit. In addition, bombers may have to conduct sharp maneuvers to avoid enemy fighters and antiaircraft missiles. Cargo planes are not subjected to these types of forces. Because of the difference in working environment, the engineer concluded that the data from cargo operations would not help him.
From these examples, Julian and I learned a great deal about how people use analogical reasoning in a natural setting.
First, we learned that they do not select analogues just based on similarity. If you are buying a green car, you do not try to find the reliability records for green cars. You would select an analogue that shares the same dynamics, the same factors that affect what you are trying to predict. If you do not have enough experience to take causal factors into account, you can get into trouble. The engineers we studied were all knowledgeable.
Later, when Chris Brezovic, Marvin Thordsen, and I studied the decision making of new tank platoon leaders, we found that analogical reasoning was about as likely to hurt them as to help them. For example, during one mission, the tank platoon leader decided to use a certain route because he had used it three days earlier, when the exercise had been in the same area. However, he was inexperienced. He did not consider the fact that it had rained heavily the night before. This time when he sent his tanks down that route, the first two got stuck in the mud, and the whole mission fell apart. He thought he had a perfect analogue for selecting a route that would work, but he missed a key difference.
Second, we learned that some causal factors are easy to adjust for, and others are not. If pressure affects the reliability of a hydraulic system, then even if I pick an analogue with a different pressurization, I can just multiply to get an estimate. That is easy. On the other hand, if the pattern of use affects the reliability of an auxiliary power unit because the strains of a bombing run are vastly different from the smooth profile of a cargo flight, then I do not know how to make this adjustment. As a result, I must throw the analogue out altogether, even if the equipment is identical. If the difference can be captured as a proportion, then the causal factor can be adjusted for. If it cannot be reduced to a proportion, then we would only try to guess at the adjustment if we were desperate. Julian concluded that we select analogues on the “mysterious” features (the qualitative categories that we cannot easily adjust) and adjust on the more straightforward features. We would select an analogue airplane that flew the same type of mission and adjust on a size dimension; we would not select an analogue that matched on size and adjust on type of mission.
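Julian's rule of thumb (select on the qualitative features, adjust on the quantitative ones) can be sketched in a few lines. The aircraft records, field names, and numbers below are invented for illustration; real comparability analysis rests on engineering judgment, not a table lookup.

```python
def pick_analogue(new_item, candidates):
    """Select an analogue that matches on the 'mysterious' qualitative
    feature (here, mission type), then adjust its data proportionally
    on the quantitative feature (here, size)."""
    # Select: keep only candidates flying the same type of mission.
    matches = [c for c in candidates if c["mission"] == new_item["mission"]]
    if not matches:
        return None  # no usable analogue; better to reject than to guess
    # Adjust: scale the analogue's repair rate by relative size.
    analogue = matches[0]
    ratio = new_item["size"] / analogue["size"]
    return analogue["repair_rate"] * ratio

# Invented candidate records:
candidates = [
    {"mission": "cargo", "size": 100, "repair_rate": 2.0},
    {"mission": "bomber", "size": 80, "repair_rate": 3.0},
]
new_aircraft = {"mission": "bomber", "size": 120}
estimate = pick_analogue(new_aircraft, candidates)
# scales the bomber analogue's rate by 120/80
```

Note that the code mirrors the ordering in the text: mission type is a gate for selection, while size is merely a multiplier applied afterward.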
Third, we learned that the logic of reasoning by analogy is similar to the logic of an experiment: to draw a conclusion without having to know all of the important factors operating. Imagine that someone comes back from the Brazilian rain forest with a new drug, derived from the leaves that grow above the canopy of vines and branches, and claims that the drug will cure AIDS. We might test it in an experiment by identifying AIDS victims to serve as subjects, randomly assigning them to an experimental group that gets the drug and a control group that does not. We would see if the AIDS patients in the experimental group improve, compared to the control group. If they do improve, we conclude that the drug had an effect. Notice that we draw this conclusion even though we do not know all the causes of AIDS, and we do not know how the causes we have identified interact. By randomly assigning the subjects, we could feel confident that the same causal factors were at work in the two groups, although we did not know what they were.
Similarly, when we use an analogy, we are trying to set up a condition where the same causal factors are at work, even though we do not know what they are. The analogue is selected to match on the causal variables as closely as possible, and we adjust the data to take into account aspects where we know the matching is inadequate. Sometimes we do not take into account variables that are important because we do not know about them. That is why we have less confidence in analogical reasoning than in the results of an experiment. Analogical reasoning is intended to be a guide in cases where we cannot set up experiments, cases where we do not know enough to work out the equations.
There is a delicate balance to using analogues. If we know a great amount, we do not need the analogues; we can just figure out the formulas. If we know very little, analogical reasoning may be as likely to get us into trouble as to help us. Analogical reasoning seems to help most when we are in between: we know something about the area but not enough for a satisfactory analysis. Just as we discussed in the previous chapter on the power of stories, an analogue represents a set of causal variables interacting with each other. By using and adjusting the analogue, we are making predictions that incorporate factors that we cannot identify.
The next question to investigate was how accurate these predictions were. To find out, I collected a set of predictions that had been made on the A-10 aircraft. I obtained the comparability analysis predictions made during the design of the A-10 and compared them to the actual reliability data. In the hands of experienced engineers, predictions made from analogies matched up fairly well with the actual data (Klein, 1986). The correlation between the A-10 predictions and the A-10 data on mean time between failures was +.76 and statistically significant. For another measure, mean maintenance hours per flying hour, the correlation was even higher, +.84.
Some predictions were not accurate. The correlation between predicted and actual repair time was only +.36. I found that these were cases where the engineers lacked solid data for the analogues chosen and had generated their own estimates. This lesson seems important. The method of using analogues involves so much subjective estimation that it needs to be grounded in hard data. If we make up the data that we then adjust, the accuracy drops.
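For readers who want to see how such accuracy figures are computed, here is a minimal sketch of the Pearson correlation between predicted and actual values. The numbers are invented, not the A-10 data.

```python
from math import sqrt

def prediction_accuracy(predicted, actual):
    """Pearson correlation between predicted and actual values,
    the measure used to score the reliability forecasts."""
    n = len(predicted)
    mp = sum(predicted) / n
    ma = sum(actual) / n
    cov = sum((p - mp) * (a - ma) for p, a in zip(predicted, actual))
    sp = sqrt(sum((p - mp) ** 2 for p in predicted))
    sa = sqrt(sum((a - ma) ** 2 for a in actual))
    return cov / (sp * sa)

# Invented illustration: four components' predicted vs. actual
# mean time between failures (hours).
predicted = [120.0, 200.0, 90.0, 310.0]
actual = [115.0, 230.0, 80.0, 290.0]
r = prediction_accuracy(predicted, actual)  # high positive correlation
```

A correlation near +1 means the rank order and spread of the forecasts track the real outcomes; a value near 0, like the control group's +.17 later in the chapter, means the forecasts carry little information.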
I live in a village, Yellow Springs, Ohio, with a population of less than 4,000. It has some pleasant restaurants and shops where artisans sell pottery, clothing, and paintings. And it has a wonderful movie theater, the Little Art, which plays movies that never come to multiplex theaters in malls.
Antioch College is aware of how important the theater is to the community, so when the theater is about to close, the college purchases it. The college does not want to lose money on the deal, so it requires the people running it to at least break even.
Jennie Cowperthwaite, the long-term manager of the theater, can no longer just show movies that are important or receive critical acclaim regardless of their popular appeal, like Orion’s Belt. She needs to figure out which movies are going to attract large audiences. To help her out, Dan Friedman, in the Psychology Department at Antioch College (and also a movie fan), arranges for a student who needs an honors project to go through the Little Art database. The student assembles all the information the Little Art has gathered on all the movies it had shown in the previous ten years. Dan and the student and I figure out a set of categories for coding each movie: action/romance/political, American/British/subtitled, cartoon/documentary/revival, and so forth. The student enters these data into a database, and we analyze the trends. We can identify the types of movies that seem to draw the largest crowds and the ones that should never be booked again.

When we are done, we have a chance for a study, using this database to make predictions about the attendance at future movies. The Little Art shows two movies a week. We select an upcoming eighteen-week span during which the theater is going to play thirty-five movies (one plays for a full week). For each movie, Dan and I try to find an analogue—a movie close to it—that had already played at the Little Art and is in the database. We retrieve the data on that movie, look at its characteristics, make the adjustments, and generate a prediction. We do this for all thirty-five movies. Next, we go to a control group of seventeen people who live or work in Yellow Springs and ask each to predict the attendance at each of the upcoming thirty-five movies.
We also collect one more data point. We ask Jennie, the manager, to make her predictions without using the database. Then Dan and I wait for the thirty-five movies to be shown to tally up the outcomes.
The control group predicts slightly better than chance. The correlation between their median (typical) predictions and the actual attendance is +.17—not very high. Jennie does much better. Her predictions correlate +.31 with the actual numbers. Dan and I are two amateurs who have never booked a movie in our lives. Our predictions correlate +.45, the highest of all. This is a tribute not to our experience or our math abilities, but to the fact that we used the database filled with analogues.
People regularly use analogical reasoning. If you want to sell your house, the realtor estimates its market value by calling up a database of houses that have recently been sold, to capture the current demand for houses in your area. The realtor looks for houses like yours—same neighborhood, same style. These are difficult variables to adjust for; it is easier to match on them and adjust for number of square feet and size of lot. If the house you are selling has a swimming pool but none of the good matches has one, the realtor may estimate the value of a swimming pool separately by looking through the database for two houses that are virtually identical except for a swimming pool. The difference between them is the value of the pool, and the realtor can factor this into the estimate for your home. The logic and the procedures are virtually identical to what the engineers were doing in estimating reliability, and to what Dan and I did to predict movie attendance.
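The realtor's differencing trick can be sketched the same way; the prices below are hypothetical.

```python
def feature_value(price_with, price_without):
    """Estimate the value of one feature by differencing two sales
    that are virtually identical except for that feature."""
    return price_with - price_without

# Hypothetical sales: two near-identical houses, one with a pool.
pool_value = feature_value(262_000, 250_000)  # 12,000 for the pool

# A good comparable (no pool) sold for 255,000; your house has a pool.
estimate = 255_000 + pool_value  # 267,000
```

This is the same select-then-adjust pattern as comparability analysis: match on the variables that are hard to adjust, then correct the analogue's figure for the one measurable difference.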
This section has concentrated on one use of analogical reasoning—making predictions—as a way to illustrate how analogues are retrieved and modified. There are several other important ways that we use analogues: to generate expectancies and to solve problems.
Analogues can be extended to help project what is going to happen in a new situation. Refer back to example 12.1, concerning the threat of a falling billboard at the scene of a fire. The commander looked up and saw a billboard on the top of the building. He remembered a previous fire, an analogue, in which the flames had burned through the wooden supports for a billboard, sending the billboard crashing down. The commander then ordered that the crowds of onlookers be moved back a safe distance in case the same thing should happen. The analogue provided him with an expectancy that he used to anticipate and avoid a potential problem. Analogues and metaphors also provide scientists with new hypotheses (Hesse, 1966) in some of the same ways that they provide decision makers with expectancies.
Analogues provide the problem solver with a recommendation about what to do. In doing a homework assignment in mathematics, a student will look back through the notes to find any exercises the teacher worked that can be used as a template, in order to follow the same strategy. The logic of this process is the same as the logic for making predictions: recall a previous case that had the same dynamics as the current one, identify the strategy used, modify it to meet the current requirements, and carry it out. Even if you do not have a chance to think it all through, you trust that the causal factors are roughly the same so the procedure should work; you selected the analogue in the first place because it matches on the causal factors.
As we saw earlier, you need a certain amount of experience to use analogical reasoning reliably. Novices run the risk of missing important causal factors and therefore choosing the wrong example as a model or misapplying that example. If we want to help train novices, we might provide them with annotated examples. For example, a mathematics teacher might present a core set of examples to serve as analogues. For each example, the teacher could describe the choice points in the solution where someone might have taken the wrong path or chosen the wrong formula. The teacher could explain the cues the expert used in avoiding these traps. In that way, learners get the advantage of the examples along with guidance on the principles they have to learn to safely apply the examples.
System designers make frequent use of analogues. They draw on previous projects that they did or on other people’s projects with which they are familiar. In studying the types of evidence and information on which designers rely, Klein and Brezovic (1986) found that design engineers prefer to gather firsthand evidence by running little demonstrations using mockups. When demonstrations are impractical, the design engineers looked for previous systems to serve as analogues. They used these analogues to tell them what tolerances to use, what configurations, and so forth.
We have also seen designers run into trouble by misusing analogues. In the following example, the designers ignored important causal factors and believed they had a match when they should have rejected the analogue.
The joint surveillance target attack radar system (JSTARS) is a new aircraft designed to fly close to battle lines and look down at the adversary’s movements. JSTARS is going to be flying as close to the battle lines as possible, putting it in the range of anti-aircraft weapons. Because of the risk, JSTARS needs to have a dedicated workstation to manage its own defense. The designers try to understand what needs to go into this self-defense suite and use the AWACS aircraft as an analogue.
The designers conclude that self-defense will not be much of a problem, because it has not been a problem for AWACS. Both aircraft are slow moving and not very maneuverable. AWACS has a large number of weapons directors at radar scopes to keep an eye out for trouble, and JSTARS will also have a number of people at radar scopes. It seems like a good analogue, transferring the AWACS experience to JSTARS—except that JSTARS will not have dedicated air support, interceptors whose primary task is to protect it, the way AWACS does. Also, the people on the radar scopes will be searching the ground for enemy movement. In AWACS, the weapons directors are looking at the air picture. JSTARS has very little capability for looking out for airplanes that might be attacking it, whereas AWACS can see for hundreds of miles. AWACS is a secure platform, whereas JSTARS is a giant kick-me sign in the sky. It will be flying a slow and predictable course, generating lots of electrical signals for an enemy to home in on. In our judgment, the AWACS analogue misled the original design team, so they did not give the self-defense function sufficient attention.
During Operation Desert Storm, the U.S. Air Force did use JSTARS. However, it made sure that no enemy fighters or anti-aircraft batteries were in the area, and it provided escorts of friendly fighters to protect JSTARS.
Two types of application stand out: using analogues in comparability analysis, and using them in advanced computer reasoning systems.6
The technique of comparability analysis has been used for many different functions during the past twenty years. It has also been misused, most notably by careless technicians who did not bother making adjustments. For instance, in one case the reliability data for a piece of naval equipment were simply plugged into an army project, ignoring the fact that ships tend to float smoothly, whereas tanks and trucks bounce around. The predictions were not accurate. Failures such as this reduce the credibility of comparability analysis, even though the fault lies in the application rather than in the method itself: the people applying it did not understand the logic behind it.
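The adjustment step the careless technicians skipped can be sketched as follows. The function, factor names, and numbers here are invented for illustration; a real comparability analysis would draw its adjustment factors from engineering data for the two operating environments.

```python
# Illustrative sketch of comparability analysis: predict a new system's
# reliability from an analogous system's record, scaled by an adjustment
# for the harsher operating environment. All values below are assumptions
# made up for this example, not real engineering data.

def predict_mtbf(analogue_mtbf_hours, env_factor_analogue, env_factor_target):
    """Scale the analogue's mean time between failures by the ratio of
    environmental stress factors (a higher factor means a harsher ride)."""
    return analogue_mtbf_hours * env_factor_analogue / env_factor_target

# Naval equipment (smooth shipboard ride) used to predict reliability for
# an army ground vehicle (bouncing around) -- the adjustment skipped in
# the example above.
ship_mtbf = 1000.0       # hours, from the naval project's records (assumed)
naval_sheltered = 4.0    # assumed stress factor, shipboard environment
ground_mobile = 12.0     # assumed stress factor, ground-mobile environment

adjusted = predict_mtbf(ship_mtbf, naval_sheltered, ground_mobile)
# Naive reuse would predict 1,000 hours; the adjusted figure is about 333.
```

Plugging the naval figure in unadjusted amounts to setting both environment factors equal, which is exactly the assumption that made the original predictions inaccurate.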
In our own work on analogical reasoning, we found scant opportunity to apply comparability analysis. Most people are fully capable of reasoning by analogy themselves; we could not do much except give a technical name to what they were already doing. We prepared a description of how to do a more thorough job of using analogues in making predictions but found that most people were fairly satisfied and did not want to go to the extra effort.
One type of assistance is to build a computer-based system for helping people retrieve and use analogues, or prior cases. Computer scientists have been exploring the use of systems based on analogical reasoning to overcome the weaknesses of rule-based expert systems. Roger Schank (now at Northwestern University) was one of the leaders of this movement, along with Janet Kolodner (Georgia Institute of Technology) and Kris Hammond (University of Chicago). Edwina Rissland (University of Massachusetts) has studied the use of case-based reasoning for providing legal advice. Several case-based reasoning shells are currently available.7 Here is an example of one of the case-based reasoning projects on which we worked. This project, sponsored by the Air Force Materials Laboratory, was intended to demonstrate how analogical reasoning could be applied in the area of manufacturing.
Most manufacturing companies keep records of previous work, and these records could be a valuable database for making bids on future work. However, the data are kept by part number, and when time is short, the marketing department may not be able to find the right case. Maybe someone will remember, “Hey, didn’t we make something like that three years ago?” and then, with luck, they may track down the part number and find out what it cost. More often, the reply will be, “You may be right, but I have no idea how to dig that out,” so they will start from scratch in preparing their bid.
We set up a working relationship with a nearby manufacturing company, Enginetics, a job shop that makes jet engine parts to order. In contrast to companies that turn out the same part again and again, firms like Enginetics are called on to make unique parts of which customers need only a few copies. With each new job, the challenge is to determine whether they can make the part and, if so, what process to use. They have to trade off the costs of the processes (e.g., bore the holes on this machine, then shape the part on that machine, and so forth) against the time needed for each step and even the scrap materials. It may be easier to make a part one way, but if that leaves too much scrap, it is better to use a different strategy. They even have to take their own learning curve into account: there may be a clever new approach, but they know it might take four or five runs before they get it right, and they do not have enough time or profit margin to gamble. Each job is a new problem to be solved.
Our agreement was to build a system to help the marketing staff find and use prior cases. This is a case-based reasoning system, although the system does not do much reasoning; its primary value is in helping the planners find relevant cases and use the information they contain. We helped the company assemble its corporate history: the jobs it bid (including the ones it lost) and the manufacturing history of the parts for the jobs it won. Buzz Reed, the CEO of my company, and David Klinger performed this project. Their system is called Bidder’s Associate.
Bidder’s Associate uses the company’s existing database. In preparing a bid on a new part, it lets the staff rapidly search through their files to see whether they have made anything like it in the past. Sometimes they find a previous instance with the same part number, which is ideal, unless the costs of materials or something else has changed. More often, there are no simple matches, and the system has to find similar cases. We coded the previous cases on several features, such as size, type of material, and even popular names. For example, a round part with slots cut into it was given the nickname “turkey feather assembly.” That is how everyone refers to it, so the nickname goes into its file to help in retrieval.
Bidder’s Associate finds a number of similar cases and lets the operator pick which one(s) to work with and adjust. The output is a documented bid, broken out by cost category so that everyone will know the reasoning behind it. The output also includes a concept of how the part will be made if Enginetics wins the bid. The system allows bids to be prepared in less time and with less effort, and the company also has more confidence in these bids.
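The retrieval step can be sketched as a weighted feature match. The feature names, weights, and scoring scheme below are illustrative assumptions, not the actual Bidder’s Associate implementation; the point is only that cases indexed by descriptive features, including shop nicknames, can be ranked by similarity to a new part.

```python
# Sketch of feature-based case retrieval in the spirit of Bidder's Associate.
# Feature names, weights, and records are invented for illustration.

def similarity(query, case, weights):
    """Weighted count of features on which the query and a stored case agree."""
    score = 0.0
    for feature, weight in weights.items():
        if feature in query and query[feature] == case.get(feature):
            score += weight
    return score

def retrieve(query, cases, weights, top_n=3):
    """Return the top_n most similar prior cases for the operator to review."""
    ranked = sorted(cases, key=lambda c: similarity(query, c, weights),
                    reverse=True)
    return ranked[:top_n]

# Hypothetical case records, indexed by features rather than part number
# alone -- including popular names like "turkey feather assembly".
cases = [
    {"part_no": "A-101", "material": "titanium", "shape": "round",
     "nickname": "turkey feather assembly"},
    {"part_no": "B-202", "material": "steel", "shape": "bracket",
     "nickname": None},
]
weights = {"material": 2.0, "shape": 1.0, "nickname": 3.0}

query = {"material": "titanium", "shape": "round",
         "nickname": "turkey feather assembly"}
best = retrieve(query, cases, weights, top_n=1)
```

The operator, not the program, still does the real reasoning: the system only surfaces candidate cases, and a person decides which one fits and how to adjust it, which matches the division of labor described above.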
Bidder’s Associate was delivered in 1989. The first time a process engineer used it, the first case that popped up on the screen was nearly a perfect match, a part the company had made two years earlier. He had not worked on the earlier project, but he recognized that the part was a mirror image of the one he was bidding on. He read that the earlier part had a 30 percent scrap rate, went to the shop floor, and found that the scrap rate could now be drastically reduced. He was able to prepare a bid with a clear idea of how the company would manufacture the item.