Hyperrationality is a mental disturbance in which the victim attempts to handle all decisions and problems on a purely rational basis, relying only on logical and analytical forms of reasoning. In its initial stages, the condition can be mistaken for a healthy development of critical thinking. Only later do we observe an unwillingness to act without a sound, empirically or logically supported basis. The final stages degenerate into paralysis by analysis.
To understand the affliction of hyperrationality, other diseases can serve as analogues. Consider two different kinds of diseases that can affect vision. One is macular degeneration, in which the fovea and the central zone of the retina are destroyed. The fovea is the center of the retina, packed with cone cells, and is the only part of the eye capable of fine discrimination. When we examine something carefully, we aim our foveas at it. I used to believe this would be about the worst visual impairment I could imagine short of blindness. Every time you tried to see something, you would center it in your visual field, and it would disappear. The second visual disease is retinitis pigmentosa, in which the peripheral vision deteriorates. This never seemed as devastating to me as macular degeneration. Who needs peripheral vision anyway?
I was wrong. Retinitis pigmentosa is the more disorienting condition. To understand why, hold out your arm in front of you as far as it will go. Raise your thumb as if you were hitchhiking. Look at the thumbnail. The area of your thumbnail is roughly the portion of the visual field captured by the fovea. All the rest belongs to peripheral vision. If you lost all your peripheral vision, you would have this tiny searchlight sweeping endlessly back and forth, trying to locate everything and maintain your orientation. Without peripheral vision, you would even have trouble sitting quietly and reading, since you need peripheral vision to direct your eye movements.
Hyperrationality is like retinitis pigmentosa, in which we try to do all our thinking with just one of our sources of power: the ability to apply rational procedures. In this chapter I examine hyperrationality, but I will not fall into the right brain/left brain, holistic versus linear diatribe. That line usually means the author has dramatically made contact with the right brain and is preaching its virtues while mocking the poor left-brained folks. That’s not going to get us anywhere. Besides, this book would never have been written without analyses and even some logical thought.
Kenneth Tynan, the British essayist, producer, and playwright, described some advice he had been given. “Never take the anti-intellectual side in an argument. You’ll find that most of the people who applaud you will be people you hate” (Tynan, 1994, p. 88). Rational analysis is a cornerstone of intellectual activities and a very important source of power. We do not want to encourage people to make ill-informed, impulsive decisions.
This chapter compares the experiential sources of power that have been the topic of this book with the analytical sources of power. Our ability to use intuition and pattern matching is based on experience. Our ability to use mental simulation depends on having the knowledge and experience to set up the simulations. In contrast, our ability to analyze situations rests on rational methods that are independent of experience. Statisticians, logicians, and decision analysts can provide guidance no matter what the domain.
Rational analysis is a specialized and powerful source of power that may play a limited role in many tasks, a dominant role in a few, and sometimes no role at all. Rational thinking is like foveal vision: the cone cells give us the ability to make fine discriminations but are not sufficient to maintain orientation and are of little use at night. Analysis lets us make fine discriminations between ideas, and calculations let us find trends in noisy data. We need peripheral vision to detect where to apply the analyses and calculations.
Rational analysis reduces the chance that an important option will be overlooked. It supports the broad search for many options, rather than deep searches of only a few options. It comes closer to error-free decision making than other sources of power. And it allows the decision maker to use declarative knowledge.
Without rational analysis, we would not have the exciting growth of science and technology, the miracles of medicine, and so forth. Decision trees and cost–benefit analyses can help us make sense out of complicated choices, but there are some limitations to rational analysis, and that seems to make some people nervous. If analysis is a source of power, it will have to have some limitations and boundary conditions.1
Rational comes from the Latin root ratio, which means “to reckon.” To think by reckoning, or calculating, we need to decompose a situation into basic elements, apply rules and procedures to those elements, estimate the values the calculations require, and then work out the implications.
Following this agenda often leads to significant accomplishments, particularly in the areas of science and technology. Rational thinking is an important source of power. It provides the benefits of orderly and systematic approaches to complex problems. For a task such as running a complex piece of equipment, perhaps a nuclear power plant, we want the operators to have a theory or mental model of the layout of the plant; we want them to be able to decompose problems to perform troubleshooting when they detect signs of malfunctions; we want them to collect objective data that can be described and checked by others.2 The goal of making the thinking explicit means that a community can arrive at a common perspective and that teams can be set up to work separately on different parts of a problem with some confidence that their work will fit together at the end.
To perform an analysis means decomposing a situation or problem into its constituents. However, there are no naturally existing “primitives.” The components we define are arbitrary, depending on our goals and methods of calculation. The basic elements of a fire are different for a firefighter, an insurance claims adjuster, and an arson investigator.
Logical atomism, the belief that ideas and concepts can be decomposed into their natural elements, was popular among philosophers in the 1920s and 1930s. It has since been abandoned in philosophy, and in psychology, atomistic schemes have usually proved arbitrary and unworkable. In psychology we usually cannot reduce natural situations to a reliable and valid set of symbolic units that can be treated with logical operators.
There is no “right” way to break down a task. Different people find different schemes. Even the same person might choose different schemes depending on the goals being pursued. If we try to predefine the basic elements, we must either work with an artificial or narrow task, or run the risk of distorting the situation to make it fit into the so-called basic elements.
Alternatively, we can accept the importance of experience in finding useful ways to decompose a task. Most of the time we blend the analytical and experiential sources of power to get things done. Few of us fall into the trap of hyperrationality.
Rules and procedures take some form of if-then. They often sound simple, but the hard part is figuring out whether the antecedent condition, the “if” part of the rule, has been met.3 That is why researchers prefer to study rational inference using context-free artificial problems that leave no ambiguity. Outside the laboratory, we find it difficult to pin down the context so that everyone can agree the antecedent conditions have been met and expect the rule to be carried out. In example 13.1, concerning the Goeben, Churchill gave an order that was essentially a rule: if you are faced with a superior force, do not fight it. Can we say that Admiral Troubridge violated the rule? The context of the situation, with all its ambiguities, makes it hard to judge whether the Goeben constituted a superior force to Troubridge’s twelve ships.
Most people are sensitive to how much judgment and interpretation are needed to carry out a rule or an order. We rarely try to plan out every contingency. Instead, we try to make it easy to understand the intent behind the rule or order.
Even when we know which rules apply, we still have to initialize the equations or arguments. It is usually difficult to make the estimates that calculational methods call for. When the calculations require people to estimate probabilities or utilities, or to make other unnatural judgments, we are going to have trouble. The experiential sources of power do not appear to be helpful in generating the estimates that would be used in analyses.4
Formal methods of rational analysis can run into difficulties when they consider a large set of factors (as is found in a natural setting) and try to work out the implications of all the different permutations. As you add more knowledge, the job of searching through the connections will increase exponentially. “The problem with inferences,” say Schank and Owens (1987), “is that there are too many of them to make. If, for example, we can make five inferences from a fact, and five more inferences from each of those inferences and so forth, then the combinatorial complexity of trying to investigate inference chains of any length greater than a few steps becomes overwhelming. Processing power is not infinite, either in people or in machines” (p. 12).
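To make the quoted arithmetic concrete, here is a minimal sketch (my illustration, not Schank and Owens’s): if every fact licenses five new inferences, the number of possible inference chains grows as five raised to the chain length.

```python
# Illustration of the combinatorial growth Schank and Owens describe:
# if each fact licenses 5 further inferences, chains of length n
# multiply as 5**n.

BRANCHING = 5  # inferences per fact, the figure used in the quotation

for depth in range(1, 11):
    chains = BRANCHING ** depth
    print(f"chains of length {depth:2d}: {chains:>12,}")
```

Even at a depth of ten steps the count is nearly ten million, which is why chains of any length greater than a few steps become overwhelming.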
In our everyday lives, we do not face combinatorial explosions because we are not relying on calculations. We use experiential sources of power to frame situations and arrive at manageable representations. Then, if necessary, we bring in the analytical methods to add precision.
The analytical methods run into limits when we try to use them without recourse to the experiential sources of power. The problem is not rationality but hyperrationality.
I have always been a fan of consistency. One of my pleasures is detecting inconsistencies in the actions and ideas of other people. My wife has learned to be tolerant when I triumphantly find yet another example of her inconsistencies. She recites the phrase, “A foolish consistency is the hobgoblin of little minds,” and turns to more important matters.
We all know that consistency is important because of all the errors that can be traced to inconsistencies. For example, your friend needs to borrow your car, so you pick the friend up, switch places, get driven to your home, and are dropped off; you wave goodbye, walk up to your door, and realize that your house key is on the key chain you left in the car’s ignition. In one part of your brain, you knew you needed the key chain to get into your house. In another place, maybe just a few neurons away, you knew you had to leave the key chain with your friend. Somehow the two ideas never got connected. If we can detect and eliminate inconsistencies, we can eliminate the errors caused by inconsistencies.
Rational analysis is appealing because it is a strategy for reducing or eliminating inconsistencies. We can try to decompose complex tasks, plans, or beliefs into smaller elements to find any inconsistency. Unhappily, several philosophers have recently questioned our ability to detect inconsistencies reliably (Cherniak, 1981; Harman, 1973, 1986; Stich, 1990).5 Cherniak (1981) showed that we cannot just use a truth table method to make sure our beliefs are consistent: “Suppose that each line of the truth table for the conjunctions of all beliefs could be checked in the time a light ray takes to traverse the diameter of a proton, an appropriate ‘supercycle’ time, and suppose that the computer was permitted to run for twenty billion years, the estimated time from the ‘big bang’ dawn of the universe to the present. A belief system containing only 138 logically independent propositions would overwhelm the time resources of this supermachine” (p. 93).
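Cherniak’s arithmetic is easy to check. The sketch below is my own illustration, using an approximate proton diameter of 1.7 × 10⁻¹⁵ meters and the twenty-billion-year figure from the quotation; the exact constants barely matter to the conclusion.

```python
import math

# Approximate physical constants (assumed values for illustration)
PROTON_DIAMETER_M = 1.7e-15      # roughly 1.7 femtometers
SPEED_OF_LIGHT_M_S = 3.0e8
SECONDS_PER_YEAR = 3.15e7
UNIVERSE_AGE_YEARS = 20e9        # the "big bang to present" estimate in the quotation

supercycle = PROTON_DIAMETER_M / SPEED_OF_LIGHT_M_S    # time to check one truth-table line
total_seconds = UNIVERSE_AGE_YEARS * SECONDS_PER_YEAR  # total running time of the supermachine
lines_checkable = total_seconds / supercycle           # truth-table lines it could check

# A set of n logically independent propositions has 2**n truth-table lines,
# so the largest exhaustively checkable belief system has n = log2(lines_checkable).
largest_n = int(math.log2(lines_checkable))

print(f"supercycle time:            {supercycle:.1e} seconds")
print(f"lines checkable:            {lines_checkable:.1e}")
print(f"largest checkable system:   {largest_n} propositions")
print(f"lines for 138 propositions: {2**138:.1e}")
```

The supermachine could work through roughly 10⁴¹ truth-table lines, enough for about 136 independent propositions, while 138 propositions require about 3.5 × 10⁴¹ lines. (A set of only ten beliefs, by contrast, has just 1,024 lines.)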
In view of this, we cannot expect anyone to maintain a perfectly consistent set of beliefs. It is easy to trace an error backward and find an inconsistency, but that takes advantage of hindsight. We cannot root out the inconsistencies in advance.
Harman examined a type of inconsistency where we continue to hold a belief even when we no longer accept the evidence on which it was based. To stamp out this type of inconsistency, we would have to classify, code, and store in memory all of the evidence on which every belief is based.
Cherniak has presented another type of inconsistency, which he calls memory compartmentalization. Here, a person holds inconsistent beliefs but does not make the connection because the beliefs are stored in different contexts in memory. Cherniak gives these two examples. First, “At least a decade before Fleming’s discovery of penicillin, many microbiologists were aware that molds cause clear spots in bacteria cultures, and they knew that such a bare spot indicates no bacterial growth. Yet they did not consider the possibility that molds release an antibacterial agent.” The other is that “Smith believes an open flame can ignite gasoline …, and Smith believes the match he now holds has an open flame …, and Smith is not suicidal. Yet Smith decides to see whether a gas tank is empty by looking inside while holding the match nearby for illumination” (p. 57).
The Fleming story does not seem like an instance of inconsistency, and the Smith story does. However, both show the same pattern: an inconsistency between the set of beliefs held and the actions taken. Once we see the error or the missed opportunity, we can trace it back to the beliefs that did not get connected. It is too much to expect that every piece of information in memory be continually matched against every other piece to catch these connections and draw their implications. It would take an exhaustive memory search to find the interesting connections. We can feel proud when we do discover them, but if the pieces belong to different memory compartments, we cannot expect high levels of success.
These examples indicate that it is impossible to free ourselves from inconsistency, belief perseverance, and memory compartmentalization. Actually, there is one way to ensure that people find inconsistencies and discover connections: by keeping the number of beliefs small. If we could struggle through life with only a few beliefs, perhaps fewer than ten, then we might have a chance to purge inconsistencies.
There is worse news. Consistency may not be as helpful as we imagined. Jonathan Grudin (1989) has questioned the standard advice to designers of computer interfaces to eliminate all inconsistencies. He wondered how good this advice was. In his home, he does not keep all his knives in one place. The knives used for eating go in one drawer. The putty knife he uses in his shop goes in a separate room. The set of large carving knives is kept in a wood block on the kitchen counter. The Swiss army knife is with other camping gear. If he stored them all together, he would have less trouble finding a knife, but many of them would be in inconvenient locations. Grudin uses consistency of function as his guiding principle. This takes more effort, judgment, and insight than consistency of feature. It is not sufficient to identify something as a knife; you have to understand how it will be used. If you settle for maintaining consistency of feature and expect that this strategy will keep everything well organized, you will be disappointed.
The same problem arises for computer screens. Should a designer adopt only consistent procedures? Consider the rule that if you select an action from a menu, the computer should persist in that mode. The “last action selected” strategy makes sense for some functions, such as picking a font style. Once I pick a style, the system retains that style until I change it. The “last action selected” rule also makes sense in searching for items. Once I enter a name in the query box and set the computer searching for it, I am likely to want to continue through the document looking for other instances of the name. The search function retains that name until I change it. The “last action selected” rule does not make sense for the cut-and-paste function. When I cut something, my most likely next act is to paste it somewhere else. My computer anticipates this and makes it easy by switching modes automatically. The designers realized I would not be cutting one sentence or paragraph after another.
The designers would not have done a good job if they insisted on a consistent principle, such as retaining a default option once someone selects it until it is changed. They had to understand how I was going to use the system and design around my needs. They had to preserve the consistency of function rather than consistency of feature.
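As a concrete sketch of the two policies (my illustration, not Grudin’s or the designers’), consider a toy editor in which the font and search settings persist until changed, while the cut buffer does not behave as a lingering mode:

```python
class ToyEditor:
    """Toy editor contrasting consistency of function with consistency of feature."""

    def __init__(self):
        self.font = "Times"        # persists: "last action selected" suits font choice
        self.search_term = None    # persists: repeated searches usually reuse the same term
        self.clipboard = None      # not a mode: a cut is normally followed by a single paste

    def set_font(self, font):
        self.font = font           # stays in effect until explicitly changed

    def find(self, term=None):
        if term is not None:       # a new term replaces the old one;
            self.search_term = term  # otherwise repeat the previous search
        return f"searching for {self.search_term!r}"

    def cut(self, text):
        self.clipboard = text      # after the cut, the editor returns to normal editing
        return "back to normal editing mode"

    def paste(self):
        return self.clipboard
```

A blanket rule such as “every selected action persists until changed” would be consistent at the level of features, but it would fit only the first two functions; the cut-and-paste behavior follows the function instead.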
Therefore, we should be wary of efforts to ensure consistency at the level of features that do not consider the functions we are trying to perform. Rigor is not a substitute for imagination. Consistency is not a replacement for insight. Most of the time we would rather have consistency than leave ourselves open to the problems created by inconsistency. It is a goal worth seeking, but attempts to ensure unrealistic levels of consistency are a symptom of hyperrationality. Our experience helps us anticipate the impacts of inconsistent beliefs and set the level of effort for reducing inconsistencies.
Logic is indifferent to truth. The goal of logic is to root out inconsistent beliefs and generate new beliefs consistent with the original set. Logic does not consider whether our beliefs are true. A logical person can be wrong in everything she or he believes and still be consistent.
Although we cannot always calculate inconsistencies, we are alert for them. We try to perceive inconsistencies in order to detect anomalies; the anomalies trigger our efforts to diagnose situations and initiate problem solving. The following example shows how finding an inconsistency helped someone to see.
It is Saturday morning. I am lying on the sofa reading a Simenon mystery novel about Inspector Maigret, wandering the streets of Paris with him in search of yet another criminal, when my wife makes an unhappy discovery: one of her contact lenses is missing. Before I start crawling around on the floor, we go over the facts. We had guests for dinner the night before. She removed her contacts after they left. She was seated at the dining room table and carefully removed them over the tablecloth so she could find them easily if they popped out. She thinks something might have distracted her during this time. She put each into its own little cell, screwed the lids on, and carried the case to the bathroom. This morning she picked up the contact lens case, opened up the first compartment, and found that it was empty.
She thinks it is possible that one of the lenses might have stuck to her finger as she attempted to deposit it in the case; this has never happened before, but it is something she worries about.
We deduce that the missing lens could be in one of two places: either it was transferred poorly last night, and should be on the tablecloth or the dining room floor, or it slipped out this morning and is somewhere in the bathroom. A third possibility is that it wound up on the dining room floor and has been crushed or tracked to some unrecoverable place. We choose to repress this third hypothesis.
I spend the next thirty minutes carefully searching the dining room. No luck. The next thirty minutes is devoted to the bathroom. Again, nothing. I am certain that the contact lens is in neither place. The third hypothesis must be true: the lens fell to the dining room floor and has gone to a permanent resting place. I have done all I can and return to my book.
My wife is desperate to find the lens, and she continues her search alone. After an hour she again asks for my help, but we have no new leads, no clues. However, the spirit of the book I am holding steals over me. I quickly review the chronology of events and then lean back on the sofa, puffing an imaginary pipe of the sort Inspector Maigret might smoke. Then a voice disfigured by an attempted French accent tells my wife, “Go into the bedroom, to the clothes hamper, retrieve the tablecloth that is folded there, carefully open up the tablecloth on the floor, and you will find your missing contact lens in the middle.”
I settle back into my novel. When I hear my wife’s exclamation of triumph, I permit myself a thin, smug smile.
This is one of my most successful cases. (Actually, it is my only successful case, and I am pleased to have this opportunity to get it in print.) What has happened is that the previous night, one contact lens did slip out while my wife transferred it to the case, and it landed on the tablecloth. After bringing the contact lens case to the bathroom, my wife prepared for the next day, as she usually does after a meal, by rolling up the tablecloth to trap all the crumbs and food debris so that they would not fall on the floor. She carried the tablecloth to the hamper, where it would remain until the next washday, when it would be shaken outside. This is her routine. Then we went to sleep. Saturday morning she brought out a fresh tablecloth, performed a number of other tasks, and then went to put in her contact lenses.
The inconsistencies in our beliefs were due to memory compartmentalization. My wife and I knew that Friday night’s tablecloth was in the hamper. We knew that the contact lens had been removed Friday night. We searched for it on the clean tablecloth that had been put out Saturday morning. The two tablecloths were even of different colors: yellow (Friday) and blue (Saturday). Yet neither of us noticed the disconnect, until that last-minute mental simulation sorted out the two tablecloths. I am convinced that the credit should go to Inspector Maigret. He and his kind are the only ones who can be relied upon to catch inconsistencies.