10 Managing Ignorance

Ignorance is not such a bad thing if one knows how to use it.

—Peter Drucker

In the previous chapters, we have seen that despite learning, the ignorance that remains is infinite; that, while some portion of our ignorance is known to us, an immeasurable portion is not; and that, although simple ignorance is “natural,” there are many forms of intentional and constructed ignorance. Moreover, as the last chapter chronicled, our capacity to know is limited in many ways. Ignorance is thus a fixture of our lives; it is part of the human condition. Now what are we to do about it?

Much of our ignorance is benignly irrelevant to our lives; our daily existence is unaffected by it. But relevant ignorance manifests itself, and we are regularly faced with the unknown. Early on, I spoke of the terrible cost exacted by ignorance; but, unfortunately, it is difficult to know which unknowns will remain benign and which will confront us and become costly. When we suddenly encounter the unknown, our biologically based, hardwired responses—largely physiological—are engaged. But, smart creatures that we are, we have gone beyond these to develop additional ways of coping with the unknown: emotional, intellectual, practical, and social ways. These coping mechanisms include arrays of sophisticated conceptual tools and techniques, whole new fields of knowledge (ironically), and even special cultural institutions and social practices—all aimed at managing our ignorance.

I propose to catalog some of the most important of these, beginning with primitive responses and moving to those of increasing complexity. This presents no shocking discoveries or technological advances, and my treatment of each of these responses is necessarily brief. But there is a cumulative effect, I believe, in seeing them as a set of tools for ignorance management. On the one hand, this account will illustrate the ways in which even irremediable ignorance may inspire knowledge; on the other, it will show how our knowledge is embedded within a silent recognition of our ignorance. The progression from recognizing ignorance to responding to it, to coping with it, to managing it suggests that we may gain increasing control. Since ignorance is a permanent fixture of our lives, learning to manage it is a worthwhile, even fundamental, epistemic aim and a practical imperative.

Responding to the Unknown

An encounter with the unknown may pose three sorts of problems for us: emotional, intellectual, and practical. Basic emotional responses will often be activated before our reason is engaged. Like other primates, humans may react to the unknown with fear or aversion, but also with curiosity and fascination. This is especially true of sudden or surprising encounters. These two disparate tendencies form a polar tension: our need for safety and our inquisitiveness. Observing at a safe distance is often an intelligent compromise. When our reason kicks in, when we can safely observe, we can decide whether fear or caution is justified; and our curiosity can lead to cogitation—identifying, associating, classifying, inquiring, researching, computing, assimilating, comprehending, explaining—activities that render the unknown, known. This intellectual processing helps with practical issues: how to treat or use this unfamiliarity, how to apply it, whether to share it with others.

Early humans regularly encountered unpredictable, powerful, and unfathomable forces—especially the forces of nature and dangerous animals; and they often responded in fear and awe by attempting to appease these powers and ingratiate themselves through sacrifice and supplication. The rituals they developed may be understood as ways of coping with the mystery of supernatural forces, ways of living with gods. Various forms of fortune-telling and divination, especially haruspicy, became popular techniques. Still widely practiced, they employ indirect evidence and signs to reveal one’s fate or god’s will. Soothsayers generally claim insight into the future, not genuine knowledge of it; but we should recognize these activities, more plausibly, as attempts to cope with ignorance.

A contrasting approach (made with greater confidence) is to harness these forces for human purposes through craft and technique. The recipes that result do not rest on deep theoretical understanding; they are transmittable instructions for producing a desired result. The ancient glassblower or brewer knew little of chemistry, but could follow age-old instructions that were developed by trial-and-error methods. Recipes yield products; through serendipity and experiment, they may be improved. Recipes are externalized forms of knowing how; they do not require knowing why. They too may be understood as another way of managing processes and forces of which we are, in fact, ignorant.

Rituals reflect an attitude of humility toward the unknown; recipes reflect an assertion of human purpose and will to channel if not control natural processes. Ritualism establishes roles: priests, acolytes, supplicants, and others, who have special authority and play a specific part in the ceremonies—think of religious weddings and masses. Sacred objects and hallowed places are often required, and the sequence of actions and events is critical. Recipes, by contrast, are transferable—though a master will be more effective than an apprentice, often because of tacit knowledge. But the issue for a successful craft process is not so much who does it as how it is done. Recipes require tools and instruments, not sacred objects; and while sequence is important, the dominant temporal concern is for “just the right moment”—the precise moment for blowing and bending the glass or picking the grapes. As written culture evolved, both sacred rituals and recipes became textual; and it is true that blending, combining, and confusing of ritual and recipe occur. Even today, we may begin or conclude the use of a recipe with a ritual: the toast offered for a new wine; the smashed champagne bottle to christen a new ship; the singing and blowing-out-of-candles before eating a birthday cake.

Note, however, that neither rituals nor recipes are attempts to understand the world. Nonetheless, they do help us cope with our ignorance: they address the emotional and practical dimensions. They offer a form of order and predictability; they impart a sense of security through continuity; and they help make the world a familiar place in which we may feel at home. But it is left to philosophy and science to address our wonder and intellectual curiosity about the unknown.

Coping with Ignorance

Among the aspects of ignorance that cause us concern are uncertainty, risk, error, and harm. Uncertainty occurs when we lack decisive determinants for thought or action. Future events are unpredictable, and so are people’s actions. About the past and present too we often lack information, or have incomplete or conflicting information. Uncertainty leaves us at risk of error through action, inaction, or obliviousness, and of real harm that may range from the trivial to the catastrophic. The emotional burden is stress and anxiety, or a hope that may prove false, and the regret or guilt that may follow.

The primitive responses have a legacy. Superstition or magic is a persistent, if misguided, way of coping with our ignorance; this is the belief in supernatural agencies that control events and can be influenced toward good or bad outcomes. Special objects, it is thought, may magically determine events: good luck charms or talismans to influence outcomes positively; amulets and apotropaics to ward off evil and prevent harmful outcomes; and poppets and cursed objects to cause harm to another. Blessings and curses or hexes are ritualized verbal techniques for trying to direct future events through magic. Magic combines ritual and recipe in arts that do not attempt to understand the world (because the world is governed by ineffable forces), but rather to influence, even to control, events. While all such superstitious practices seem to exemplify human ignorance, they also represent, in their attempts to shape events, a strategy for coping with our ignorance.

Fortunately, uncertainty may instead generate inquiry. Thinking is a multipurpose ability, but it is particularly useful in coping with the unknown, the strange, and with situations in which one must respond despite uncertainty. The routine situations of daily life we negotiate by habit, but (as the American Pragmatists emphasized) thought arises out of problems, situations in which we have neither certainty nor a comfortable routine. We may cope with pressing ignorance by intellectual activity that seeks to understand, and perhaps thereby gain the power of knowledge to control or act wisely. We’ve developed three basic intellectual strategies to cope with ignorance: (1) vanquish it with learning; (2) identify, map, and target our ignorance; (3) reduce the range and impact of specific residual ignorance. We all know about the first. It is the other two that call for examination here. They employ the tools and practices that have moved us from recognizing and coping to managing our ignorance.

In chapter 5, I discussed the idea of mapping our ignorance. The technique, begun in medicine and medical pedagogy, is now infiltrating other professions. “Ignorance management,” largely defined in terms of identifying and mapping, has become a trendy term in management circles, leading to articles, consultants, seminars, and workshops on how to manage organizational ignorance. A team of British scholars led by John Israilidis, using the language of the management classroom, explains it this way: “Ignorance Management is a process of discovering, exploring, realising, recognising and managing ignorance outside and inside the organisation through an appropriate management process to meet current and future demands, design better policy and modify actions in order to achieve organisational objectives and sustain competitive advantage.”1 The core of this approach is the use of the four-part grid of second-order epistemic categories introduced in chapter 3 (known knowns and known unknowns; unknown knowns and unknown unknowns) as categories for identifying and mapping organizational ignorance. The ultimate aim, of course, is to treat these as resources for knowledge creation.

Psychologists and behavioral economists have been conducting empirical research on the ways in which we make decisions under conditions of uncertainty.2 Uncertainty is not identical to ignorance; yet ignorance is its characteristic property. Uncertainty implies that one already possesses some salient knowledge, but the relevant information may be incomplete, ambiguous, or conflicting. This research is interesting because it often reveals cognitive biases, apparently irrational tendencies that pattern our choices. What is generally less clear is whether these patterns are, like visual illusions, inescapable even when we understand how they work, or whether they are tendencies that are correctable with proper information and alertness. Most likely, they are a mix of both. This work is descriptive of actual decision making; what is needed for ignorance management is to set these data against a normative account of decision making. For the latter, one may look to rational choice theory. Unfortunately, the field of choice theory has been limited in application because of its initially simplistic and controversial assumptions (such as assuming the sole motivation to be universal individual self-interest) and its need for mathematically precise specifications. In a deeper sense, all human decisions are made under conditions of uncertainty, because of the necessary limits to our knowledge.

Transformations in the Dark

Some of the most personally significant decisions we make must be made in conditions of radical uncertainty and fateful consequence: they are transformative. The contemporary philosopher L. A. Paul has called attention to two types of transformations: (1) epistemic transformations, in which we come to know what something is like—what it is like to live in a very different culture, perhaps, or to face death in battle; to gain cognitive abilities or a new sensory system, as to experience sight when one has been blind; and (2) personal transformations in which one’s values and even one’s identity are altered, in which one changes “how you experience being who you are.”3 The first can lead to the second, of course, and both can happen to us whether we desire them or not. But Paul’s interest is in the cases in which we are presented with a choice. (Her opening thought experiment involves being presented with the choice to become a vampire!)

The difficult issue with personal transformative choices is that we cannot know what the change would be like until we experience it. Yes, we can study what others have said; we can accumulate objective evidence. But the subjective experience, the knowing what it is like, is unavailable to us. The decision to become a parent, to change gender, to retire in a distant land, or to undertake an education—we make these and other life choices without really knowing what we are in for or how we would be changed by them. Moreover, such decisions are usually irreversible: once you are a vampire, there is no going back. Such choices are irrational or nonrational in that they cannot conform to the norms of rational choice: we have no basis on which to assign comparative values to the alternatives. Yet they may open us to new and wondrous forms of life.

So, when it matters most, we feel our ignorance most keenly. If rational choice theory fails us in such matters, how can we manage? We have our subjective values, which involve more than valuing pleasure or happiness. Either we choose the status quo and affirm our current life and its conditions, or we choose to discover a new intrinsic nature and to evolve in our preferences and outlook—along with all the possible emotions and insights it will bring. Though some temperaments may make such choices in fear and trembling, others embrace openness. Paul’s ultimate conclusion is this: “A life lived rationally and authentically, then, as each big decision is encountered, involves deciding whether or how to make a discovery about who you will become. … One of the most important games of life, then, is the game of Revelation, a game played for the sake of the play itself.”4

Unpredictability and Commitment

Our concern may not be with ourselves, however; it may be with others. Unpredictability is a troubling source of uncertainty, and the actions of other people are often unpredictable to us—not just the actions of strangers, but sometimes of intimates as well. We cannot, to use a Woody Allen phrase, “peer into the soul” of another person. Indeed, we are not very transparent to ourselves! Yet social interaction and the benefits of cooperation depend on people’s reliability. If we cannot rely on others or reliably predict their actions, we can’t trust them. We do not know what they will do.

An ancient, basic strategy to address this uncertainty is to constrain others’ actions through exacting commitments that are formal and public. Oaths and promises, for example, are speech acts that create obligations. We use them to remove or reduce our uncertainty about what someone will do. The more complex version—usually in more durable, written form—is a contract. A contract is a mutual pact that declares specific commitments and voluntarily constrains the future behavior of both parties, usually under the prospect of penalty for breaching the agreement. Advanced social systems support contracts with adjudication of disputes and enforcement of provisions and penalties. As these sorts of commitments become established institutions, our uncertainty diminishes, along with the risk of ignorance.

In contemporary society, less formal commitment practices serve the same purpose of reducing uncertainty; for example, the making and taking of reservations, in which explicit commitments from customers and businesses are exchanged—for dining or lodging, for admission to events or attractions, for renting vehicles or vacation homes, for seats on airplanes or trains, for wedding venues and free-range turkeys. Commitments in all these forms constrain future behavior; they increase predictability and reduce risk; they build confidence in what will happen. These commitments are performative speech acts in which we create new facts through appropriate utterance: “I promise,” “I agree to these stipulations,” or “I consent.”

These sorts of commitments are enabled by special features of language. Discussing the unknown and unpredictable requires language that can indicate and describe states of affairs that are doubtful, possible, imagined, or desired, but not factual. For this purpose, we have developed linguistic tools that enable a discourse of the uncertain. Individual words may signal these conditional states (like perhaps or maybe or suppose, in English); and we regularly distinguish between hypotheses and facts, between assumptions and observations. Hypothetical (or conditional) clauses also serve this purpose: they are the if portion of if-then sentences. Conditional sentences assert that a specific factual outcome follows from a hypothetical condition: “If you build it, they will come.” In English and many other languages, the subjunctive mood is used for contrary-to-fact or nonfactual discourse. And in logic, new forms were invented to deal with modalities, including possibilities and statements for which the truth is variable.

Chance

We commonly use the word chance to refer both to random events and to events the causes of which are so complex as to defy explanation or prediction. Rolling dice violates no law of physics, but the mechanical forces in play are too many and too subtle for us to predict their outcome reliably. The same is true for the tornado that may suddenly veer from its path and leave a lone house standing. Chance is the default cause of striking and inexplicable events, like the convergence of two causal streams: “The man happened to bend down to tie his shoe just when the bullet was fired.” Some such convergences are chancy; some we call coincidences. Imagine: a distant, exotic location; you hail a cab and, upon entering, meet an old friend who is opening the opposite door. It is striking that, without foreknowledge or coordination, two friends should be in the same faraway place, reaching for the same cab at the same moment. Though no laws of nature were violated and each can reasonably explain his presence, the circumstances that led to that moment of intersection of their lives are so complex that we cannot explain their synchrony. We are baffled at such an unpredictable coincidence; so we say, “They met by chance.” (Some, of course, would say such coincidences are “fated.”) When chance events affect us either positively or negatively, we may call them a matter of luck—either good or bad. The point is that the unpredictability and inexplicability of all these sorts of events and the ascription of chance or luck are functions of our ignorance.

Chance is also applied to events in the past (“By chance, this was the only manuscript to survive the fire”) as well as in the future; however, it is the future that occupies our decisions and our actions. The future represents possibility and potential, but it also combines our ignorance (uncertainty) with risk. And the number of rational strategies available to us is distressingly small.

The simplest is to anticipate bad outcomes and prepare for them: saving money for unexpected expenses; stocking supplies in case of violent storms; practicing fire drills and other emergency responses. Preparation can involve a backup “Plan B”: selecting a second-choice restaurant in case our first choice is closed or fully booked; including a “safe school” among one’s college applications, in case the first-choice institutions send rejections; issuing “if that approach fails, then do this” instructions to employees. These plans, which often begin as consideration of “preferred-case, worst-case” outcomes, can grow into complex and formal documents as multiple scenarios are imagined: governmental disaster plans, military battle plans, and institutional long-range plans are examples. Anticipating bad outcomes and arranging contingent responses to them, at least in pure forms, do nothing to reduce the likelihood and little to reduce the costs of those events (the “harm”). In fact, preparation and planning represent additional costs.

A more proactive strategy is to reduce the impact of our residual ignorance, to reduce the risk of incurring harm from unforeseen events. And where we cannot reduce the likelihood or risk, we may try to reduce loss. Societies have developed instrumentalities, corporations, even whole industries designed to reduce loss: insurance, in its various forms, is paradigmatic. Other forms include warranties, guarantees, prenuptial agreements, and bail bonds.

Think of the historically important example of insuring ships and their cargo. Both the shipping company and its insurers are ignorant of future events, and there is considerable risk of loss of property and lives on the high seas. So, the two parties agree to a special form of contract in which the insurer “underwrites” the voyage. In exchange for payment of a premium, the insurer takes on some of the risk by guaranteeing payment in the event of damage or loss. The shipping company agrees to pay in order to reduce its risk of much greater financial losses.

Like gambling, insurance is based on the calculation of odds regarding future events. In the case of insurance, however (assuming no criminal manipulation), both parties desire the same outcome: a safe and sound voyage, in my shipping example. In the preferred outcome, the shipper would deliver the goods intact, having felt protected against loss, for which payment of a premium was worthwhile. At the same time, the underwriters would gain a profitable payment and, thankfully, have had to expend no reimbursement for losses. The calculation each makes starts with the riskiness of the voyage: the premium rises as the perceived risk increases, and some voyages may be deemed simply too risky to insure. In all cases, the shipping company must determine whether the risk is sufficiently high to pay the premium asked. Calculations of risk and premiums are made by actuaries, based on statistics regarding past events and trend lines. Where scientific laws are insufficient to predict outcomes, the historical record of like events is the best source of information. Insurance is thus an instrument for managing our ignorance and its effects, and actuarial calculation is its defining technique.
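The underlying arithmetic can be sketched in a few lines of Python. Every number here is invented purely for illustration, and real underwriting is far more elaborate; the sketch only shows the basic logic of pricing a risk from an estimated probability of loss.

```python
def toy_premium(loss_probability, insured_value, loading=1.25):
    """A toy premium: expected loss times a loading for expenses and profit.
    A drastic simplification of real actuarial work, for illustration only."""
    return loss_probability * insured_value * loading

# Hypothetical voyage: a 2% chance of total loss on cargo insured for 100,000.
print(toy_premium(0.02, 100_000))   # 2500.0; the premium rises with the risk
print(toy_premium(0.05, 100_000))   # 6250.0 for a riskier voyage
```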

Because so many aspects of future events are important to us, and because underwriting can be quite profitable (especially as technology and safety measures reduce risk), the insurance industry has grown and diversified in startling ways. One may now insure against damage and loss regarding a wide range of personal and corporate property, from automobiles and homes to jewelry, livestock, and smartphones. Insurers now distinguish the causes for potential losses, so, for example, we must insure our homes separately for losses from fire or storm, from flood, or from theft. We may insure our “beneficiaries” against the financial impact of our own death. These are tame, familiar examples. What some people hold to be precious, however, and what risks some have worried over, have led to quite exotic insurance policies, the stuff of tabloids. In the 1940s, as a publicity stunt (at least in part), Twentieth Century Fox insured each of Betty Grable’s shapely legs for $1 million. Decades later, Rolling Stones guitarist Keith Richards insured one finger, his middle finger, for $1.5 million. An Australian cricket player, Merv Hughes, whose handlebar mustache is a personal trademark, claims to have insured it for $400,000. It has recently been reported that over $10 million has been spent in the United States to insure against abduction by aliens—with higher payouts for repeated abductions.5 What a sideshow!

A stock market is a temple of calculated risk-taking. What sort of insurance could there be against significant financial loss in the market? Having a diverse investment portfolio was the traditional wisdom, but it was a prophylactic, hoping to avoid loss rather than offering real protection against actual loss. So-called hedge funds, however, have often been promoted as investment vehicles that attempt to return a profit regardless of whether the market is up or down. Historically, the term hedging referred to the attempt to reduce the risk of a bear market, usually by “shorting” the market.6 Today, however, most of these funds aim to maximize return on investment; and because they are often aggressively managed and trade in devices like derivatives, it seems they may actually involve more risk than conventional market investments. While some investors have profited by hedge funds during downturns, others have lost disastrously—and in nearly all cases, hedge fund managers have fared better than their clients.7

From Possibility to Probability

Essential to all whose profession is trading in uncertainty—insurers, stockbrokers, gamblers—are the mathematical tools developed to manage ignorance and minimize risk. All such tools are refinements of the concept of probability.

Our plight can be described simply: we are ignorant of future outcomes, yet we want to anticipate (predict) them to reduce risk; we know there are many possible outcomes, but we don’t know the likelihood that a particular outcome will actually occur. What mathematics offers us is the quantification or measurement of that likelihood, which we call probability. The concept of probability shaped a theory that has blossomed into a rich and complex field of mathematics. On the surface, it seems to replace our ignorance with quantitatively precise knowledge, but—as we shall see—our ignorance is managed, not vanquished. The theory exists in contested alternative versions, and all of them are freighted with persistent philosophical problems.

In this section, we will discuss briefly four interpretations of probability. (A mathematically challenged reader need not panic: a very basic discussion will be sufficient to illustrate the relevant points.)

(1) The classical interpretation is derived from the situations for which probability theory was originally constructed: gambling. Rolling dice is an easy example. There is a precise number of possible outcomes in rolling a die: each of six sides of a fair die, 1 through 6. Which number will actually be rolled on any throw is unknown. In the classical interpretation, the alternative outcomes are assumed to be equally probable. How probable is it that the player will roll, say, a 6? It is one out of six, or 1/6, or 16.667 percent. Thus, an impossible outcome (like rolling a 0) has a probability of 0; a certain outcome has a probability of 1.0 or 100 percent.

Here is the classical interpretation as summarized by Pierre-Simon Laplace in 1814:

The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.8

The process seems clean and clear. We could, by extrapolation, apply this method to determining the probability of betting on the right number in roulette, of drawing a royal flush, or of winning a lottery.
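Laplace’s ratio is easy to compute mechanically. Here is a minimal sketch in Python, my own illustration rather than anything drawn from the sources cited here, which counts favorable cases against equally possible cases for a single die and for a pair of dice.

```python
from fractions import Fraction
from itertools import product

def classical_probability(favorable, possible):
    """Laplace's ratio: favorable cases over equally possible cases."""
    return Fraction(len(favorable), len(possible))

die = range(1, 7)                              # the six equally possible faces

# The chance of rolling a 6 with one fair die.
print(classical_probability([6], list(die)))   # 1/6

# The chance that two fair dice sum to 7.
rolls = list(product(die, repeat=2))           # 36 equally possible outcomes
sevens = [r for r in rolls if sum(r) == 7]
print(classical_probability(sevens, rolls))    # 6/36, reduced to 1/6
```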

Note, however, that all this hinges on the assumption that each alternative outcome is in fact “equally possible,” that the die (or roulette wheel or deck of cards or whatnot) is fair. There is the rub. We accept dice as fair because they are properly marked and are constructed symmetrically (except perhaps for the minuscule differences in weight produced by the varying number of dots). We see no relevant differences among the six sides. The “fairness” of a die is not a function of the numbers one actually rolls with it; no one at the factory tests dice for their fairness by rolling them and marking the results. Yes, in theory, a fair die is one that will, given an infinite number of rolls, roll each number an equal number of times. But of course no one can roll dice infinitely. There is a hint here of the circularity that makes philosophers uneasy: the classical theory requires probability to be calculated on the base concept of equally possible alternative outcomes, but the notion of events being “equally possible” seems itself to designate a measure of probability—equal probability.

(2) Most of life’s choice situations are not as neatly defined as gambling problems. For these situations, the frequency interpretation may be applicable.

Earlier, I spoke of transformative choices in which we cannot know the nature and value of an alternative presented to us; but in many of life’s decisions, we cannot identify all of the alternative outcomes or fix their number; and even when we can, we may not be able to establish the likelihood of each occurring. Every student knows that even the blithely unprepared have a one-in-two chance of correctly answering a true-false question. But the calculation for a fill-in-the-blank question is much more difficult: we cannot identify all the possible choices or establish their likelihood. Calculating life insurance rates requires estimating the probability that someone will die in a given period. But the actuary cannot assume that all people are equally likely to die in that given period. All of us will die, of course; the relevant question is when.

A different method of coping with uncertainty was developed for these cases—it is the method applied in underwriting insurance: the researcher examines the frequency of alternative outcomes in similar situations in the past. This method is statistical inference, which combines the analysis of numerical data with induction, thereby drawing inferences regarding a whole set of cases from a given sample. It holds that probability statements are about ways in which the future will reflect the past. In this view, probabilities refer not to an individual event but to classes of similar events. For useful guidance, one needs a large number of previous cases, a “valid sample size” to produce “statistically significant” results.9

Take this statement: “The probability that a sixty-five-year-old American male will die during the year is one in sixty-one (1.6389 percent).” It means that in the set of such men in recent years, one of every sixty-one died during the year. We should expect the set of such men this year to exhibit the same death rate, if other conditions remain constant. This interpretation makes probability statements empirical and inductive, not a matter of logic alone. These inferences presume that the whole is like the part, and that the future will be like the past.

Notice the difference between the classical and the frequency methods. In the case of dice, the probability is an abstract calculation based on an idealized notion of possible outcomes. The probability that anyone will roll a 6 is not derived from a statistical analysis of previous dice rolls. By contrast, the probability that males will die is inferred from historical data. Nonetheless, abstract games-of-chance mathematics is often used to express statistical inferences. The death rate of males between sixty-five and sixty-six years old in the United States, which is determined by statistical analysis, is 0.016389; this is roughly equivalent to 1/61.10 The chance of any such male dying this year is therefore one in sixty-one. The gaming model of odds, which is the likelihood of possibilities, is thus used to express results in both methods.11
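The inference can be sketched in Python as well. The counts below are hypothetical, chosen only to reproduce the chapter’s rate of 0.016389; the point is the translation of an observed frequency into the gaming idiom of “one in N.”

```python
def empirical_rate(events, population):
    """Frequency interpretation: the relative frequency observed in past cases."""
    return events / population

def one_in_n(rate):
    """Translate a rate into the gaming idiom of odds: 'one in N'."""
    return round(1 / rate)

# Hypothetical counts chosen to reproduce the chapter's figure.
rate = empirical_rate(16_389, 1_000_000)
print(rate)            # 0.016389
print(one_in_n(rate))  # 61, i.e., roughly one in sixty-one
```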

The frequency interpretation requires drawing inferences from a relevant set of previous outcomes, but a difficulty arises in delineating the sets so as to derive the information we seek. Grouping things as sets always requires us to ignore individual differences among their members, but it is possible that some of those differences may relate to the probability we seek. Imagine a patient learns that a certain surgical implant has a 30 percent rejection rate. If this means that 30 of every 100 patients receiving the implant suffered rejection and the implant had to be removed, then the presumption is made that one patient is like another for purposes of this comparison. And that may be false. Patients are unique, and there is no patient exactly like the current subject.

Another concern in both examples is the ceteris paribus requirement—“other things being equal,” the stipulation that “other conditions remain the same.” In the implant rejection case, there may now or soon be improved methods of implantation, antirejection drugs, advanced postoperative care, a younger and generally healthier group of patients needing the implant, and so on. The future context is never an exact match with the past.

Most importantly, notice also that neither method will predict the outcome of a particular case: knowing the odds doesn’t predict the next roll of the die; knowing the death rate doesn’t identify which individuals will die. Our ignorance regarding any specific case remains. This is the genius—and the limitation—of probability theory: because identifying determinative causal factors is so difficult, it ignores them altogether.12 Dice are used as models of chance because the determinative factors in any roll are impossibly complex for our analysis and are not in our control. Similarly, causes of death are various and subject to individual factors; while they are not random, they cannot be processed so as to make individual predictions. Using the statistical method, researchers may attempt to reveal causes by subdividing the population, testing different subsets, and applying regression analyses. For example, one might divide the relevant male population by race or smoking practices or marital status to determine whether there are differential correlations, different death rates for each group. But a high rate of correlation is still just that: a correlation, not a cause. Here is the takeaway point: probability theory allows us to claim knowledge of populations and series, large classes of events, and to manage—and mask—our ignorance of individual events that lie beyond our prediction or control.
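A minimal sketch of such subdivision, with counts invented purely for illustration, shows how differing rates across subgroups exhibit a correlation without, by themselves, establishing a cause.

```python
# Hypothetical subgroup counts, invented purely for illustration.
subgroups = {
    "smokers":    {"deaths": 30, "population": 1_000},
    "nonsmokers": {"deaths": 12, "population": 1_000},
}

for name, counts in subgroups.items():
    rate = counts["deaths"] / counts["population"]
    print(f"{name}: annual death rate {rate:.1%}")

# Differing rates reveal a correlation with smoking status,
# not (by themselves) a cause of death.
```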

(3) A third interpretation makes probabilities neither about logic nor about a pattern of past events in the world, but rather about one’s degree of belief. It is usually termed the subjectivist interpretation. This account seems most plausible for statements like: “I will probably attend the concert” or “I am not likely to finish this project by tomorrow.” In our previous cases, the intent was to assert something about the outcomes themselves, not about one’s confidence that a particular outcome would occur. Moreover, it is true that one may be quite confident that an event is very unlikely, or unsure of whether an event is highly probable. Thus the concepts of probability and confidence are clearly distinguishable. Where they connect is when one is addressing the likelihood of one’s choices or actions; one’s intention and one’s level of commitment to a belief then become relevant factors in determining the likelihood of outcomes.

The three theories sketched here have been ably summarized by Herbert Weisberg in his discussion of the history of probability theory:

There are three main ideas that have been subsumed in what we generally regard today under the heading of probability. First, probability is assumed to obey certain mathematical rules that are illustrated in their purest form by games of chance. Second, statistical regularities observed in natural and social phenomena are regarded as examples of probability in action. Third, probability reflects in some sense a degree of belief or measure of certainty.13

All three interpretations (and there are other variations)14 report probabilities in quantitative terms, but the first two offer a basis for precision in objective (or, rationally debatable) factors; they conceive of probability itself as an inherently mathematical notion. The subjectivist’s “degree of belief” is inherently less quantitatively precise. (How does one measure the difference between a confidence of 75 and 80 percent?) But it reveals more openly the underlying ignorance that the first two interpretations disguise.

(4) Finally, an older, even ancient, notion of probability is still operative in some arenas today. I will term it the judgment interpretation. It is neither formalized nor quantified, but rather takes probability to be a matter of the weight of salient evidence. We find this notion today in the application of the legal term “probable cause.” A judge, asked to issue a warrant or subpoena, does not determine whether there is a 1/6 cause or a 5/6 cause; the determination is made more loosely based on an assemblage of relevant evidence, argument, and legal considerations. Since probability in this context is a matter of judgment, it is easy to see how the focus can morph from the weight of evidence into the (judge’s) degree of confidence in belief (the subjectivist interpretation). But the concept of probable cause points to objective factors—the factors the judge weighs. Admittedly, the judicial standard of “beyond a reasonable doubt” for a guilty verdict seems to point ambiguously toward the evidence and also toward the jury’s degree of confidence in their belief. Both embody claims about whether a probability is sufficient for a particular judicial action; both assist us in the management of ignorance.

The Chance of Rain

As we have seen, nagging questions remain about the concept of probability under its varying interpretations: vicious circularity, assumptions of statistical inference, unfounded quantification of confidence, the imprecision of judgment, and so on. These issues leave us with the vexing problem of understanding what exactly we mean by a statement of probability. Assertions of probability are commonplace, but do we really know what they mean? For example: what does a weather forecaster mean by the statement, “There is a 20 percent chance of rain tomorrow”?

National Public Radio asked this very question with hilarious results. The responses from the public revealed wildly different beliefs and the admitted uncertainty of many, including a mathematician and a respected meteorologist!15 The total number of respondents was 42,143—quite a good sample. Among the interesting results: some respondents took the statement to mean that it will rain for 20 percent of the time tomorrow; others, that rain will fall over 20 percent of the region; others, that 20 percent of weather forecasters believe it will rain; a fourth group, that it rains on 20 percent of days like tomorrow; and still others could specify no interpretation at all.

Note first that in all of the interpretations (except perhaps those unspecified), the implication is that we could replace “it will rain” with “it will certainly rain”: for example, “it will (certainly) rain tomorrow for 20 percent of the time.” In other words, these interpretations attempt to eliminate or cash out the uncertainty involved in probability by its expression as a statistical fact. This uses redirection to transform ignorance into apparent certainty. Second, these interpretations are still plagued by vagueness: where will it rain; who count as “weather forecasters”; what are the boundaries of “the region”; what exactly are “days like tomorrow”? The fourth interpretation has a verification problem: it projects an infinite number of “days like tomorrow”—but in practice, we cannot observe an infinite number of trials. And all of them are imprecise about what is meant by “rain”—a few drops, a drizzle, a short and scattered burst, a sustained deluge, and so on. Forecasters are aware of this problem and usually supplement the probability statement with additional descriptions, such as “scattered showers,” or “thunderstorm around noon.”

Setting aside audience perceptions, what is the intended meaning of the forecast? To answer this question, the National Weather Service published an official definition, though they used “a 40 percent chance of rain” (which might seem to double the odds):

Mathematically, PoP [Probability of Precipitation] is defined as follows:

PoP = C × A where “C” = the confidence that precipitation will occur somewhere in the forecast area, and where “A” = the percent of the area that will receive measurable precipitation, if it occurs at all.

So … in the case of the forecast above, if the forecaster knows precipitation is sure to occur (confidence is 100%), he/she is expressing how much of the area will receive measurable rain. (PoP = “C” × “A” or “1” times “.4” which equals .4 or 40%.)

But, most of the time, the forecaster is expressing a combination of degree of confidence and areal coverage. If the forecaster is only 50% sure that precipitation will occur, and expects that, if it does occur, it will produce measurable rain over about 80 percent of the area, the PoP (chance of rain) is 40%. (PoP = .5 × .8 which equals .4 or 40%.)

In either event, the correct way to interpret the forecast is: there is a 40 percent chance that rain will occur at any given point in the area.16

So we now know that “rain” means “measurable precipitation”—which is elsewhere stipulated as “at least .01 inch.” And we see that the affected portion of the “forecast area” is indeed a factor (“how much of the area will receive measurable rain”). This reduces the probability to a fact: a percentage of area.

But the other factor is quite odd: the degree of confidence of the forecaster (singular, not plural) that precipitation will occur somewhere in the area. This is very troublesome. First, combining these two factors creates an ambiguity: any given percentage (say, our initial 20 percent) could represent either the product of a small area and high confidence (20 percent of the forecast area and 100 percent confidence) or of a large area and low confidence (100 percent of the area and a measly 20 percent confidence)—and any combination in between. But the prediction would read the same: “20 percent chance of rain.”

Second, how does one quantify a degree of confidence? It seems a strange approach to make the percentage a function of both the degree of belief and a geographical area. Indeed, it is hard to see how “correctness” could be applied to a quantified statement that refers to a subjective level of confidence—and with the implied singular forecaster. The combination of ambiguity and vagueness leads me to conclude that the official attempt to interpret this probability statement has failed.
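The ambiguity is easy to make concrete. Here is a minimal sketch in Python applying the quoted NWS formula; the particular combinations of confidence and coverage are my own illustrations.

```python
def pop(confidence, area_fraction):
    """The quoted NWS formula: PoP = C x A (confidence times areal coverage)."""
    return confidence * area_fraction

# Very different forecasts collapse into the same announced number.
print(pop(1.0, 0.2))   # certain of rain, but over only 20% of the area: 0.2
print(pop(0.2, 1.0))   # only 20% confident, but rain everywhere if it comes: 0.2
print(pop(0.5, 0.4))   # a middling mixture of the two factors: 0.2
```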

Given the widespread meteorological use of computer modeling, one might well have expected that a forecaster’s prediction would be based on computer simulations, which are, in turn, derived from the patterns of past weather records.17 In short, my expectation was that a version of the frequency interpretation, grounded in statistically based modeling, would have informed the definition. I am astonished that there is no mention of a frequency interpretation in this official account; my suspicion is that the omission is an unfortunate mistake. Even so, every weather day is unique; so one would need to set parameters, necessarily deciding to ignore individual differences, in order to construct a class of past weather days like today and examine the tomorrows that followed.

However one unpacks the definition, the result would not cash out the inherent uncertainty in the statement. The assertion that “there is a 20 (or 40) percent chance of rain tomorrow” will not be confirmed or refuted by whether it rains or shines on me tomorrow. So the forecaster can claim a correct prediction either way.

Despite their vagueness, ambiguity, and residual uncertainty, we often are guided by such probability statements, especially when the probability stated is high or low—and rightly so: we are more likely to carry an umbrella if the chance of rain is 90 percent than if it is 20 percent. Where probabilities are cashed out as patterns or percentages of actualities, they are descriptively verifiable only as statements about a class of previous events; but where they are derived from such facts, yet offered as predictions, forecasts, odds, or chances of future events, they are indexes of our ignorance.

Other Intellectual Tools

Though I have focused on probability theory, mathematics offers numerous concepts and techniques for ignorance management. The statistical concept of margin of error is an example. It is a measure of sampling error, the likely variation from the sampling result one should expect if the whole class were surveyed. If a political candidate receives 33 percent voter support among the sample in a political poll with a ±3 percentage point margin of error, we should expect between 30 and 36 percent of the entire population to support that candidate. But the likelihood of the whole-population result being “within the margin of error” is itself a probability. Today, a researcher typically strives for a 95 percent confidence level, a statistical measure of the reliability of results: if the sampling were repeated many times, roughly 95 percent of the resulting intervals would capture the true population value. This too can be read as an index of uncertainty.
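The standard calculation behind such figures is simple enough to sketch in Python. The sample size below is my own assumption, chosen because a sample of roughly a thousand respondents is what a ±3-point poll typically requires.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a sample proportion.
    Assumes simple random sampling; z = 1.96 corresponds to ~95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 33% support measured in a sample of 1,000 respondents.
moe = margin_of_error(0.33, 1_000)
print(f"margin of error: +/- {moe:.1%}")   # about +/- 2.9 points
```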

One could argue that the entire discipline of algebra is a tool for gaining knowledge by managing ignorance. Algebra abstracts mathematical relationships from numbers, allowing the manipulation of unspecified quantities, using letters (a, b, c … x, y, z) to represent unknowns. It is from its use in algebra that X became a common symbol for “the unknown.”

This meaning of the letter X may be traced to the Arabic word for thing, or šay. Early Arabic texts such as Al-Jabr (820 CE), which established the principles of algebra (and gave the discipline its name), referred to mathematical variables as things. So, we might read an equation as “3 things equal 21” (the thing being 7). Much later, when Al-Jabr was translated into Old Spanish, the word šay was written as xei, which was soon shortened to X. Today we find X used to designate phenomena that are mysterious (X-rays, The X Files, The X Factor, X the Unknown) or the unknown or forgotten (as when Malcolm Little honored his ancestors by changing his name to Malcolm X). X is the symbol of our ignorance.
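The toy equation can even be handed to a symbolic solver. Here is a brief sketch using the sympy library, a choice of convenience on my part rather than anything cited in the text.

```python
from sympy import Eq, solve, symbols

x = symbols("x")                 # the "thing," the unknown
print(solve(Eq(3 * x, 21), x))   # [7]
```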

The devices of ignorance management that have been developed in other disciplines and in social use are endless. For policy and planning, we use projective scenarios, simulations, and feedback loops. In science, replication of experiments by others is used to increase confidence in the results. Special protocols of confirmation for important military orders are designed to reduce uncertainty. Passwords and related security devices also serve to reduce uncertainty, both by identifying those entitled to information and by excluding those who are not. We manage our individual ignorance about esoteric matters by consulting certified experts: it is the premise of television’s Antiques Roadshow and of services such as jewelry appraisals, home inspections, and financial audits. The list goes on.

In closing this chapter, I should note once again that we often use ignorance constructively, especially to create conditions of fairness. To decide fairly who should receive the initial kick-off in a football game, we toss a coin. To determine who wins a door prize, we draw a ticket from an unseen pile in an opaque jar. To assign a high-risk mission, we may draw lots. In such cases, we turn over the decision to chance, to events so complex we cannot predict or control the outcome. This is quite different from John Rawls’s use of the veil of ignorance or from “blindfolded” justice in the courtroom: while these, too, use ignorance to assure fairness, they do not opt for chance or unpredictability; they choose the exclusion of specific information, purposefully bounded ignorance.

I also close with a nod to the game of baseball for my appropriation of its rich terminology. It was Muddy Ruel or Bill Dickey, depending on your source, who first applied the term tools of ignorance to the catcher’s equipment: mask, chest protector, shin guards, cup, and glove—a good metaphor for the instruments of this chapter.

Notes