Ralph Hertwig and Christoph Engel
Western history of thought abounds with claims that knowledge is valued and sought, yet people often choose not to know. We call the conscious choice not to seek or use knowledge (or information) deliberate ignorance. Using examples from a wide range of domains, this chapter demonstrates that deliberate ignorance has important functions. We systematize types of deliberate ignorance, describe their functions, discuss their normative desirability, and consider how the phenomenon can be theorized. To date, psychologists have paid relatively little attention to the study of ignorance, let alone the deliberate kind. The desire not to know, however, is no anomaly. It is a choice to seek, rather than reduce, uncertainty whose reasons require nuanced cognitive and economic theories and whose consequences—for the individual and for society—require analyses of both actor and environment.
Yet ah! Why should they know their fate?
Since sorrow never comes too late,
And happiness too swiftly flies.
Thought would destroy their paradise.
No more; where ignorance is bliss,
‘Tis folly to be wise. —Gray (1747)
The old saw “What you don't know won't hurt you” turns out to be false at a deeper level. Just the contrary is true: “It is just what you don't know that will hurt you.”…Ignorance makes real choice impossible. —Maslow (1963)
When James Watson, co-discoverer of the structure of DNA, agreed to have his genome sequenced and released, he had one request: Information about the apolipoprotein E gene, associated with late-onset Alzheimer disease, should not be shared, even with him (Wheeler et al. 2008). What made this quintessential knowledge-seeker shrink from this information?
Knowledge is valued; knowledge is sought. Western history of thought abounds with examples. Adam and Eve could not help but eat from the tree of knowledge. The first line in Aristotle's Metaphysics reads: “All men, by nature, desire to know” (Ross 1924:255). English philosophers Thomas Hobbes and Francis Bacon celebrated curiosity and the pleasures of learning. Hobbes located curiosity among the passions and considered it a kind of “perpetuum mobile of the soul” (Daston and Park 2001:307). Curiosity is a pure desire, distinguished “by a perseverance of delight in the continual and indefatigable generation of Knowledge, [which] exceedeth the short vehemence of any carnall Pleasure” (Hobbes 1651/1968:124). Similarly, Bacon said of knowledge: “there is no satiety, but satisfaction and appetite are perpetually interchangeable” (Montagu 1841:250).
Modern psychology has echoed these views and portrayed humans as possessing an emotion-like urge to know (Silvia 2008) or an instinct-like “burning curiosity” (Maslow 1963:114). Building on Carnap's (1947:138–141) “principle of total evidence,” philosophers have argued that utility maximizers use all freely available evidence when estimating a probability (Good 1967), and economists have contended that utility maximizers always prefer more information to less (Blackwell 1953). Legal scholars claim that more knowledge promotes the veracity of judgments and facilitates settlement (Loewenstein and Moore 2004). Economic models often assume that more knowledge translates into greater bargaining power (see references in Conrads and Irlenbusch 2013). Psychoanalysts help individuals to liberate themselves from their “ostrich-like policy” of repressing painful knowledge (Freud 1950:152). Knowledge is valued; knowledge is sought.
In today's aging societies, the risk of outliving personal assets is real. Economic life-cycle models suggest spending those assets optimally; that is, tailoring consumption patterns such that assets reach zero at death (Modigliani 1986). To plan accordingly, however, retirees need at least one crucial piece of information: the date of their death. Yet do we mortals—as opposed to our economically rational alter egos—really want to know exactly when we are going to die? To have a “good” death, perhaps we should. The medieval Ars Moriendi literature warns that a sudden death robs people of the opportunity to repent their sins. From this perspective, prisoners facing execution are “fortunate,” as they know the hour of their death (Bellarmine 1989).
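The life-cycle logic can be made concrete with a small numerical sketch (the figures, the interest rate, and the zero-bequest annuity formula are illustrative assumptions, not part of Modigliani's original analysis):

```python
def annual_consumption(wealth, years_left, r=0.03):
    """Constant annual spending that exhausts `wealth` exactly after
    `years_left` years, with remaining savings earning interest `r`
    (the standard annuity formula with a zero bequest)."""
    if r == 0:
        return wealth / years_left
    return wealth * r / (1 - (1 + r) ** -years_left)

# A retiree with $500,000 who knew she had exactly 20 years left could
# spend about $33,600 a year. Without the date of death, the formula
# has no well-defined input: guess 20 years but live 30, and wealth
# runs out a decade early.
spend = annual_consumption(500_000, 20)
```

The sketch makes the chapter's point tangible: the optimal consumption path is computable only once the one piece of information most of us prefer not to have is supplied.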
Although humans are often portrayed as informavores, the circumstances under which they refrain from acquiring or consulting information are many and varied. Take, for instance, individuals at risk of Huntington disease. Nearly everyone with the defective gene who lives long enough will go on to develop this devastating condition. Yet only 3% to 25% of those at high risk opt to take the near-perfect test available to identify carriers of the gene (Creighton et al. 2003; Yaniv et al. 2004). Similarly, up to 55% of people who decide to be tested for HIV do not subsequently return to learn their test results (Hightow et al. 2003).
Knowledge is not always sought (Ullmann-Margalit 2000). The Stasi, East Germany's secret police, recruited vast networks of civilian informers—colleagues, friends, and even spouses—to spy on anyone deemed disloyal. When East Germany ceased to exist, people were allowed to consult the files that had been kept by the Stasi to see who had informed on them, sometimes with heartbreaking results (Jones 2014). Not everyone, however, wanted to know. Nobel laureate Günter Grass, for example, a frequent visitor to East Germany, refused to find out which of his friends and colleagues had spied on him (Hage and Thimm 2010).
The reality, functions, and rationality of this epistemological abstinence are our focus here. We are not interested in ignorance, per se (Gross and McGoey 2015; Merton 1987; Moore and Tumin 1949; Schneider 1962), in the institutional “production” of ignorance (Proctor and Schiebinger 2008), or in the suppression of unwanted memories (Anderson and Green 2001). In addition, we do not doubt that ignorance can have enormous individual and collective costs (e.g., Marshall 2014). Our concern, instead, is with deliberate ignorance, defined as the conscious individual or collective choice not to seek or use information (or knowledge; we use the terms interchangeably). We are particularly interested in situations where the marginal acquisition costs are negligible and the potential benefits large, such that—from the perspective of the economics of information (Stigler 1961)—acquiring information would seem to be rational (Martinelli 2006).
We believe that deliberate ignorance is anything but a rare departure from the otherwise unremitting quest for knowledge and certainty: It is an underrated mental tool that exploits the sometimes ingenious powers of ignorance. We therefore posit that psychological science has erred in choosing to remain largely ignorant on the topic of deliberate ignorance. We demonstrate that deliberate ignorance is widespread, propose a taxonomy that brings structure to the rich body of examples provided, and address normative issues: Is deliberate ignorance a good thing? If so, when, for whom, and why?
Mainstream social and behavioral sciences have long skirted the topic of ignorance (“a certain sociological ignorance of ignorance”: Abbott 2010:174) or treated it as a social problem in need of eradication (Ungar 2008). Recently, though, sociologists, philosophers, and anthropologists have come to view ignorance as an object of study with important epistemological and political implications (Gross and McGoey 2015; High et al. 2012; Proctor and Schiebinger 2008). Psychologists, however, have barely been involved in the new study of ignorance or deliberate ignorance, although selective exposure is pertinent to it (Hart et al. 2009). Against this background, we propose the taxonomy outlined in Figure 1.1. Our taxonomy is just that: an attempt at organizing the evidence. An important next step will be theory building. But first, it is important to recognize the landscape of deliberate ignorance. The taxonomy maps out what is, in large part, uncharted empirical and conceptual territory in psychology.
Figure 1.1 Taxonomy of types of deliberate ignorance.
People can manipulate their beliefs by selecting the sources of information they consult (Akerlof and Dickens 1982) and ignoring some sources altogether. Information avoidance, also termed defensive avoidance (Howell and Shepperd 2013) or protective ignorance (Yaniv et al. 2004), has been defined as “any behavior intended to prevent or delay the acquisition of available but potentially unwanted information” (Sweeny et al. 2010:341). It has primarily been studied in the health domain (Howell and Shepperd 2012; Melnyk and Shepperd 2012; Shani et al. 2012). People may choose to avoid potentially threatening health information for several reasons: it may compromise cherished beliefs; they may fear loss of autonomy (e.g., a grueling medical regimen); they may anticipate mental discomfort, fear, and cognitive dissonance; or they may want to keep hope alive.
On a pragmatic level, medical information may have material implications. People with the Huntington disease gene may fear stigmatization, discrimination in the workplace, and loss of medical or insurance benefits (Wahlin 2007). In addition, once an irreversible decision has been made (e.g., to undergo a risky treatment), a person may want to avoid regret by not seeking information that suggests a different decision might have produced a better outcome (Van Dijk and Zeelenberg 2007).
The regulatory function of deliberate ignorance may extend to a wider range of domains (e.g., investors who ignore their portfolios in downturns: Karlsson et al. 2009) as well as to emotions (e.g., social and moral emotions: Elster 1996; Hutcherson and Gross 2011). One such emotion is envy. Pay secrecy can be a firm's strategy to hide pay inequality. Among employees, choosing not to discuss one's pay with one's colleagues can be a conscious strategy to avoid envy and its potentially detrimental effects on job satisfaction.
Suppose you are planning to spend the weekend binging on the new season of your favorite TV drama. Would you appreciate a friend giving you a preview? Hardly. People attend soccer games and read mystery novels for the drama. Revealing the ending would spoil their fun. Any policy designed to maximize suspense or surprise will reveal key outcomes (e.g., your birthday present) only at the last minute (Ely et al. 2015).
A common belief in psychology and beyond is that presenting learners with information on their task performance is a powerful and effective way to boost performance. Yet feedback has also been shown to reduce performance under some circumstances (Kluger and DeNisi 1996), such as when it causes attention to be directed away from the task to the self, depleting the cognitive resources needed for the task. It has also been suggested that feedback revealing large discrepancies between aspired-for and actual performance triggers arousal that, in turn, impairs performance (Kluger and DeNisi 1998). These detrimental effects raise the counterintuitive possibility that deliberately foregoing information may enhance learning and, relatedly, performance (Huck et al. 2015; Shen et al. 2015). For instance, arousal might be particularly high and disadvantageous when comparisons with a rival are involved (Garcia et al. 2013).
Another way in which deliberate ignorance may enable performance—and we admit that this idea is purely speculative—is the tendency to adopt an inside view when intuitively forecasting the future progress of a plan. According to Kahneman and Lovallo (1993), people tend to look at the unique details of a plan or project rather than focusing on the statistics of a class of past similar cases. This mindset is typically regarded as bias, resulting in overly optimistic forecasts. Yet taking an inside view and deliberately ignoring outside information may be instrumental to reaching the decision to engage in an ambitious project. It is possible that no textbook would ever be written, no house built, and no opera composed if people based their decision on the progress and success of similar endeavors.
In economics, psychology, political science, and sociology, the reason most frequently invoked to explain why people do not always seek knowledge is strategic ignorance. Strategic ignorance has diverse functions; we discuss four of them (Figure 1.1).
Since Schelling (1956), economists have investigated to what extent deliberate ignorance helps negotiators to gain a bargaining advantage (McAdams 2012). Consider a situation in which one negotiator does not know how costly a breakdown in negotiations would be for both parties. Typically, there are multiple options for striking a successful deal, and each has a different degree of appeal for the negotiating parties. Both parties would generally prefer any of these options to a breakdown in negotiations. In game theoretic terms, the typical bargaining situation puts negotiators in a “battle of the sexes.” If one party opts not to know what a reasonable solution is, the burden of avoiding a stalemate rests with the informed bargainer, who is forced to make concessions from which the ignorant party stands to gain.
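The strategic logic can be illustrated with a stylized battle-of-the-sexes payoff matrix (the payoff numbers and option labels are hypothetical, chosen only to reproduce the structure described above):

```python
# Battle-of-the-sexes payoffs, keyed by (player 1's demand, player 2's
# demand): each side prefers a different deal, but both prefer any
# deal to a breakdown (payoff 0). Numbers are hypothetical.
payoffs = {
    ("A", "A"): (3, 1),   # deal A: better for player 1
    ("B", "B"): (1, 3),   # deal B: better for player 2
    ("A", "B"): (0, 0),   # stalemate
    ("B", "A"): (0, 0),   # stalemate
}

def best_response(informed_options, opponent_demand):
    """The informed player's payoff-maximizing reply to a fixed demand."""
    return max(informed_options,
               key=lambda mine: payoffs[(mine, opponent_demand)][0])

# Player 2, by remaining ignorant of what a reasonable compromise
# would be, credibly insists on B. Conceding (payoff 1) now beats
# stalemate (payoff 0), so the informed player 1 gives in:
concession = best_response(["A", "B"], "B")   # "B"
```

The ignorant party thus converts a lack of knowledge into a commitment device: the burden of avoiding the zero-payoff outcome falls entirely on the side that can still see the compromise.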
Forsaking information may even help both parties. If the information is likely to be ambiguous, for example, any egocentric bias in resolving this ambiguity may shrink the bargaining range (Loewenstein and Moore 2004). Indeed, a number of experimental bargaining studies and principal-agent situations (Crémer 1995) have shown that negotiating players may benefit from ignorance and that a nontrivial number of players deliberately decide to remain ignorant. This observation holds if players can hide their intention to remain ignorant (Conrads and Irlenbusch 2013).
Second, deliberate ignorance may function as a self-disciplining device. This possibility is elaborated in Carrillo and Mariotti's (2000) theoretical analysis of a person with time-inconsistent preferences (i.e., a future incarnation of the self has goals different from those of the present self) with respect to consuming a good that carries future health costs. For instance, nonsmokers who believe the risk of lung cancer to be high may fear that seeing lower estimates would encourage them to smoke, and thus change their behavior in a way they will later regret.
Third, people can eschew responsibility for their actions by avoiding knowledge of how those actions and their outcomes affect others or public goods such as the environment (Thunström et al. 2014). Studies using the dictator game have shown that the opportunity to avoid responsibility (by choosing to be ignorant of the recipient's payoffs) increases the proportion of selfish choices; conversely, when players cannot avoid responsibility, they render fairer (or more ethical) choices (Dana 2006; Dana et al. 2007). Eschewing moral responsibility through ignorance also helps to prevent cognitive dissonance: “often it is better not to know because if you did know, then you would have to act and stick your neck out” (Maslow 1963:123). Utility-maximizing individuals may even be willing to pay to be shielded from information (Nyborg 2011).
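The “moral wiggle room” these dictator-game studies document can be sketched as follows. This is a stylized rendering of the Dana et al. (2007) hidden-payoff design (the payoff numbers follow that design, but the behavioral rule is an assumption made for illustration):

```python
def play(dictator_reveals, conflict):
    """Return (dictator, recipient) payoffs. Option A pays the dictator
    more (6 vs. 5); whether A also hurts the recipient depends on a
    hidden state (`conflict`) the dictator may reveal at no cost.
    Assumed behavioral rule: act selfishly while ignorant, fairly once
    a conflict of interest is revealed."""
    recipient_if_A = 1 if conflict else 5
    recipient_if_B = 5 if conflict else 1
    if dictator_reveals and conflict:
        return (5, recipient_if_B)   # revealed conflict: fair choice B
    return (6, recipient_if_A)       # ignorant (or no conflict): take A

# Remaining ignorant lets the dictator keep the higher payoff of 6
# without ever *knowingly* harming the recipient:
ignorant = play(dictator_reveals=False, conflict=True)   # (6, 1)
informed = play(dictator_reveals=True, conflict=True)    # (5, 5)
```

The sketch shows why free information goes unclaimed here: revealing the state can only constrain the dictator, never improve the selfish payoff.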
Fourth, choosing to remain ignorant can be a strategy for avoiding liability in a social or even a legal sense (Gross and McGoey 2015; McGoey 2012a). It can be used in individual as well as collective and institutional contexts.
As just one example, scientific communities, funding institutions, and lawmakers decide to leave some areas of inquiry unfunded because exploring them involves profound risks to the public (e.g., research on highly pathogenic avian influenza H5N1 viruses: Fouchier et al. 2012). Finally, policy makers may resist evidence-based evaluation of their policies because they do not want to be held responsible for failures. In recent years, for instance, the German federal states have made it impossible for researchers to break down the data of the Programme for International Student Assessment (PISA) by state, thereby preventing scientists and the general public from comparing performance across federal states.
Let us briefly turn to the pervasive role of deliberate ignorance as a strategy for avoiding liability in legal affairs. There are few places where deliberate ignorance plays a more central role than in the courtroom. Under most rules of criminal law, it must be shown to the requisite standard that a defendant was aware of the facts that constitute the crime in question. To illustrate, consider the U.S. Code, Title 18 (Crimes and Criminal Procedure, Part 1, Section 1035) on false statements relating to health care matters:
“(a) Whoever, in any matter involving a health care benefit program, knowingly and willfully…makes any materially false, fictitious, or fraudulent statements or representations…shall be fined under this title or imprisoned not more than 5 years, or both” [emphasis added].
This and other provisions require the determination of positive knowledge. A potential defendant may therefore avoid criminal liability simply by not acquiring knowledge. Legal systems sometimes seek to override this strategy (Robbins 1990). For instance, the “ostrich instruction” tells jury members in U.S. courts that they may find the knowledge requirement to be satisfied by a defendant's willful ignorance of the relevant facts. Yet this instruction raises important questions, such as why the willfully blind actor is, in a normative sense, legally and morally culpable (Hellman 2009) and what exactly the mental state of willful ignorance is, including the underlying motives (Sarch 2014). Last but not least, how the legal system evaluates the implications of deliberate ignorance depends on who the homo ignorans is. In the attorney–client relationship, the lawyer's deliberate ignorance is tacitly approved. It has been argued that attorneys, notwithstanding their obligations to the public, must be permitted, in the interest of loyalty to their client, not to seek out important information pertaining to the client's conduct. This practice has been argued to raise ethical issues (Roiphe 2011).
In his conception of a social contract, Rawls (1999) asked readers to place themselves in a hypothetical state of not knowing their place in society, or any other personal, social, or historical circumstances. Theoretically speaking, everyone thus shielded by a thick veil of ignorance from the temptation of pursuing their own special interests would agree on universal standards of fairness and justice. Beyond the realm of thought experiments, this veil-of-ignorance method is used, for instance, by experimenters in double-blind randomized trials (Kaptchuk 1998), hiring boards, and courts to preempt bias. One example is blind auditioning in symphonic orchestras. This fairly recent change in the audition policies of major U.S. orchestras (e.g., candidates play behind a screen to hide their identity) has contributed to a higher probability of female musicians being hired, thus substantially boosting the proportion of women in symphonic orchestras (Goldin and Rouse 2000).
In 2008, the average American was estimated to consume 100,500 words and 34 gigabytes (10⁹ bytes) of information per day (Bohn and Short 2009). Though vast, this amount is small compared with what might theoretically have been consumed (Hilbert and López 2011). With the arrival of technologies and data-collecting devices such as predictive genetic testing, self-tracking devices that measure, for instance, the number of bites per meal, ubiquitous computing, the Internet of Things, and myriad social media (e.g., Facebook, Twitter, WhatsApp), modern societies have entered a brave new world. Depending on one's perspective, it is either a paradise or a nether world where people drown in intractable amounts of information.
In this new world, countless actors (e.g., companies, advertisers, media, and policy makers) seek to colonize and appropriate people's attention. There is a risk that “hyperpalatable mental stimuli” designed to capture limited attentional resources will hijack the human mind, which evolved in a different information ecology (Crawford 2015). By the same token, obesogenic environments now brim with inexpensive, convenient food products engineered to take consumers to their bliss point (i.e., the concentration of sugar or fat or salt at which sensory pleasure is maximized). Evolved to crave such hyperpalatable food, consumers risk losing control over what and how much they eat (Moss 2013). Just as food engineers have become masters at hitting people's physical bliss points, the (social) media and Internet companies have become experts in designing mental stimuli that commandeer people's attention: The Internet now hosts some 700–800 million individual porn pages alone (The Economist 2015). “Stimulation begets a need for more stimulation” (Crawford 2015:17) and distractibility may be the mental equivalent of obesity. In an informationally fattening environment, citizens risk losing control over how they allocate their attention.
Alarm about information overload is arguably as old as the concept of information itself (Bell 2010). Nevertheless, attending to a piece of information does exact opportunity costs: the choice to know one fact invariably implies not knowing other facts. For humans, who are hardwired to monitor their environment, the ability to allocate one's limited attentional resources reasonably is becoming increasingly valuable in today's world. Indeed, the ability to select a few valuable pieces of information and deliberately ignore others may become a core cultural competence to be taught in school like reading and writing: “[A]n ability to ignore things would seem to remain important to the lifelong task of carving out and maintaining a space for rational agency for oneself, against the flux of environmental stimuli” (Crawford 2015). This concludes our classification of the types and functions of deliberate ignorance.
Our taxonomy is descriptive. What about the normative perspective? Is deliberately ignoring information desirable for the individual and for society? By what normative standards is ignoring information to be assessed?
Approaching these questions from a consequentialist perspective, one must identify and compare all foreseeable consequences of acquiring versus neglecting information: for the decision maker as well as for all others (potentially) affected by their choice. Take, for instance, health information. Although some researchers stress the individual and social harm of ignoring health information (Case et al. 2005; Sweeny et al. 2010), others emphasize the protective benefits of doing so (Shani et al. 2012; Yaniv et al. 2004). The balance between costs and benefits may depend on various subjective concerns and objective facts. One important variable is whether any action can be taken in response to the information obtained.
To illustrate, let us return to James Watson, who declined information on his genetic predisposition to late-onset Alzheimer disease, which was thought to have claimed the life of his grandmother (Nyholt et al. 2009). Watson perhaps thought that any benefits of knowing would be undone by the lack of medical treatment or cure available (Wheeler et al. 2008). Alternatively, he may have wanted to spare himself the dread of waiting for the onset of symptoms (Berns 2006). Is the choice not to know irrational or ethically dubious? Some researchers have suggested that individuals have a right not to know in the context of genetic predictive testing, and various international conventions have recognized this right (Wehling 2015a). Others have argued that ignorance undermines self-governance (Bortolotti 2012; Harris and Keywood 2001).
When ignoring information exposes others to risk (or imminent harm), Mill's harm principle may be invoked (Brink 2014). Not picking up one's HIV test results may put future sexual partners or an unborn child at risk, because if the disease is treated, it is far less likely to be transmitted. A hard-nosed welfare theorist would simply sum up the utilities of all possible consequences and—akin to the notion of “efficient breach of contract” (Cooter and Ulen 2008:262–268)—entertain the notion of “efficient ignorance”: Provided the (expected) damage to the victim is smaller than the (present) gain for the person ignoring the information, society should approve of ignorance. It could do so, for instance, by exempting individuals who forego the opportunity to acquire that information from liability. Most non-economists, however, find the concept of “efficient breach” repugnant (Lewinsohn-Zamir 2012). They are likely to see efficient ignorance in the same light, especially when the commodity in question is life and limb.
A distinction that is key to Mill's harm principle—that between consensual and nonconsensual harm—would also be a nonissue for the same adamant welfare theorist. Returning to our example of the unclaimed HIV test result, deliberate ignorance may cause consensual harm (to a consenting sexual partner aware of the risk) or nonconsensual harm (Brink 2014). The welfare theorist would reason that an individual consents either because that person is indifferent to the risk or because the consent was given in exchange for compensation (sex, to continue our example). Again, most people would part company with this argument, though they might accept truly voluntary consent as a justification for not claiming an HIV test result.
In other cases, the welfare balance seems straightforward. If there is a risk of liability, an individual may wish to forgo information that institutions (e.g., employers, courts) or society at large will want to be known. The opposite may be true in jury decision making. An individual juror may be curious (Loewenstein 1994) or expect some private reward for finding out specific information (Kang et al. 2009). Society, however, wants courts to be impartial and therefore enforces ignorance (e.g., by barring character evidence).
If the information to be deliberately ignored is unsolicited, the normative question shifts from the legitimacy of not acquiring or using available information to the right to protect oneself against information intrusion. Many diagnostic tests inevitably produce surplus medical information that “more often than not, would have been left undiscovered” because the abnormality would not have bothered the patient during her lifetime. The problem is that once, say, a microcarcinoma has been discovered, it “cannot easily be ignored” (Volk and Ubel 2011:487), either by worried patients or by doctors faced with a litigious environment. More generally, in a medical environment that encourages excessive, often ineffective, and sometimes harmful medical care (Welch 2015), a right not to know may, paradoxically, be a fundamental right of the fully informed patient. Pondering the decision (not) to know before the information is available puts people in a double bind: They have to work out how much they want to know a piece of information before knowing what it conveys (Rosenbaum 2015). Once the information is known, the choice to ignore it may, for psychological as well as institutional reasons, be very difficult.
The normative assessment of instances of deliberate ignorance is even more complex when the decision (not) to seek or use knowledge is taken on behalf of someone else. As an example, consider predictive genetic testing in childhood (Bloch and Hayden 1990), when one person's right (desire) to know clashes with another's right (desire) not to know. For instance, a mother may not want to know who adopted her child, but the adopted child may want to know who her biological mother is.
To conclude, there is no ready-made answer to the question of when deliberate ignorance is beneficial, rational, or ethically appropriate. Each class of instances must be assessed on its own merits. As we will see shortly, several variants of strategic ignorance can be modeled as the rational behavior of a utility-maximizing agent. A rational (Bayesian) agent may even pay money not to see cost-free information, counter to Good's (1967) advice (see also Kadane et al. 2008; Pedersen and Wheeler 2013), and institutional arrangements (e.g., in the courtroom) may enforce deliberate ignorance in the service of impartiality. Of course, there is also a sinister side to deliberate ignorance, such as when it is used to evade responsibility, escape liability, or defend anti-intellectualism.
Finally, let us emphasize that the normative benchmark for the ethics of deliberate ignorance need not be utilitarian or consequentialist. Arguments extolling the desirability of (more) knowledge appear so intuitively persuasive because they invoke a very different normative ideal. Ever since the Enlightenment, knowledge has had not only instrumental but also moral value. Our understanding is that deliberate ignorance is not per se rational or irrational, ethical or unethical. Instead, deliberate ignorance is a cognitive tool whose success—measured in terms of individual or collective welfare—requires renewed analysis of both the actor and the environment (Arkes et al. 2016; Todd et al. 2012). Such an analysis of the ecological rationality of deliberate ignorance may also add a new dimension to the motto of the Enlightenment, Kant's (1784) sapere aude: dare to use your own reason. The struggle for personal freedom and self-determination requires emancipation through knowledge and the courage to use one's own reason. In a world in which knowledge (information) is not unconditionally advantageous, however, using one's own reason can also mean choosing not to know.
Research on the psychology of deliberate ignorance is in its infancy. Our objective in the first part of this article was to demonstrate that it is an endeavor worth pursuing and to offer a taxonomy or initial structure to categorize the dazzling variety of cases of deliberate ignorance. In addition, we sought to complement the is with a discussion of the ought: How ought one to think about individuals’ choosing not to acquire information, even though that information is available? Our treatment is but a first step; many more are necessary.
Since deliberately ignoring information involves choice, the well-developed theory of choice in economics appears to offer an encompassing theoretical framework for deliberate ignorance. Canonical economic models take preferences as given and aim to explain choices by properties of the opportunity structure (see Trimmer et al., this volume). Furthermore, economic agents are assumed to optimize (i.e., to act as if they weigh marginal cost against marginal benefit). Yet if this framework is to be adopted for deliberate ignorance, it needs to specify all expected benefits from (not) acquiring information as well as all expected costs.
What role does information play in an economic framework? According to the classic economics of information perspective, individuals derive utility not from information per se but from its potential material consequences (Stigler 1961). Recent findings, however, have led to a different view: beliefs and information, the timing of information, and even its avoidance can be a source of pleasure and pain (Berns 2006; Grant et al. 1998; Karlsson et al. 2009; Kreps and Porteus 1978). Furthermore, the utility that individuals derive from an outcome may depend on their anticipatory feelings (e.g., anxiety, hope) about it (i.e., anticipatory utility; Eliaz and Spiegler 2006; Loewenstein 1987) or the anticipated emotional responses (e.g., disappointment) to information (e.g., bad news; Fels 2015). This might help explain individual time preferences (e.g., someone may wish to bring forward an unpleasant experience to shorten the period of dread, but delay a pleasant experience to savor the anticipation of it).
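The role of anticipatory utility in such time preferences can be sketched with a toy model (the additive form and the per-period dread or savoring values are assumptions made for illustration, not a model from the cited literature):

```python
def total_utility(event_utility, delay, anticipation_per_period=-1):
    """Utility of an event plus the anticipatory utility accrued while
    waiting for it: negative per-period 'dread' for a bad event,
    positive per-period 'savoring' for a good one."""
    return event_utility + anticipation_per_period * delay

# A painful procedure (-10) is better scheduled now than in 5 periods,
# because every period of waiting adds dread:
now, later = total_utility(-10, 0), total_utility(-10, 5)

# A pleasant event (+10) is worth delaying to savor the anticipation:
soon = total_utility(10, 0, anticipation_per_period=+1)
delayed = total_utility(10, 5, anticipation_per_period=+1)
```

Even this crude additive form reproduces the pattern described above: the sign of the anticipation term flips the preferred timing of the event.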
An economic framework accommodates individual-specific aspects of the decision maker that may shape the choice (not) to know. These include the individual attitude to risk (the prospect of obtaining a piece of information can be seen as equivalent to entering a risky gamble for an anticipated payoff), the individual degree of patience, and the individual anticipation of strategic actions taken by other interested actors. Moreover, it accommodates environment-specific aspects, such as availability of an effective cure (Fels 2015; Hilbe and Schmid, this volume).
Despite these obvious advantages, we do not believe that an economic framework is sufficient to explain and predict deliberate ignorance for the following reasons:
Deviating radically from this approach is the thesis that individuals, unable to implement complex processes, rely instead on heuristics. One reason to posit that at least some types of deliberate ignorance are best understood in terms of heuristics is the observed impact of emotions (Schooler, this volume; Suter et al. 2015, 2016). In affect-rich contexts, one or a few top-ranked reasons, concerns, or motives—rather than an extensive (compensatory) cost-benefit calculus—may determine the choice to know or not to know.
Would the use of a heuristic process rather than expected utility maximization render the choice of deliberate ignorance irrational? Some researchers have conceptualized the heuristics that people use as error prone (e.g., Kahneman 2011). Others hold that even if people could implement a complex utility-maximization calculus, they often prefer to use heuristics to save mental effort, at the price of sacrificing some accuracy (e.g., Payne et al. 1993). Still another view suggests that heuristic processing of reasons, concerns, and motives can result in choices that are adaptive and ecologically rational (Gigerenzer et al. 2011). To evaluate acts of deliberate ignorance as advantageous or disadvantageous, it is necessary to examine how instrumental those acts are in achieving the person's goals, rather than whether they rely on a utility-maximization calculus and its exacting assumptions.
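The contrast between compensatory weighing and one-reason decision making can be sketched as follows. This is a minimal illustration under our own assumptions (the reason labels and the ordering are hypothetical), in the spirit of noncompensatory heuristics such as those in Gigerenzer et al. (2011):

```python
# A noncompensatory "one-reason" heuristic for the choice to know or
# not to know: reasons are checked in order of subjective importance,
# and the first reason that discriminates settles the choice. The
# remaining reasons are never weighed against it.

def one_reason_choice(reasons):
    """reasons: list of (label, verdict) pairs, ordered by importance,
    where verdict is 'know', 'avoid', or None (nondiscriminating)."""
    for label, verdict in reasons:
        if verdict is not None:
            return verdict, label   # decision rests on this single reason
    return 'know', 'default'        # fall back if nothing discriminates

# In an affect-rich context, a single top-ranked concern may dominate,
# even though the lower-ranked reasons jointly point the other way:
choice, basis = one_reason_choice([
    ("no effective treatment exists", 'avoid'),
    ("knowing would aid planning", 'know'),
    ("curiosity", 'know'),
])
print(choice, basis)  # 'avoid', driven by the single top-ranked concern
```

A compensatory cost-benefit calculus over the same three reasons could reach the opposite choice; the heuristic's outcome hinges entirely on which concern tops the ranking, which is one way the observed impact of emotions on deliberate ignorance might be understood.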
As we seek to theorize the phenomenon of deliberate ignorance, it is important to look at potential parallels between deliberate ignorance and forgetting (see Schooler as well as Trimmer et al., this volume). Forgetting is a process through which previously encoded information is discarded, and it is integral to the efficacy of human memory. Forgetting supports decision processes, for instance by boosting the accuracy of inference heuristics (Schooler and Hertwig 2005), and serves key adaptive functions, including emotion regulation through the selective forgetting of negative memories at both encoding and retrieval (see Nørby 2015). Are the adaptive functions of forgetting different from (some of) the functions of deliberate ignorance? We are not aware of an encompassing memory theory that could generate all adaptive functions of memory loss (Nørby 2015), nor are we aware of any encompassing theory of deliberate ignorance that could generate its various functions.
As a theory is constructed, specific hypotheses will need to be generated. These hypotheses, in turn, can be tested using (a) survey data to measure the prevalence of and preferences for deliberate ignorance, (b) experiments to probe the reality of specific types of deliberate ignorance, and (c) field data. All three approaches may enrich the scientific community's knowledge of personality dimensions (e.g., risk and moral attitudes, curiosity, sensitizer vs. repressor coping styles, aspiration levels) and environmental factors (e.g., availability of medical treatment) that predict people's information preferences. For instance, age appears to be a key factor (Hertwig et al., submitted): older people are more inclined to choose not to know. Deliberate ignorance may thus be a mental tool that older people use to prune negativity from their lives (Carstensen 2006; Carstensen et al. 1999).
Work on the psychology of deliberate ignorance is in its infancy, and thus it is premature to derive policy implications (see, however, Teichman et al. and Zamir and Yair, this volume). Some types of deliberate ignorance appear to have immediate prescriptive implications as an impartiality and fairness device (see MacCoun as well as Bierbrauer, this volume). For instance, if decision makers (e.g., jurors, hiring committees) agree that deliberations may be biased by certain information, then insulating themselves from this information is a reasonable course of action. A deliberate veil of ignorance may be a tool worth harnessing systematically across a wide range of institutional selection processes.
In addition, as suggested earlier, the ability to select certain bits of information while deliberately ignoring others may be crucial for maneuvering through our information-laden environment. If so, the building blocks of this competence, and how they could be taught to citizens of all ages, need to be studied. Reverse engineering may help us begin to understand the methods used by those who design information: How do they manage to get people hooked? What strategies are necessary to resist them and maintain the level of agency and autonomy that most people want and need?
The desire not to know is poorly understood and, in our view, not simply an “anomaly in human behavior” (Case et al. 2005:134). It is prevalent, and nuanced psychological theories are required to understand it. The phenomenon of deliberate ignorance also raises important questions. Answering these questions promises a deeper understanding of how people reckon with uncertainty and may, indeed, prefer it at times to certainty. We believe that the study of deliberate ignorance may become a new scientific frontier of great importance. If so, it would represent a promising opportunity for multiple disciplines to work together to examine the cognitive and emotional underpinnings, rationality, and ethics, as well as the sociocultural, institutional, and political implications of deliberate ignorance.
The original article benefited enormously from helpful comments by Gordon Brown, Dagmar Ellerbrock, Werner Güth, Yaakov Kareev, Alexander Koch, Joachim Krueger, Tomás Lejarraga, Georg Nöldeke, Arthur Paul Pedersen, and Jan K. Woike. We are grateful to Susannah Goss and Valerie Chase for editing the manuscript and to Katja Münz for conducting the literature search. This research was supported by Grant HE 2768/7-2 from the German Research Foundation (DFG) to Ralph Hertwig.