There are two ways to be fooled. One is to believe what isn’t true; the other is to refuse to believe what is true.
—Søren Kierkegaard
Aristotle famously opened his treatise on First Philosophy with the claim, “All men by nature desire to know.”1 For Aristotle, this desire is rooted in our very biology, in our delight in our sensory systems and our cognitive faculties. It is a useful desire: knowledge is advantageous. From an evolutionary perspective, we would say that knowledge has survival value. It seems obvious that there is advantage in coupling a capacity for knowledge with a keen desire to learn: understanding means that right responses need not depend on luck. Conversely, ignorance and resistance to knowledge are maladaptive traits; each is a serious liability in general, and in some circumstances, a fatal one.
It is both natural and desirable, therefore, to attempt to enlarge the boundaries of our knowledge. The idea that our ignorance might be intentional, that we might choose to construct our ignorance, seems odd at the least, and at most a perversion of our natural cognitive desire and joy. As I observed earlier, however, boundaries may be natural to the terrain, or they may be artificial; and that is true of boundaries between knowledge and ignorance. And, just as a wall marking the boundary of my property may be erected by me or by my neighbor, so my ignorance may be shielded through my own deliberate construction or through the works of others.
My general term for intentional ignorance is nescience. Different from what we do not yet know, different from what we can never know, nescience designates what we or others have determined we are not to know. I will identify five major forms of nescience, which are distinguished by factors that motivate a decision to barricade the boundary. They are: rational ignorance, strategic ignorance, willful ignorance, secrecy, and forbidden knowledge.2 We will examine each in turn. Finally, we will consider how ignorance may be constructed inadvertently.
Rational ignorance is perhaps the easiest to describe. There are occasions when I make the more-or-less conscious decision that something is not worth knowing—at least for me, at least not now. I browse the shelves in a bookstore, select a thick volume, ponder reading it, and then put it back. You see the long, fine-print, legal statement of policy you must agree to before installing the software and you decide, without reading it, to click “agree.” An eager student, finding it difficult to choose among available courses, first rules out certain subjects. To put the point in the terms of economics (the field in which the concept of rational ignorance originated): there are times when we believe that the investment in learning X would outweigh the benefit of knowing X. In such circumstances, the reasonable decision is to forgo learning, to decide not to know. The ignorance that we choose to retain is called “rational” ignorance.3 The judgment is often comparative: it is better to learn Y than X. The key element is that the individual makes a judgment about learning based on the perceived costs of learning in relation to the perceived benefits of knowing.
This narrowly utilitarian formulation ignores the possibility of learning for its own sake, leaving unaccounted the intrinsic value of knowledge. It appears to commodify knowledge, as though learning is a simple exchange in the marketplace. But the human condition makes epistemic choices inevitable: equipped with different aptitudes, we pursue our interests, purposes, and plans within an unyielding finitude. When we do not feel the press of time, it may seem that we purely choose what to know; but when time weighs on us, we realize that choosing what to know is choosing what not to know by default. A young person might happily look forward to a lifetime of reading in which possibilities seem endless; but demanding responsibilities or the weight of years might cause even an avid reader to triage reading selections. I once calculated that if one read and assimilated a book every day for eighty years—which seems an outer limit—one would have absorbed 29,220 books. Impressive, yet it would represent a small fraction (currently about one-tenth) of the books published in the United States in one year—to say nothing of other countries or all the journals, newspapers, magazines, blogs, reports, and other forms of knowable publications in many languages. Where to begin?
As available knowledge burgeons around us, choices of rational ignorance loom larger for individuals and for our society. In a “knowledge-based” society, both the pressure to know and the difficulty of deciding what is worth knowing increase. Though we may soften this dilemma by thinking we can always “pick up” the knowledge at a later point, in most situations, we are not deciding merely what we will learn now or sometime later or whenever we have time. We are deciding what we will never learn. We have chosen rational ignorance.
Knowing how to make such decisions wisely is, therefore, an increasingly valuable personal skill.4 It requires judgment, because such choices are not clear-cut. Often one cannot really understand what one is rejecting, what it would mean to know it, or what will be the liabilities of one’s ignorance of it. Perhaps the book I chose not to read would have been transformative. Perhaps you will one day regret not reading the user agreement for your software. Just as with the choices of other activities, these choices are never made in isolation; they are made in comparison with other uses of our time, with other potential knowledge.
Usually, the choice is autonomous and made by individuals only for themselves: I may think it would be interesting to know the Latin names of all the plants in my garden, but I soon decide it is not worth the time and effort to learn the information. I am invited to attend a workshop to learn all the features of my office voicemail system—and I quickly think, “I have better things to do.” These are personal choices. Sometimes, however, the decision for rational ignorance is made by one party for another (individual or group). Consider this example: school district officials decide their students no longer need to learn cursive writing. Again, judgment is required, and unintended consequences may arise from rational ignorance: perhaps students who never learn cursive writing will be unable to read certain historical documents without remedial training.5
The term “rational” is seldom purely descriptive; it usually carries normative force. That ambiguity is present in “rational ignorance” as well. The descriptive sense includes the possibility that we may mistakenly or imprudently choose ignorance; the normative sense implies that we have a plausible justification. We need both senses of the concept. Though we may reject the narrow cost–benefit interpretations of knowledge, it is inescapable that we must choose to bypass some opportunities to learn. Such choices should be made wisely. Simple found ignorance, when learning is recognized as a live option, becomes intentional, rational ignorance.
Strategic ignorance is also calculated, but in this case the intent is to use ignorance as an advantage. (It would be more accurate to call it tactical ignorance, since it is usually a tactic to advance a larger objective or purpose—but the term is now embedded in the literature.) There are situations in which one’s not-knowing something is advantageous, making ignorance an asset rather than a liability. For example, an official who remains ignorant of a questionable internal office matter has “deniability” regarding it and may prefer that state to being implicated. The criminal attorney who tells her client, “I don’t want to know whether you did it or not,” is trying to preserve the greatest latitude for the defense (and perhaps also to avoid subornation of perjury). These are strategic uses for a barrier to knowledge.
What drives the reaction in such situations is avoidance of the responsibility of knowing; it is the preservation of a future excuse, like pocketing a “Get Out of Jail Free” card. In chapter 4, I questioned whether it is virtuous to attempt to protect one’s innocence; the same moral question applies regarding the tactic of seeking to avoid situations of moral liability. So, one might wonder whether the deliberate cultivation of exculpability is itself morally culpable. But in many systems of law and policy it is not illegal or violative; professionally, it may be savvy and even wise. Strategic ignorance is not a trait of character or an epistemic phobia; it is episodic and anticipatory, directed toward problems of knowing something in a particular context. Nonetheless, even when looking at the case level, typical cases do seem to involve “gaming the system,” in that the individual who chooses ignorance for strategic reasons often has more than an inkling of the content of the information being refused.6
There are, however, other kinds of cases of strategic ignorance that are morally comfortable. Suppose I am up for a promotion and my colleagues must, in my absence, consider my qualifications; afterward, one gossipy friend offers to fill me in. Realizing that in any event these people will remain my coworkers, I might prefer not to know who said what, believing it might affect our interactions; and so I might rebuff my friend’s offer of juicy revelations in order to protect collegial relations with strategic ignorance. Or, to take a less convoluted situation: I do not want to read any previews of a mystery thriller, even the teaser on its back cover; I prefer to begin in total ignorance of the plot, guarding my lack of foreknowledge for maximal impact later. Both these examples treat ignorance as an asset, a tactical advantage to achieve or protect a larger goal. The motives here are not avoiding responsibility, but preserving a comfortable collegiality and preventing ruinous spoilers, respectively.
In the iconic image, Justice is blindfolded, not blind. It suggests that her ignorance is not a liability, but a strategy. From ancient times, strategic ignorance has been used to promote fairness and a just outcome. The judge and jury are screened from the knowledge of particulars that may otherwise bias judgment. John Rawls’s “veil of ignorance” (chapter 4) was employed strategically for precisely this purpose in selecting social principles that are just. We might set this notion against its equally venerable opposite: does fairness require strategic ignorance or full knowledge? The biblical God of judgment is omniscient: “Even before there is a word on my tongue, Behold, O Lord, you know it all.”7 Various philosophers, seeking a nontheistic morality, have developed such God-substitutes as an “Impartial Spectator” or “Ideal Observer,” who possess all relevant knowledge, a condition that is deemed necessary for objective, wise judgments.8
These opposing conceptions may reflect the gap between human realities and ideal projections. As the Psalmist says, “Such knowledge is too wonderful for me; it is too high, I cannot attain to it.”9 Since no human can be omniscient, or even be assured of possessing all knowledge relevant to a case, a strategy is required: the removal of prejudice and the prevention of bias are essential—and assuring the ignorance of particulars is an effective tactic.10 This is not a straightforward matter: what information should be kept from a jury to assure a fair trial is hotly contested; in order to impose punishment wisely, judges may seek to know particulars through a presentence investigation; and many investigative panels want to determine for themselves the relevance of information. Errors and injustice can easily result from the withholding of salient facts.
Maintaining ignorance is, then, a strategy for preserving deniability and innocence, for keeping options open, for avoiding responsibility, but also for assuring fairness and just decisions. What unites all these cases and distinguishes them from rational ignorance and other forms is the tactical use of ignorance to gain a benefit, the decision to act with what we might call “ignorance aforethought.”
Willful ignorance is a more complex matter. Although all five types of nescience are intentional and therefore “willful” in one sense, this variety typically stresses the role of the will in maintaining one’s ignorance of a specific subject, rather than calculative reason. This is not a matter of laziness or distaste for learning in general. A person is commonly called willfully ignorant about a matter when he persistently ignores the topic despite its likely salience and even resists learning about it or assimilating facts that bear on it. The concept is variable along two dimensions: (1) the degree of awareness or decisiveness one has about one’s will to remain ignorant, and (2) the vigor of one’s negative response to the subject and learning about it, ranging from complacent avoidance to rejection and hostility. As these factors increase, maintaining ignorance requires a stronger will, takes greater mental effort or psychological energy, and generates acrimony. Cases in which both decisiveness and resistance are strong usually involve fear of the truth. Learning the truth can be difficult; embracing it can be even more difficult.11 Better not to know.
A tidy example of willful ignorance is the wife who ignores indications and rumors suggesting her spouse’s infidelity. But there are many significantly nuanced possibilities in such a scenario. The wife, for a variety of reasons, may have chosen in full self-awareness to ignore her spouse’s waywardness. Or, the wife may not be aware of either the supposed indications or her resultant attitude. Perhaps they are repressed or subconscious, though still operative. Furthermore, the wife may be well aware of the possible infidelity but choose to avoid investigating the situation; or she may be vigorously resistant to learning anything about her spouse’s infidelities and hostile to would-be informants because knowing the truth might “ruin everything.” In all these psychological nuances, the wife displays a willful ignorance.
These subtleties point to another aspect of the complexity of the concept: in many cases, willful ignorance involves self-deception. The wife may be deceiving herself—about what she knows, about what she wills, or about how she is responding. Alone among types of ignorance, willful ignorance connects being ignorant of something with ignoring that thing. As I noted earlier, ignoring X seems paradoxical because it implies that one is sufficiently aware of X to engage in ignoring it. Ignoring involves a refusal of attention.12 Self-deception notoriously involves, prima facie, a bifurcation of the self: there is the self who is aware (the self who deceives), and the self who is unaware (the self who is deceived). We cannot stop to examine these issues—they have generated a fascinating literature—but we can observe that self-deception and denial also require mental energy. Even in the central cases of willful ignorance in which the lack of knowledge is quite genuine, if the will is exercised at all, it is directed toward a specific subject or epistemic object—and that requires at least a minimum level of awareness, else why would the will not-to-know be directed toward the topic that it is?
Not only individuals, but large groups as well can be willfully ignorant, and their resistance often reflects bias, prejudice, privilege, or ideological commitment. Racism and xenophobia, for example, may be characterized and sustained by willful ignorance about members of the targeted group. Philosopher Charles Mills has examined the source of racial ignorance, arguing, “White ignorance has been able to flourish all these years because a white epistemology of ignorance has safeguarded it against the dangers of an illuminating blackness or redness, protecting those who for ‘racial’ reasons have needed not to know.”13 Similarly, fundamentalists in several major religions resist acquiring accurate portrayals of gays, lesbians, and transsexuals. A self-reliant individual might persist in disparaging homeless people with crude stereotypes, but ignore articles that give an informed picture. Students who see themselves as mathematically incompetent may resist quantitative learning that is in fact well within their reach. Technophobes may refuse even to try to learn how to operate common devices. Willful ignorance is, of course, a serious problem for teachers, who encounter it in classrooms as active resistance to learning. Philosopher Jennifer Logue has written, “Re-evaluating ignorance as neither a simple nor innocent lack of knowledge but as an active force of both psychic and social consequence might help us to engage the resistance with which we are often met when dealing with ‘difficult’ subjects like racism, sexism, or heterosexism in educational settings.”14
The willfully ignorant may prefer to repeat false knowledge, even to wear their ignorance like a badge, rather than to entertain unsettling truths. They may resist information that contradicts their prejudices, frantically discredit evidence, and reject attempts to inform—even if, at some level, they may suspect they are wrong. They are, we say, “closed-minded.”
Fear seems to me the deeper motivator than bias or prejudice; it is active even when the other factors are missing. Consider a mother who is so upset about her son’s military service that she refuses to discuss it while he remains on active duty, rejecting all information about his assignments and experiences. Except for wanting to know that he is alive, she remains willfully ignorant. This is not bias or prejudice at work, but it is fear. Or, imagine a father with a headstrong, teenage daughter; regarding her activities, her friends, and even her whereabouts, he is willfully ignorant. Again, this is not a matter of prejudice or false knowledge, but it is still not benign: it may be motivated by fear as well, though it is surely an abdication of parental responsibility. “Putting one’s head in the sand” is a cliché for the attitude of willful ignorance.
Willful ignorance has become quite topical among writers because it appears to be in fashion in society. Perhaps the startling accessibility of information makes ignorance of important matters seem more likely willful: How could he not know? But there is little doubt that we are witnessing a wave of reprehensible, willful ignorance among political leaders as well as citizens. All the marks are there: fervent commitment to an ideology, the mantric rehearsal of false knowledge and slogans, resistance to evidence that challenges beliefs, absence of open-minded curiosity, and outright hostility to those who offer different claims, often tending to personal abuse. There are alarming signs that a more radical epistemology is developing in which data, facts, knowledge, and truth itself are discounted in favor of ardent assertion, conformity to a comfortable ideology, and the right to believe whatever one chooses. The masterful and blunt Harry Frankfurt called the effluvium of this rhetorical indifference to truth “bullshit.”15 Those who take the view that repeatedly broadcasting any strong claim is as good as uttering the truth are even more difficult to reach than the willfully ignorant or the closed-minded, who still at least espouse a claim to truth. The cognitive irritation of genuine contradictions occurs only in those who have a regard for truth. The others openly do not worry about contradiction; they are free, for example, to dismiss scientific knowledge while embracing the technology on which it is based.
The concept of willful ignorance seems to be loaded negatively in not being motivated by “rational” or strategic considerations; in its complacency or hostility to knowledge; and in the analysis of its sources in bias, prejudice, and fear. And if one judges from its use in contemporary discourse and in scholarly accounts, it is a cognitive dereliction with ethical import, a personal and public epistemic vice. I will discuss some of the ethical implications in the next two chapters, but first we should consider whether there are any examples of justifiable willful ignorance. I think the answer is yes. One sort of case would be maintaining willful ignorance about the particulars of a traumatic event in order to avoid deepening the trauma and to permit healing. Suppose a terrible tragedy occurs that costs a loved one her life—and the event was filmed. Willful ignorance—refusing to watch the film, to hear the details of damage to her body, to follow the specifics of the investigation into precise causes of the event—might be justifiable. A grief therapist might recommend it as therapeutic, even as necessary, to cope with such tragedy.
I want to be cautious on this point, though. Recommending the adoption of willful ignorance creates a very slippery slope. The current debate over “triggers” for students—advanced notice of potentially disturbing content, themes, or experiences in their assignments—began with defensible cases: someone who had been raped might need the protection of a warning that a particularly vivid and horrendous rape scene is included in the required reading or viewing, and thus be excused from that assignment or given an alternative; or moral vegetarians and those with queasy stomachs might deserve to be forewarned about the slaughterhouse scenes included in a film for a class on animal rights. But if a system of rights to such triggers is established, the slope becomes slippery because: (1) the student must judge in ignorance the impact of course content; (2) the instructor must judge in ignorance the potential for students’ latent traumas and likelihood of distress; and (3) the range of trauma that is relevant is undefinable (does once being bitten by a dog count?). Moreover, the benefits of willful ignorance tend to be overestimated by those who exhibit it. The grief therapist in the above example might well caution that, while dwelling on the details of the accident, repeatedly viewing the film, and so on, would not be healthy, at some point, the process of acceptance and healing may involve letting go of the psychological barriers to discussion of the tragic event. Even when willful ignorance is justifiable as therapeutic, its value is often temporary.
Cultures recognize spheres of privacy; many elevate it to the status of a right. We need a safe space in which our thoughts, plans, and actions may be formed and reviewed without the immediate scrutiny and judgment of others. Privacy may be claimed by individuals, families, corporations, and other entities. It is required for the development of intimacy, and so sought by couples in love. I define intimacy as mutual and replete self-disclosure over time. Such an unfolding of the self would not be possible if it were fully public as it proceeded. Indeed, a sphere of privacy may be required for the formation of identity.16
A domain of privacy erects an epistemic boundary that others should not seek to cross. To assert a sphere of privacy is thus to leave those outside the sphere in ignorance. Strictly speaking, privacy is not always nescience because it need not be intended or pursued, but it becomes such when it is affirmed or actively safeguarded. The same is true for confidentiality, which derives from privacy. Private matters, when shared with certain professionals or other confidants, are given a confidentiality that is assured, affirmed, and protected similarly to privacy. Those outside the sphere “have no business” seeking to know private or confidential information and are forbidden from taking direct actions to reveal it; they should rest content in their proper ignorance. Normally, only the subject to whom the protected information pertains has the right to disclose it. Should someone else within the sphere divulge such matters, it would be a violation, a betrayal of epistemic trust.
Of course, there may be issues of public interest that justify such a violation. Whistle-blowing and various forms of reporting to authorities are examples, and many such acts of disclosure are not optional: they are legally if not morally required. Courts may subpoena private or confidential documents and compel testimony (though professional confidentiality is recognized for specific relationships). Thus the boundary crossing may be a choice for the individual whose privacy is at stake, but not for others with whom the information was shared. Nonetheless, under legitimate authority, the normally forbidden disclosure may become obligatory.
Secrecy differs from privacy. It involves purposeful acts to keep others from knowing, and it imposes ignorance without the presumption that what is kept secret is justifiably private and outside public interest. One could conduct secret treaty negotiations, for instance, or secretly stash money from a bank robbery. While privacy has a quite general target of ignorance (namely, all other individuals), secrecy tends to have a more specific target: a boy might keep something secret from his mother, though he tells his friends; an employee may keep a health problem secret from her employer, though it is known to family and friends. Of course, the target may be as broad as the general public, as in the case of secret treaty negotiations. One can share a secret, but the epistemic bonds that are created differ subtly from those created within a sphere of privacy, intimacy, or confidentiality; a primary difference is that those kept ignorant in secrecy may, in fact, have legitimate interest in knowing the secret.17 Disagreement over the right to know tinges many arguments between teenagers and their parents, and between politicians and journalists: the debates often turn on whether the matter is one of privacy, confidentiality, or secrecy.
When an individual makes unusually strenuous efforts to protect his privacy, especially about rather insignificant aspects of his life, others are apt to construe it as a form of secrecy. Someone who erects high walls around his property, who rejects normal public interactions, who deflects all talk of home life and background, is bound to arouse the sense of secrecy in others.
The simplest technique of secrecy is withholding information: an employee simply keeps to herself the information that she has interest in leaving the firm; she talks openly of her attendance at the convention, but fails to mention the contact she made there. But because withholding information by itself is not foolproof, she feels the need to take specific steps to assure secrecy; she may engage in concealment—hiding the letter offering her a job elsewhere or shredding evidence of her recent interviews. Methods and means of concealment may become quite elaborate (think of stealth bombers), but the effect is simply to hide information from others. Another technique, even more implicative of cunning, is deception. Deception can be practiced verbally (by lying), quantitatively (through statistical or graphic misrepresentation), visually (through image manipulation), and in still other ways. Not all secrecy or concealment involves deception, but all deception entails a secret.
The techniques of secrecy exploit the power differential between those who know and those who are ignorant—in this case, those who are intentionally denied knowledge. Repressive governments exploit this power by establishing expansive zones of secrecy that employ the protective measures of withholding, concealing, and deceiving. About a decade ago, I visited the University of Tartu, in Tartu, Estonia. I was told that during the forty years in which Estonia was part of the USSR, no maps of the city were published, and any stray foreign guest was forbidden from staying overnight in the city. The reason for these measures was the presence of a strategic Soviet airbase nearby. The University of Tartu has a quite distinguished history as one of the oldest universities of Northern Europe, and holds an amazingly large, rich, and rare library collection with strong holdings in the seventeenth through the nineteenth centuries. (Its librarians twice heroically preserved the collection against Kremlin edicts.) But since an entire generation of outsiders never had free access to the library, few outside scholars are now aware of the startling treasures the library contains: rare incunabula, first editions from the Age of Discovery, early scientific journals, Immanuel Kant’s death mask, and even a handwritten dinner invitation from Thomas Jefferson. It is but one remarkable case of a constructed boundary, a zone of secrecy that kept the rest of the world in ignorance.18
Even whole industries may suppress research, create doubt or uncertainty, and impose public ignorance. For decades, the tobacco industry notoriously bolstered such a zone of public ignorance regarding the effects of smoking. When that barrier began to crack, the industry combined the suppression of knowledge with a campaign to spread misinformation. Confusion about the truth, it was hoped, would prevent knowledge of the hidden.19 The soft-drink industry now stands accused of the same tactics.20
Epistemic communities operate on presumptions of trust, which reflect the fundamental assumption of human communication: that what is said is truthful, or at least sincerely believed. Trust is gradually eroded in environments that are rife with secrecy. Wherever ignorance is imposed, freedom is effectively diminished; an uninformed or misinformed agent cannot think or act in full awareness. Spreading bullshit or misinformation, inciting doubt, not only disrespects and misleads; it induces cynicism in the public. The continual poisoning of the well of public knowledge—in political discourse, Internet postings, advertising—is surely a factor in our contemporary culture of ignorance.
This is not to condemn all secrets: a benign secrecy is needed for happy surprises, for the play of various games, and for protection of things of value. But I reassert the advice of Sissela Bok that we should be alert to the hazards of relying on secrecy, wary of employing it for paternalistic motives, and should seek to expand transparency in our words, deeds, and policies.21
Forbidden knowledge is our final and most dramatic form of nescience. It is the construction of a barrier at the boundary of ignorance, sealing a zone marked “Verboten!” To cross this boundary would be a sin, a violation, a danger, a shameful or hubristic knowing. Paradigmatic is the Lord God of Genesis drawing such a boundary in the Garden—actually twice, counting the tree of life, where the warning signal was an angel with a flaming sword. But the creation of forbidden zones is widespread in all cultures and recurrent in history. Taboos, censorship, systematic suppression of research, the Vatican’s Index Librorum Prohibitorum—all establish forms of forbidden knowledge, or (which is the same thing) required ignorance.
The concept of the forbidden implies an authority or power that draws, sanctions, and probably enforces the boundary. Religious and governmental authorities are certainly the most common sources of the commands, edicts, and laws that set such boundaries. Their pronouncements reverberate through the culture, and educators may be enjoined to assure that prohibited matters are absent from the curriculum. The justification—in cases where one is offered—is that what is forbidden is disgusting, defiling, dangerous, or immediately harmful.
Sexuality is an obvious and important example in Western culture. The view that sex is shameful resulted in centuries of imposed ignorance. Knowledge of sexual response, even of sexual anatomy and physiology—especially female sexuality—was long forbidden. Medical texts that discussed masturbation or described female genitalia, for example, were shockingly, often hilariously, and sadly in error until well into the twentieth century.22 Since homosexuality was condemned as a perversion, even studying it was forbidden and thus risky for the few who dared; references to it needed to be oblique and euphemistic. A topic like pederasty has been even more perilous. In 1873, John Addington Symonds wrote his courageous, pioneering study of homosexuality and pederasty in the Classical era under the delicate title On a Problem in Greek Ethics. He is at pains to reinforce respectability in his subtitle: Being an Inquiry into the Phenomenon of Sexual Inversion, Addressed Especially to Medical Psychologists and Jurists.23 He wants to remove any whiff of prurience. Moreover, he waited a decade before printing only ten copies that he circulated privately; it was published without attribution three years later. Similarly courageous, if mincing, treatment was required for studies of prostitution and pornography.
Many scientists and scholars who have attempted to study sexuality and other forbidden topics have faced the problem of negotiating safe passage across the border into the forbidden. Classic works in the history of science display their authors' tactics: gratitude for and dedication to a prominent patron; pious profession of religious faith and commendation of the work soli Deo gloria ("to the glory of God alone"); the use of fig leaves and euphemisms; and elaborate efforts to evince an exaggerated professionalism. When even these techniques seemed risky, authors might use pseudonyms or esoteric writing, or publish secretly (using secrecy defensively against the authorities).24 Researchers today may still require special scholarly protections to study certain topics, such as sexual response, sex workers, child pornography, or psychedelic drugs.
All such practices construct and regulate ignorance. These forms of forbidden knowledge are established with a variety of motives that range from paternalistically benevolent to viciously self-aggrandizing. Censors, for example, may be motivated by heartfelt concern for those who are denied access, or by manipulative self-protection. Censorship shows the asymmetry of knowledge (the censor presumes to know better than the public, and may know the content determined to be unfit for others) and of power (assuming the secrecy or prohibition is effective). If one accepts the authority of the agent that forbids knowledge, the judgment may become internalized as one's own. Sometimes the result is that a zone of ignorance is ignored for years, even centuries; but, at other times, the very act of designating it "forbidden" piques curiosity and spurs stealthy forays across the border to snatch knowledge or experience of what has been disallowed. Banning a book is notoriously good for sales.
There are cases of forbidden knowledge in which it is difficult to pinpoint the authority that imposed the barrier. A forbidden zone may simply evolve as part of the ethos of a culture, enforced only by social habits, as though by some implicit agreement—more neglected than explicitly forbidden. In the West, for example, knowledge of alternative medical practices, research in certain lines of technological innovation, recipes for the meat of many nondomesticated animals—these are not formally forbidden, but they have been discouraged and neglected through social habit and corporate practice.
Consider a benign example: most people in the United States know little or nothing about the cherimoya. Also known as the “custard apple” or “the ice cream fruit,” it is the fruit of the large shrub or tree, Annona cherimola. Mark Twain supposedly expressed the view of many authorities when he described the cherimoya as “the most delicious fruit known to men.”25 Though this fruit, which has long flourished in the Andes and Central America, is now widely grown around the world—even in Southern California—widespread ignorance continues. It remains an unknown; were local grocers to stock it, few consumers would know how to select one or how to serve it. The cherimoya awaits the sort of corporate welcome and marketing that introduced the pineapple, banana, and other “exotic fruits” to the populace of the continental United States.
There are also cases, much rarer, in which a group declares a zone of forbidden knowledge with application to itself; these are prohibitions against inquiry. We might call these cases of contractual ignorance. The sort of case I have in mind has occurred in biological research: at the Asilomar Conference of February 1975, an influential agreement was reached by biological scientists and others that imposed restrictions on recombinant DNA research. The example is noteworthy because it is a rare case of the community of scientists declaring for themselves a no-inquiry zone and establishing protocols for acceptable research. The danger they foresaw was the possibility of creating and letting loose—whether by accident or malevolence—a virus or other organism that would be lethal and uncontrollable.26 The prospect of genetic engineering raises similar concerns with the public and even within the research community. Recently, the development and distribution of CRISPR—an inexpensive, convenient method for manipulating the germline of any organism, including humans—has raised calls for "another Asilomar."27
The same concern has been raised regarding developments in artificial intelligence and virtual reality technologies—though no Asilomar-type, clear-cut, influential agreements have been reached. In 2000, Bill Joy, an inventor and cofounder of Sun Microsystems who has been called "the Edison of the Internet," published an article in Wired magazine titled "Why the Future Doesn't Need Us,"28 in which he expressed profound reservations about the future implications of our technological developments, especially in robotics, artificial intelligence, and the dramatic extension of the human lifespan. He even quoted Thoreau, saying that we will be "rich in proportion to the number of things which we can afford to let alone."29 He concluded that "we must now choose between the pursuit of unrestricted and undirected growth through science and technology and the clear accompanying dangers."30 Thus far, his call for a no-inquiry zone, a consensual restraint on technological development, has been unsuccessful—although no less a scientist than Stephen Hawking, among others, has warned of the dangers to the human race posed by increasingly intelligent androids.
When unrestrained, knowledge naturally tends to application. The temptation to create—even just to confirm one’s knowledge—may render that knowledge dangerous. As with the recombinant DNA concerns at Asilomar, the condition that triggers a call for a contractual ignorance in the scientific community is not only the worry of deleterious effects; it is the concern that a self-replicating cycle of deleterious effects beyond human control can be created. No wonder everyone cites the myth of the forbidden and the tragedy of revelation: Pandora’s Box.
Forbiddance can be used to maintain secrecy, not just to protect from sin and danger; but not all forbidden knowledge involves secrecy. The Asilomar declaration was quite public, and it forbade certain forms of research. (Anyone who would defy the agreement and conduct forbidden research, however, would likely keep it secret.) In cases of secrecy, someone (or some group) possesses the knowledge already; but sometimes the forbidden has contents not even known by those who forbid it.
Nescience requires the intentional construction of ignorance, but it is also possible to construct ignorance unintentionally, or with dim awareness, or with awareness only after the fact. That can happen when the ignorance is produced as an unintended by-product of an intentional activity. Today, the pursuit of our own preferences can become such a process, and it is a source of considerable public ignorance.
If you live in a fortunate environment that is open and rich with options, it is natural to express your preferences across a wide range of choices, depending on your resources. You can buy the products you like, hear your favorite music, pursue the activities you enjoy, associate with groups you like, and watch your preferred programs. Your habits of choice form a “lifestyle.” This pattern of human social action is nothing new. What has changed, however, are two factors: (1) the range of live options has increased enormously—there are many more products, programs, and other possibilities for consumption, investment, and enjoyment; and (2) we have developed technology that identifies and delivers preference-based options—determining and presenting whatever fits your pattern of choices and screening out options that do not. Together, these developments allow us to live a “preferred” life, experiencing only what we prefer, ignoring all else. Without direct intention, we erect epistemic barriers. We become prisoners in a self-reinforcing cave of ignorance—an ignorance we comfortably share with like-minded peers.
Today, online retailers promote “suggested” items based on our past purchases. We conveniently use preset buttons and “Favorites” bars to find our “personal places.” Individuals may habituate themselves to news outlets that reflect their personal viewpoint on the world—Fox or MSNBC, the New York Times or the Wall Street Journal. When we add these technological assists to physical controls like gated communities, exclusive clubs, professional associations, and so on, we may come to have thorough control over our epistemic input. We screen out and remain ignorant of whatever we do not learn through those selected experiences. We banish the Other.
Preference-driven technology creates a cognitive comfort zone, but from an epistemic perspective it is a system structured to serve confirmation bias, our tendency to seek and privilege information that confirms our preconceptions, and to skew interpretations toward those preconceptions. Our natural default pattern is to hunt for confirmation of our beliefs, rather than to test them and to seek the truth. In chapter 5, I mentioned that increased specialization creates epistemic communities that cannot easily communicate and contributes to public ignorance. Preference-based information also creates affinity communities that structure public ignorance. It fuels the growth of ideologies and their "true believers," who can interpret and judge the world only through the lens of unassailable beliefs. As this process grows, confirmation bias becomes more extreme: any contrary evidence is considered threatening and denied out of hand. The reinforcement provided by nodding, like-minded believers causes beliefs to become more extreme, more firmly held, and "hardened." For the most closed-minded and cynical, it ultimately does not matter if the truth is contrary to one's cherished ideology: one claims the right to believe whatever one likes, in much the same way that one can stoutly affirm, "Chocolate ice cream is best."
This self-imprisoning effect may happen without intent or awareness, although self-deception seems to be involved at some stage. Indisputably, the practice of restricting one’s experience to the reinforcement of prior beliefs protects and increases ignorance.
There are many significant practical problems that result from this process, but it is the ethical dimension that is my concern here. The blinkered true believers, the ideologues, are no longer persuadable, and persuadability—openness to rethinking beliefs in light of evidence and argument—is a central norm of epistemic communities, and (as we shall see in the next chapter) a key epistemic virtue of individuals. To lose persuadability is, to use Lee McIntyre's phrase, to "disrespect truth."31