Human reason has this peculiar fate that in one species of its knowledge it is burdened by questions which, as prescribed by the very nature of reason itself, it is not able to ignore, but which, as transcending all its powers, it is also not able to answer.
—Immanuel Kant
Immanuel Kant distinguished between a boundary (Grenze) and a limit (Schranke): “Bounds (in extended beings) always presuppose a space existing outside a certain definite place and inclosing it; limits do not require this, but are mere negations which affect a quantity so far as it is not absolutely complete.”1 It is his distinction I have in mind in thinking of ignorance as the limit of knowledge. A boundary encloses and incorporates a place; it implies that which it excludes, that which lies outside, on the other side of the border. We can map our ignorance when we know the location of the boundary, which also means we mark what is beyond the border. But when something has a limit, there is no intimation of what lies beyond except as a negativity. There is only a sense of something having run its course, finish and exhaustion, the end, whether in completion or incompleteness.
There is such a limit to what we can know. In that respect, ignorance is the negativity of the knowable. What is knowable in practice and what is knowable in principle: both have their limits. These limits apply to us as individuals, both at any given moment and over any interval, including our lifetime; they apply to all humankind, at any historical moment and in our span as a species. Knowing about limits to our knowledge, to the extent we can, is a form of metaknowledge. It involves knowing, in a general sense, what sorts of things are knowable and what sorts are unknowable; and it aspires to know when everything is known regarding a particular domain.
Completion is one way to reach a limit. It is a form of perfection. The collector strives for completion of her collection, the full set that represents its elusive limit. Omniscience would be the outermost limit of the knowable: everything would be known that could be known. Omniscience is, of course, one of the traditional perfections of God: the comprehension of the totality of truth.
There was a time when scientific minds spoke of the ideal of a “completed science,” a science that would explain all phenomena. One interpretation of this ideal was cartographic: scientific investigation maps reality, and that map would be complete when it mapped everything—presumably including the map itself. But, of course, an ideal atlas of this sort is a myth, an impossibility. A map that included everything exactly would be isomorphic with reality, a duplicate of reality itself. A more plausible interpretation is explanatory completeness, yet that too would require more than we can deliver: knowledge of all the laws of nature, all the entities, individuals, and states of the world. Nonetheless, if we could achieve this ideal, we would have reached the limits of knowledge—we would have explained all that is explainable.
A second way to reach a limit is simply to come to an end of resources or options, to finish what is available, to exhaust capacity. This has no implication of completeness or perfection; indeed, it suggests incompleteness. It implies an actualization constrained to a subset, not a full set. For instance, there is a maximum number of miles I can drive my car on a full tank of gas; I have reached the limit when the tank is empty. The genealogical reconstruction of my family tree has a limit: it is the earliest relevant surviving document. (Note that I haven’t driven all the miles possible or identified all my ancestors.) In chapter 6, I proposed an estimated limit to the maximum number of books one could read in a lifetime. (If my goal were to read all of Trollope’s novels, however, I could complete the task—and reach a limit in the first sense.)
Both these senses are applicable to knowledge. My concern here will not be to list the specifics of our limits to the knowable—an impossible task that would only exemplify my point. It is rather to identify the types of limits and their sources, to identify the ways in which the knowledge acquirable by individuals, and even by the grandest of epistemic communities, is forever limited and incomplete.
When we speak of knowledge, it seems a possession, an object that persists (though an abstract entity). When we think of knowing, however, it seems to be a private mental state. It is a gerund, more than a participle: if I’m asked, “What are you doing?” it would be odd to reply, “I’m knowing X.” Knowing is obscured from our inspection, so we detect it through other cognitive processes, such as recalling, identifying, expressing, applying, and so on. Knowledge presupposes knowing, and knowing is a state that occurs in time. The temporality of our knowing limits what we can know. There is a time before and after we live, and a time before and after we know. Whenever we live, whenever we know, much of the past and future are unknown and unknowable to us.
For thousands of years, humans have encoded their experience and recorded it in durable objects. These texts, images, recordings, and objects preserve past experience for transmission to the future. Over time, we developed institutions that collect, store, catalog, and share this precious cultural legacy, from physical libraries and museums to their digitized analogues.
We have not, unfortunately, been able to preserve all aspects of experience equally well. Language, and especially written language, permitted the encoding of all sorts of experience; but any language is limited by its syntax and its vocabulary. “The limits of my language mean the limits of my world,” said Ludwig Wittgenstein.2 And rendering experience into language involves transformation: we can develop a rich vocabulary to designate smells, for example, but rendering them in language does not preserve the smells.3 Similarly, forms of notation were developed to record salient aspects of music, but notation has no sound; it took many centuries before we could capture sounds. Some smells, sounds, and sights are gone forever. And motion is only suggested by static drawings. Preserved “moving pictures” and recorded musical performances have been available to us only for little more than a century. The sense of touch still lacks a technology of preservation; and holding on to tastes requires reassembling from preserved recipes and ingredients.
Since I live when I do, I fortunately can know many things about the past; my knowing the past includes not only knowing that but also cases of knowing what it is like. I can see images of my grandfather from before I was born; I can enjoy musical performances recorded decades before I lived. But there are limits. Though I may know important facts about my great-great-grandfather, I will never see his face. None of us will ever hear Chopin or Mozart or Bach at the keyboard; never inhale the “Great Stink” of London or the fragrance of extinct flowers; never watch the debut of Euripides’s plays or see Nijinsky dance. Barring time travel, the unrecorded past will remain but a dim conjecture.
History must be reconstructed from ruins because what survives is often incomplete. We must interpret Aristotle’s thought from the one-third or so of his works that have, by chance, survived. Unrecorded facts, records that have perished, facts kept secret—all lie beyond the reach of our knowing. The destruction of nearly all the 1890 United States Census records by fire is an unfortunate and permanent loss of information. Luck is a determinant of what can be known about the past (despite the strictures of epistemology, which deny that genuine knowledge can derive from luck). Our knowledge of the past and therefore our ignorance of it—which is even greater than we might realize—are matters of luck. Knowable facts about the past, not just sensations, are therefore limited. We will never know, for example, the identity of the last female gladiator to die in Rome’s Colosseum.
Sometimes human actions are responsible for the loss. As I write these words, the self-declared Islamic State (ISIS) is determinedly destroying archaeological treasures, tombstones, and local historical records. We may have recorded images, but the objects are destroyed. The Nazis notoriously destroyed many works by artists they considered “decadent.” And, of course, beyond such perishing, there is all of “prehuman history,” a time from which we can have no human artifacts, a time we can imagine only from grand inferences and extrapolations based mostly on geological and paleontological studies.
But hold on! Doesn’t it seem brash to declare anything permanently unknowable? Who can say that new evidence won’t appear? Think of the recent discovery of Richard III’s remains and how much was learned from what was previously thought to be lost forever. Recall the impact of the discovery of one of Aristotle’s lost works, The Constitution of the Athenians, on papyri in the late 1800s. How can one confidently assert that any facts are lost forever?
There is certainly some truth in this point. Innovations in technology can open up new lines of inquiry and confirmation—DNA analysis is a prime example. We are, of course, becoming more sophisticated in our methods of detection, recovery, reconstruction, and preservation, and also in our techniques of inference from disparate data and simulation. But we cannot recover all the facts of the past, create records that were never made, or capture works that were obliterated. An enormous number of facts about the past, including many of great significance, will surely lie beyond our reach forever.
The epistemic loss produced by the flow of time derives from ontological loss: knowledge has limits because worlds disappear. Some contemporary physicists propose that our universe is a hologram, in which no information is lost. This is a vision that sees the unity of the universe in the prehension of events, the past and future embedded in each other. But even if the conservation of information is true, information is not knowledge.
All loss, especially loss without memory, can be terrifying. For the philosopher Alfred North Whitehead, loss is the ultimate evil, beyond our ability to prevent or ameliorate. He writes, “The world is thus faced by the paradox that, at least in its higher actualities, it craves for novelty and yet is haunted by terror at the loss of the past, with its familiarities and its loved ones.”4 It may seem odd that Whitehead, a philosopher who made process more basic than substance, should find the prospect of losing the past in the flux so horrifying. He found the solution, however, as many thinkers have, in theological construction: he imagined God as a salvational deity, who takes “a tender care that nothing be lost.”5 The salvation is both ontological and epistemic, preserving existents and their truths. In the long history of philosophy, God is an epistemic deus ex machina at least as often as an ontological deus ex machina.
The future is also unknowable—but for different reasons. We can predict many aspects of the future (to predict is to foretell; etymologically, “to say before”), but predictions are elaborations of our knowledge of the present, not knowledge of the future. The facts of the future remain but possibilities from our perspective—even if there are no alternative possibilities. Even the most ardent of determinists, who believe the future is theoretically predictable from complete knowledge of the present and of the laws of nature, must admit that our individual and collective knowledge of the present state of the world is woefully incomplete—and the state of the world is continually in flux. Moreover, underlying it all is the assumption that the laws of nature will not change.
In addition, there are facts that await a framework of conceptualization that does not yet exist. Galileo could not have known that solar flares produce bursts of radiation, because the framework of theoretical concepts that define electromagnetic radiation was not developed until centuries later. Hippocrates could not have known about vaccinations, because the entire realm of microbiology, the identification of pathogens and antigens, and the technique itself awaited discovery for two millennia. It appears a certainty that new theoretical paradigms, new concepts, will arise in the future—indeed, in the future beheld from any given moment. Not only do we not know what these will be, we cannot even imagine many of them. Like our ability to see with the beam of headlights as we drive on a dark road, our ability to know what lies ahead is limited. Once again, only the prospect of time travel—and then, only time travel in which the traveler’s identity, especially memory, is not altered by the trip—would challenge these limits.
The opacity of the future is also the unknowability of our personal future. We cannot know our future self—rather, selves—for life surprises us. We are ignorant of things that will have an impact on our life and of the ways in which they will alter us. The recognition of this limit may be a source of trepidation: it confronts us with the fragility of our happy life and even our identity. But it can also be a source of hope: it is an argument advanced against suicide, especially in cases not motivated by terminal illness. One who contemplates suicide would deny the unknowability of the future, robbing a future self of its choices and its life.6
Finally, our knowing is obviously limited by our life span. As individual knowers, each of us has our own past and, as our time moves on, much of our personal history is lost to us. I mean “lost” in the sense that it has moved from the merely forgotten to the irretrievable; it has become unknowable. Even further beyond our cognitive reach and more profoundly unknowable to us are the events that will occur after we die. We can learn a lot in the time we have; but death is a sure and final limit of coming to know. Human knowing flourishes and vanishes with human life.
Besides the absolute limit of our life span, our biology sets other important, individual limitations to knowledge. One is the capacity of the human brain. Even our roughly one hundred billion neurons can process and store only so much information. As a species, we transcend these individual limits by a division of cognitive labor resulting in a specialization of knowledge—which we then may store for others’ access. But, of course, there are limits to this too. And we should not rush to assume that storing digitized information is our salvation from ignorance. Unfortunately, having access to information is not the same as knowing. Ask any student who owns a textbook. Just because one has access to vast Internet resources does not mean that one knows everything on the web.7
Moreover, we forget much of what we learn. This is not simply a biological flaw; it is a benefit too—psychologically and physiologically. The rare cases of people who remember everything, who cannot forget, reveal what an affliction it is to be unable to forget.8 If we remembered every sensation we experienced, our cognitive processes would soon be overwhelmed. Forgetting is, in a sense, a cognitive ability. But we no longer know what we cannot remember.
Another important set of limitations is inherent in the very biological systems we use to gain knowledge: our sensory systems. All such systems operate within parameters. Take sight, our dominant sensory system: our eyes register only a tiny range of the electromagnetic spectrum as visible light, roughly wavelengths between 380 and 760 nanometers. Other creatures have eyes that see beyond this range: bees, for example, see a bit further into the ultraviolet range; so-called bee purple designates a color we humans cannot experience. In addition, the average human uses a complex network of three types of photoreceptor cells to discriminate about ten million different shades. But there is (contested) evidence that some humans are tetrachromats, and an extra type of photoreceptor means these people might actually see up to a hundred million different shades. The mantis shrimp has a visual system with at least twelve different types of photoreceptors—one of the most complex visual systems known. The point is that humans with normal chromatic vision are ignorant of the color experiences of creatures with greater discrimination.
In addition, our vision functions only within other constraints. We can see objects only within a certain distance and of a certain size. Our retinas have blind spots. All these limits apply even to someone with “perfect” human eyesight—and such people are a minority. It is well known how inferior our olfactory and auditory systems are compared to those of many other creatures. And we lack whole sensory systems that other animals possess, especially systems used for navigation, like the pigeon’s ability to sense magnetic fields.
Thus, the parameters of our sensory systems, the mediation involved in perception, and the particular sensory systems we use all impose limits on our experience, and hence on what we can know. As Immanuel Kant, who carefully charted the limits of human reason, revealed, we can know that we have such limits, and that we ourselves structure our experience of the world; but we cannot know things-in-themselves, cannot know just what lies beyond or beneath our cognitive reach.
In the history of philosophy, interestingly, skepticism has arisen from the faults of our senses—illusions, variations, inconsistencies, debilitations—more often than from their parameters. But faults are detected and have meaning only by reference to correctness or accuracy. Parameters would limit knowledge even if our systems were fault-free.
In addition, to the extent that our knowledge is constituted in our ability to express it, we are limited by the resources of our language(s), syntactic and semantic. All symbol systems select salient aspects of experience; they become internalized as the lenses through which we experience the world. Some languages have only three different color terms; some modern languages do not distinguish green from blue—yet their speakers all have human eyes. All that is distinctive about humanity and enables our learning—our brain, our perceptiveness, our reason, our language—also serves to structure and to limit our knowledge.
The attempt to state what we know, to frame our knowledge in propositions, can sometimes result in tangles that reveal our limits or that are self-limiting. Perhaps the most troubling are cases in which we can never exemplify a true claim. I have noted that such claims are true but “noninstantiable” (chapter 3), which is to say that we cannot cite a single instance. Nicholas Rescher has analyzed this phenomenon in two related books and has given it the name vagrant reference.9 Consider these examples, poached from Rescher with slight modifications:
And there are the formulations that underlie this chapter:
“Name one,” we might ask in response to each of these. But any attempt to cite an instance or example will be self-defeating. The problem with these propositions is that an existential claim is made about something that is described obliquely and—although it may be true—the reference is made in such a way that it precludes identification of any item that meets the description. There are two loci of epistemic concern: one is the necessity that we must remain permanently ignorant of instances that fit such descriptions; the other is the limits on our ability to verify the propositions themselves. In some cases (such as (e) and (f)), the propositions may admit of logical proofs; but an empirical approach is ruled out for all.10 These are not merely problems of formulation; these are not cases of linguistic gymnastics. Consider this noninstantiable claim: “There are at least two hundred unreported rapes in this city every year.” Such a claim is genuinely cognitive and factive, but it is laden with inexorable ignorance.
There are many other circumstances in which we can justifiably claim knowledge of a general sort without being able to specify instances. As an example, imagine that a deadly disease has taken 25,000 lives each year among approximately 200,000 cases of infection. A vaccine is discovered and a program of mandatory inoculation and education is mounted. In the following year, there are only 2,000 cases of infection, of which 200 are fatalities. Assuming a discounting of other factors, we have reason to claim that the program dramatically reduced infection by 198,000 cases and saved over 24,000 lives that year alone. But whose lives were saved? Whose infection was prevented? We cannot identify anyone specifically—and we will never know. Ironically, our knowledge may sometimes range over great and broad statistical generalizations, while we remain ignorant of the individual cases that constitute that profile.
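The aggregate arithmetic is easy to reconstruct, even though nothing in it identifies a single beneficiary. Here is a minimal sketch, using the hypothetical figures above and assuming the pre-vaccine year stands in for the counterfactual year without the program:

```python
# A minimal sketch of the aggregate arithmetic, using the hypothetical
# figures from the example; the pre-vaccine year is assumed to stand in
# for the counterfactual year without the inoculation program.
baseline_cases, baseline_deaths = 200_000, 25_000
observed_cases, observed_deaths = 2_000, 200

cases_prevented = baseline_cases - observed_cases   # 198,000
lives_saved = baseline_deaths - observed_deaths     # 24,800

print(f"Infections prevented: {cases_prevented:,}")
print(f"Lives saved:          {lives_saved:,}")
# The totals are knowable; which individuals they refer to is not.
```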
Proving causal relationships would, of course, be difficult. Identifying all the causal factors in play that produce any actual state of affairs in a society is an infinite task and a misplaced hope. But in the matter of “lives saved” and similar cases, we would need to identify all the causal links in a hypothetical state of affairs as well: what exactly would have happened if there had been no inoculation program, and to whom. I’ve already discussed the erosion of our knowledge of the past, but that discussion was about the actual past. To retrieve and account for the lost possibilities, the what-if-this-had-happened-instead scenarios, lies beyond our knowledge as well. Though we can speculate, we are facing both reconstructive and predictive problems and their inherent limits. Noninstantiability thus stands as a conceptual limit to knowledge.
Counterfactual conditions may mark yet another limit to our knowledge. In a reflective moment, we may ask, “How different would our country be today if President Kennedy had not been assassinated?” or “If I could alter one crucial early decision I made, what would my life be like now?” This sort of “what if” question specifies scenarios that deviate from actual events. The exploration of such hypothetical scenarios has become a literary and historical genre called “alternate (or alternative) history.” Attempts to answer these questions range from the historically serious, in which careful and scholarly responses are framed, to the creative, in which the question is used simply as a premise for imaginative fiction. Even for the most earnest inquiries, however, the outcome is not truth, but plausibility. We can only speculate—wistfully or gratefully or with other feelings—about the impact of “what might have been.” We can know that certain things are possible, but we cannot know all that would flow from them, were they actual. Counterfactuals invite us to go beyond what is known, yet they embody conceptual limits to what can be known.
James Frederick Ferrier, the Scot whose Institutes of Metaphysic I cited in chapter 1, asserted the following as nearly the most important proposition of his tract: “We can be ignorant only of what can possibly be known; in other words, there can be an ignorance only of that of which there can be knowledge.” For Ferrier, if we are ignorant of X, X must be knowable—and he expands this to include things knowable in practice and in principle, not only to humans but to other “orders of intelligence.” What are not knowable, he says, are necessarily (logically) false propositions like “a part is greater than the whole” or “two plus two equals five.”11 Such propositions are conceptually self-contradictory; they do not represent a limit to knowledge in either the completion or exhaustion interpretation. But since they lie outside all possible knowledge, we cannot be ignorant of them.
I turn next to limits that arise from the obstacles to predictive accuracy: chance, forms of indeterminacy, and free choice. We will find inherent limits there as well.
In the first half of the twentieth century, the vision and the dream of a complete knowledge of the world were dashed forever. The unkindest cut was that the blows came from within the very disciplines that had advanced the model of a completed science: physics and mathematics.
The theorists who developed quantum mechanics postulated and then demonstrated that the subatomic world was not fully knowable. Quanta behave in ways that are inherently random and therefore unpredictable. Ernest Rutherford’s work on radioactive decay showed that we cannot predict which atoms in a sample will release radiation and transmute into another element. When any particular atom decays is not a function of how long it has existed; the emission of radiation is random, yet regular en masse: we can plot the rate of decay across large numbers of atoms. In 1907, Rutherford introduced the concept of half-life, the time span in which, on average, one-half of the atoms in a sample will have decayed. Here again we confront a truth that we cannot instantiate—but for different, empirical reasons: we cannot know which atoms will decay.
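The statistical regularity can be stated compactly. In a standard modern formulation (not Rutherford’s own notation), if a sample starts with $N_0$ atoms and the decay constant is $\lambda$, the expected number remaining after time $t$ is

\[
N(t) \;=\; N_0\, e^{-\lambda t} \;=\; N_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}},
\qquad t_{1/2} \;=\; \frac{\ln 2}{\lambda},
\]

so the half-life $t_{1/2}$ fixes the behavior of the sample as a whole while saying nothing about the fate of any particular atom.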
Werner Heisenberg demonstrated in 1927 that it is impossible to measure both the position and momentum of a particle simultaneously with full precision; the more precisely one measures one variable, the less precisely the other can be known. The uncertainty principle (or indeterminacy principle) presents the paradox of measurement: the observer’s attempt to measure is an intervention that alters the phenomenon itself; in this case, as accuracy is increased for one variable, the other is pushed beyond reach. Thus, measurement, the fundamental activity of quantifying the world, both generates and limits our knowledge.
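In its familiar textbook form (a later formalization rather than Heisenberg’s original 1927 wording), the principle bounds the product of the two uncertainties:

\[
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2},
\]

where $\Delta x$ and $\Delta p$ are the uncertainties (standard deviations) in position and momentum and $\hbar$ is the reduced Planck constant. Driving either uncertainty toward zero forces the other to grow without bound.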
In 1935, Erwin Schrödinger described the superposition of states, in which two conflicting states occur simultaneously—a bizarre aspect of the quantum world. But when an observer views or makes accessible such a phenomenon, the superposition is lost and the state becomes simply one or the other. His illustrative thought experiment, “Schrödinger’s cat,” has become a meme of popular culture. A live cat is placed in a steel container with a lethal substance that can be triggered by the radioactive decay of a single atom. We cannot know whether an atom has decayed, so the cat is both dead and alive—until we open the container to check, of course, when we can determine whether the cat is alive or dead. In such cases, there is no single outcome unless the observation is made.
The world revealed by quantum physics presents not merely bizarre and elusive phenomena but inherent and ineluctable limits to human knowledge. While there are some scientists who hold out hope that these apparently random, inaccessible, paradoxical phenomena can be tamed into traditional causal frameworks, they are a fading minority. Still more weirdness arises and seems to thwart even basic assumptions about the world. This year, quantum scientists claimed to have shown that future events may determine events in the past. A team of Australian physicists announced they have demonstrated that what happened to certain particles in the past depends on observation and measurement of them in the future—a kind of backward flow of causality. One of the researchers declared: “It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it.”12 (I find it difficult in this arena to separate the ontological from the epistemological—claims about what exists from claims about whether we can know what exists—and this time-warping claim will no doubt receive scrutiny by other scientists.)
In 1931, Kurt Gödel published his incompleteness theorems. He demonstrated that, for any consistent formal system at least as complicated as arithmetic, there will be statements that can be seen to be true on independent grounds but that cannot be derived within the system; there will always be at least one true but unprovable proposition. Any consistent formal system that is sufficiently powerful to comprehend arithmetic will, therefore, always be incomplete; we will always encounter formally undecidable propositions.13 Mathematics is the language of science, and the ultimate goal of scientific research, it was assumed, was to explain phenomena as instances of laws that can be expressed mathematically. Gödel’s proofs—never refuted—show that any science modeled as a formal system must always be incomplete. We cannot capture everything in any single explanatory system, however complex we make it.
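The heart of the first theorem can be sketched informally; this is a simplification that assumes the system is sound (proves only truths), which is stronger than the consistency Gödel actually required. For a formal system $F$ rich enough to express arithmetic, one can construct a sentence $G_F$ that in effect says of itself that it is not provable in $F$:

\[
G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G_F \urcorner\right).
\]

If $F$ proved $G_F$, then $G_F$ would be false and $F$ would have proved a falsehood, contrary to soundness; so $F$ does not prove $G_F$—which is exactly what $G_F$ asserts. Hence $G_F$ is true but unprovable in $F$.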
Taken together, these intellectual developments exploded the idealized model of human knowledge: “completed” science, especially scientific knowledge as represented in the mathematical precision of physics. If the physical world contains discontinuous movement (“quantum leaps”); if basic particles behave randomly; if the observer, in attempting to obtain knowledge, necessarily alters the phenomena, sometimes destructively; and if all mathematical explanatory systems are necessarily incomplete—we confront profound limits to our knowledge, to what is knowable even in theory.
In his study of ignorance, Nicholas Rescher makes important observations about the limitations of human knowledge. Understanding his claims requires definitional stipulations for several terms. Facts are actual aspects of the world’s state of affairs; they are features of reality. Propositions are claims regarding facts. Statements are propositions formulated in a language (the same proposition may be formulated in different languages, as different statements). Facts may thus be represented through statements; truth is a property of correct statements. But facts “outrun linguistic limits.” Facts about even one physical object are inexhaustible. Rescher says, “Its susceptibility to further elaborate detail—and to potential changes of mind regarding this further detail—is built into our very conception of a ‘real thing.’”14 Facts are infinite in number because the detail of the world is inexhaustible.
Though we humans are finite beings, we can know universal truths. We can also know generalizations about vast swaths of the world. And we can know many facts about individual things. But we can never fully know all the individual entities about which we generalize. Indeed, we can never fully know even one individual. Its facts are infinite, an infinitude that protects its opacity.
Though our knowledge does expand, the possibilities for further expansion are now contested. As science advances, its epistemic structure becomes more elaborated and filigreed. Though progress is always possible, Rescher says, results are more and more difficult to achieve. “In moving onward we must be ever more prolix and make use of ever more elaborate symbol complexes so that greater demands in time, effort, and resources are unavoidable.”15 Going further, the science writer John Horgan has argued that indeed we live in the twilight era of scientific discovery. The title of his controversial book states his conclusion: The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age.16 Horgan believes that the fundamentals of scientific knowledge, our contemporary understanding of the universe, will not change dramatically: the nature of the solar system, the evolution of life, the natural elements of the world, and so on, are well and truly known. We possess a well-confirmed and deeply embedded understanding of the world. Only details remain. The time for profound discoveries is past. Yes, technology and other forms of applied science will continue to make major strides. Horgan thinks the most transformative form of applied science will be human immortality, the conquering of aging. But even this would not alter our understanding of the universe in any deep way. Most of what is knowable about the natural world and relevant to human life is already known; and at the frontiers of physics, scientists are approaching the limits of what is knowable. Since a completed science—a “theory of everything”—is impossible, contemporary physicists now offer only speculative metaphysical theories that defy confirmation or falsification (theories Horgan calls “ironic” and “theological”). Human beings have a limited capacity to understand the universe of which we are a tiny part—and we are now approaching that limit.
Rescher believes that although science will advance more slowly and major discoveries will be fewer, real progress will continue—and there will still be surprises.17 He observes that previous generations have also thought they understood all the important stuff—and they were wrong. Horgan accuses Rescher of trying to retract a “depressing scenario” with “a happy coda.” Horgan repeats a reviewer’s metaphor: Rescher is whistling in the dark.18
I agree with Rescher on the inexhaustibility of facts and our inability to know all the facts about even a single object. But I am at odds with both Rescher and Horgan on the potential for new knowledge—and these two points are not unrelated. First, I believe both men underestimate the effects of the interpenetration of scientific theory and technology, and the potential for discovery that results from this dynamism.19 Second, while it is true that most discoveries complexify our understanding, now and then a discovery unites, connects, and simplifies. Newton’s discovery of the laws of gravitation is the obvious, paradigmatic example. It is an essential part of Horgan’s thesis that such coherence-bringing breakthroughs are behind us. I see this, however, as an attempt to map our unknown unknowns from too low an altitude. It is simply not possible to fix the nature of unknown unknowns at such a level of specificity. Horgan and perhaps Rescher claim to know the range of the remaining unknown unknowns. Disguised as an affirmation of limits to our knowledge, these claims actually constitute an overreaching of what one can claim to know.
Regardless of varying prognoses for scientific progress, it is generally agreed that human knowledge will always be incomplete. We might consider more closely, however, what complete knowledge would be. Omniscience is the term for possessing complete knowledge, for the state of knowing all that is knowable, for possessing all truth. It is the annihilation of all ignorance, the freedom from all epistemic limits.
For monotheistic religions, omniscience is typically one of the divine perfections, an aspect of the perfect being of God. In Abrahamic religions, it is a crucial property, directing God’s omnipotence, enabling God’s providence, and informing God’s wisdom and justice. These perfections are mutually supporting, for to be omniscient but impotent would be a kind of hell: knowing everything yet being unable to do anything about it. To explore the concept of omniscience, however, let us set aside theological presuppositions and other divine attributes unless and until we are led to them by the necessity of understanding.
First, it is clarifying to distinguish between (1) omniscience as the capacity to know any knowable truth one chooses, and (2) omniscience as the state of actually knowing all that can be known. Thomas Aquinas added a feature to the second interpretation: this actual knowing is nondiscursive, meaning that everything is known simultaneously—it is not the thinking of one thing, then another, and so on.20 That may be considered a necessary condition, since otherwise a discursive omniscience would seem to require an infinite amount of time to know all the facts about even one momentary state of the world. But this has radical implications: it seems to imply that omniscience is not an infinite set of justified true beliefs, but rather an immediate, intuitive comprehension: knowing without believing. Under most interpretations, omniscience entails not only propositional knowledge (knowing that) but also direct knowledge (knowing what it is like) and profound understanding (although most analytic epistemologists treat omniscience purely in terms of propositional knowledge).21
Actual omniscience would mean that a single intelligence—for convenience and theistic neutrality, call it “Omni”—would know every detail of all the states of the world; the thoughts and feelings of all sentient creatures; all relationships, properties, laws, theories; all languages, texts, artworks, and communications; all subjects; all that each person knew and did not know; and more. For all these objects of knowledge, Omni would also know their past and their future. The shadowy implications of God’s foreknowledge for humanity’s free will have generated an entire literature of theological and philosophical gyrations. (Does foreknowledge imply predestination? Does free will repudiate omniscience?) In trying to reconcile omniscience and free will, some have argued, for example, that God can learn new facts as autonomous humans make their choices.22 But whether Omni would be able to learn or not, it surely would never forget—which means it has no need to remember.
There are further entailments. Omniscience implies the knowledge of all possibilities for every epistemic object at every moment, not just their actualities. So, whatever knowledge Omni has of this world must be expanded modally to all possible worlds. Moreover, Omni would hold no false beliefs, since it would know that false beliefs were false, though it would know of the false beliefs held by others. That means Omni is infallible, since no epistemic errors are possible.23 In addition, such a being would necessarily have total self-knowledge, metaknowledge, and transparency: Omni knows it is omniscient. And that, in turn, implies that it knows that there are no other truths to know. Omni must know there are no unknown unknowns.
Some philosophers have argued that because the truths of the world change, Omni’s knowledge must change with them: earlier it was true that I was going to write this sentence; now it is true that I have written it. From a related concern, some have argued that Omni’s knowledge must incorporate indexical truths, for example de se truths (truths about oneself affirmed in the first person), such as “I am writing this sentence.” The indexical term “I,” like “here” and “now,” shifts reference according to the context of utterance. “I am writing this sentence” is a different truth from “Dan DeNicola is writing this sentence.” Still others have argued that omniscience is impossible because it wrongly implies that there is a complete set of all truths. That implication is false because, no matter how we circumscribe the (supposed) set of all truths, there will be additional truths about the set of all its subsets that we must then include, and so on ad infinitum.24 These arguments are presented as challenges to the concept of omniscience and its relation to other perfections. They are introduced and usually discussed in the framework of propositional knowledge, but there may be analogues for the framework of experiential knowledge and understanding. Suppose Omni understands all things: it follows that Omni must understand what it is to be ignorant, what it is to be me and to be ignorant of specific things, and what it is to learn. But these experiences and this understanding are incompatible with Omni’s omniscience. (We pass by the question of whether a good God could truly know what it is to be evil.)
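The argument against a complete set of all truths mentioned above is standardly run through Cantor’s theorem; what follows is a reconstruction of that line of reasoning, not a quotation of the source cited in note 24. Suppose $T$ were the set of all truths, and fix any truth $t$. For each subset $S \subseteq T$ there is a distinct truth—namely, that $t \in S$ or that $t \notin S$—so there are at least as many truths as there are subsets of $T$. Cantor’s theorem, however, guarantees that

\[
\left|\mathcal{P}(T)\right| \;>\; \left|T\right|,
\]

so there are more such truths than members of $T$, and $T$ cannot have contained all truths after all.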
The point of all this—a piling on of epistemic attributes, implications, assumptions, and challenges—is to show that omniscience is a contested and unstable concept. It is threatened by mutability, perspectival knowledge, incompatibility with other attributes, and even logical incoherence. It is the outer limit to knowledge that seems impossible to reach by imagining a set of propositions that are justified, true beliefs—even an infinite set. The difficulty may arise from focus on the completion model of limit, rather than on the exhaustion model.
But the age-old ascription of omniscience to God meant that the universe was thoroughly and completely known by a single intelligence: the Mind of God. All that was knowable was known—and all was knowable to God. The death of God, therefore, carries the implication that the universe is no longer thoroughly known; indeed, there is no longer any assurance of its knowability. The universally unknown, the range of unknown unknowns, and the domain of the unknowable become an epistemic black hole, massive and forbidding.25
For finite humans, what is knowable is limited: some things are unknowable in practice, others in principle, and even all that is knowable in practice is not knowable for finite, mortal creatures. What can we learn, what can we infer from our ignorance?
Making inferences from what is not known or from the absence of evidence or proof has long been considered a fallacy in logic textbooks. It is formally called argumentum ad ignorantiam, an argument from ignorance or an appeal to ignorance. In its basic structure, the argument holds either that because a claim has not been disproved, it is true; or that because some purported fact has not been proved (or because there is no evidence of its truth), the alleged fact is false. For example, an art dealer claims that because a painting has not been proved to be a forgery, it is genuine. An inquisitor concludes that a man is a traitor, because there is no evidence he is loyal. A believer says the Loch Ness monster must exist, because no one has disproved its existence. The first assumes truth because contrary evidence is absent; the last two put the burden on proving the negative (“Prove there is no monster!”).
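Stated schematically (a conventional textbook rendering rather than wording taken from these examples), the appeal runs in two directions:

\[
\frac{p \text{ has not been proved false}}{\therefore\; p}
\qquad\qquad
\frac{p \text{ has not been proved true}}{\therefore\; \neg p}.
\]

Both forms treat the absence of proof as though it settled the matter, which is why, stated baldly, the inference fails.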
In the austere formulations of logic texts and in the stark reasoning of these examples, the problematic inference is easy to see. The fallacy is obvious. But in many real-world situations, the use of this sort of argument actually seems valid. In medical research, a drug may be deemed safe for human use because there is no evidence of side effects—no evidence that it is unsafe. In court, we judge a person “not guilty” when we do not know of any evidence of guilt. These are judgments of practice, of course, and subject to practical conditions, but the arguments on which they rest are appeals to ignorance, nonetheless.
These judgments seem to have implications about the knowledge base and the sampling on which they rest. Suppose I have a bag with ten balls of various kinds, and you ask, “Is my tennis ball in that bag?” I look at each of the ten balls in turn and say, “It is not here.” I found no ball that is your tennis ball, so you would certainly judge that I am entitled to that conclusion; the inference is unquestionably valid. But now suppose I have one hundred balls in the bag and examine only ten, and I say, “Your ball is not in the bag—and that’s true because I have found no ball that is your tennis ball.” The inference is now dubious.
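The contrast can be made concrete in a small sketch (hypothetical code, not anything drawn from the text): an exhaustive search licenses the negative conclusion, while a partial sample does not.

```python
# Hypothetical illustration: exhaustive versus partial search of a bag.

def contains(sample, target):
    """Return True if any examined ball matches the target."""
    return any(ball == target for ball in sample)

small_bag = ["golf"] * 4 + ["cricket"] * 6                   # ten balls
large_bag = ["golf"] * 40 + ["cricket"] * 59 + ["tennis"]    # one hundred balls

# Exhaustive search: every ball in the small bag is examined,
# so the negative conclusion is warranted.
if not contains(small_bag, "tennis"):
    print("Your tennis ball is not in the bag.")

# Partial search: only ten of the hundred balls are examined,
# so the same conclusion would be unwarranted.
sample = large_bag[:10]
if not contains(sample, "tennis"):
    print("No tennis ball among the ten examined—but the bag may still hold one.")
```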
Sorting out the epistemic and contextual factors in which argumentum ad ignorantiam is a legitimate type of argumentation is quite complex and beyond my scope here. I can only point to a comprehensive treatment of the range of such arguments by Douglas Walton.26 But I can observe that there are indeed situations in which our ignorance is proper evidence for knowledge claims. And I can commend a general wariness of such arguments: their broad use enables willful ignorance and conspiracy theorists. Fanatics have ignored evidence of President Obama’s Christian beliefs and claim that there is no proof that he is not a Muslim. Doubters have demanded that researchers prove there is no link between vaccines and autism. (And researchers reply that there is no evidence of such a link—itself another argumentum ad ignorantiam.) Members of Congress have asked the Secretary of State to prove that Iran will not violate a nuclear treaty. The demand for negative proof frequently issues from one who is committed to a belief as irrefutable, immune from evidence. It is a demand that is meant to close dialogue; it is not expected to be met; and, unless there is a specifiable, finite database to draw on, it never will be met.