Important Truths
To divide true beliefs that potentially rise to the level of knowledge from those that do not, theories of knowledge identify a dimension for making the division. Other conditions may also need to be satisfied, but this dimension does the initial work in qualifying a true belief as a plausible candidate for knowledge.
According to justification-based theories, the relevant dimension is the strength of the person’s evidence. According to reliability theories, it is the degree of reliability of the processes generating the belief. For tracking theories, it is how extensive the range of counterfactual situations is in which the person’s belief would track the truth.
Whatever the proposed dimension, there is no nonarbitrary way of specifying a precise point along it at which a true belief becomes a credible candidate for knowledge. Proponents of justification-based theories may insist that for an individual to know P, it has to be more probable than not on her evidence, or if they are willing to concede there is little we know, they may contend that the probability has to be 1, but between these extremes, it is not possible to identify an exact probability greater than .5 and less than 1 as the tipping point. It is similarly not possible for reliabilists to designate a precise degree of reliability as the knowledge boundary. Likewise for tracking theories. Exactly how extensive must be the range of counterfactual situations in which the person’s belief would track the truth? Extensive enough to make the belief into a serious candidate for knowledge.
I have been arguing that the relevant dimension is not evidence, reliability, or truth tracking but rather information about the situation in which P is true. On this view too, there is no way to specify a precise threshold. How much information must one have in order for a true belief to qualify as knowledge? Enough.
On the other hand, there is a negative test. When someone knows P, there aren’t important truths she lacks. There are always gaps in one’s information, but when one knows, the missing information isn’t important.
If there is a dispute about whether someone knows something, this is not a test that will necessarily resolve it. Then again, it is not the job of a theory of knowledge to adjudicate hard cases. Its task, rather, is to explain how and why knowledge intuitions arise and in so doing address various questions and puzzles that arise about knowledge. Among the intuitions to be explained, however, are those about hard cases. A theory of knowledge ought to be able to shed light on what issues are at stake in hard cases and why intuitions about them are apt to vary.
Disputes over whether someone knows something arise in a variety of ways. If two spectators agree that an individual S believes P but disagree whether P is true, it won’t be surprising if they have different views about whether she knows P. Likewise, if they agree that P is true but disagree whether she really believes it or disagree about what additional information she has, it again shouldn’t be surprising if they reach different conclusions about whether she knows. But in addition, their disagreement can be the result of different takes on whether a gap in her information is important, and often it is this that makes hard cases hard.
Disagreements over the importance of something are never easy to resolve, in part because there are different ways in which something can be important. Whenever issues of importance arise, it is always appropriate to ask, important in what respect? It is no different when the issue is the importance of missing information.
Knowledge is true belief plus adequate information. So, the kind of importance relevant to assessments of knowledge is importance with respect to the subject’s having adequate information. If a truth strikes us as one that the subject needs to be aware of in order to have adequate information about the situation, then the truth is important in the relevant sense. Her not being aware of it would incline us to think she doesn’t know.
What kinds of truths tend to strike us as being important in this sense? Whatever their other limitations, intuitions about knowledge stories are a resource for addressing this question. Collectively they constitute an inventory of sorts about the kinds of truths that observers think a subject needs to be aware of if she is to have knowledge.
An especially common type of knowledge story involves a subject who has a true belief but is unaware of some shortcoming that afflicts either her or her belief. Some of the stories, for example, narrate a scenario in which S has evidence for her true belief P but is unaware of something that would undermine this evidence. Although her belief P is true, in many situations this kind of gap in her information would cause problems of accuracy. Hence, the missing information is apt to strike us as important, that is, as something S should be aware of if she is to have an adequate grasp of her situation.
Or consider stories in which S doesn’t realize that the methods or faculties that have led her to believe P are unreliable in the situation in question. Even if the story makes clear that her belief P is true, this again can seem to be an important gap in her information, since in similar circumstances relying on such methods or faculties would lead to error.
In analogous ways, S’s being unaware of other kinds of intellectual shortcomings can strike onlookers as potentially important. Although there is a variety of such shortcomings, it can be tempting to fasten upon stories involving a particular kind of shortcoming and to try to build an entire theory of knowledge around them. Stories in which S has a true belief P but is unaware of the untrustworthiness of the faculties or methods producing the belief motivate reliability or truth-tracking theories. Stories in which S is unaware of truths that constitute key evidence for P make justification-based approaches seem natural, whereas stories in which S is unaware of relevant truths that are widely available to others in her community encourage theories with a social dimension.
Each of these theories has undeniable appeal, because each captures a large class of cases in which the subject seems to lack knowledge despite having a true belief. They also share a limitation, however. Each seizes upon a specific intellectual drawback and tries to make it into the defining characteristic of knowledge, taking precedence over all the others.
My approach, by contrast, is ecumenical. There are any number of intellectual shortcomings that can afflict S or her belief such that if she is unaware of them, she will lack knowledge. None has a privileged status.
Proponents of other approaches, of course, maintain that it is the shortcoming itself—the unreliability of the process causing the belief or the defectiveness of the justification or whatever—rather than S’s lack of awareness of it that is the problem. One way of pressing this point is to imagine that S becomes aware of the deficiency but nonetheless continues to believe P. Wouldn’t we still be inclined to say she lacks knowledge?
For example, retell the barn story so that George becomes aware that the region in which he is driving his car is populated with barn facades so realistic that from the road it is impossible to distinguish them from genuine barns. Yet as he sits in his car and stares out his window, he believes he is looking at a real barn. Since he has happened to stop in front of one of the few genuine barns remaining in the region, his belief is correct, but most listeners will be no more inclined than in the original case to think that he knows.
Agreed, but this is now a new story, and in this new story we are led once again to think George lacks important information. It’s just different information from the original story. In the original he lacked information about the facades. As the story has been retold, the important information he lacks is how the thing he sees from his car window looks close up and from the rear and sides.
Why do we think that it is important for him to have such information? There are various possible rationales. Here is one: in the retold story George is aware that there are highly realistic barn facades in the region, and this defeats the presumptive credibility of his observations from the car window; as a result, if he is to be justified in believing he is looking at a barn, he needs to get out of the car and observe the thing close-up and from all sides, whereas in the original story it was reasonable for him to trust his observations from his car’s window. Here is another: getting out of the car and observing the barnlike thing close-up is the only way in these circumstances for him to have a reliably produced belief about whether what he is seeing is in fact a barn. In other words, some of the rationales for thinking the missing information is important track the explanations that justification theorists, reliability theorists, or proponents of other approaches would give for why George lacks knowledge. These approaches provide a directory to the sorts of gaps that are apt to strike observers as important.
We needn’t choose among these approaches, however. We can be ecumenical. Any and all—reliability accounts, tracking accounts, proper function accounts, justification-based accounts, etc.—can be helpful in identifying the kinds of truths we think the subject must be aware of if she is to have adequate information.
Let’s pursue the point here a bit further, since it might be objected that although this reply works well enough for the revised barn story in which George becomes aware that there are facades in the region, it merely pushes the issue back a level. To illustrate, imagine a case in which a subject S is aware of as many surrounding truths about the situation as one wishes. A reliabilist will still maintain that if S’s true belief P was not reliably produced, it is not an instance of knowledge. In particular, it doesn’t matter if she herself is aware that her belief P is not the product of a reliable process. Ditto for justification theorists, tracking theorists, proper function theorists, and others. They too will claim that no matter how many surrounding truths S acquires, including truths to the effect that her justification is defective or her belief wouldn’t track the truth or was not the product of properly functioning faculties or whatever, she does not have knowledge if her belief P has the shortcoming in question.
Not so fast, however. It may not be as easy as one imagines for there to be circumstances in which S believes as many surrounding truths as possible and yet still has the shortcoming in question. We need to fill in some of the details of the case as a way of testing whether this really is possible. In addition, even if in the end we decide that such a case is possible, once we have fully digested in some detail what it would look like, it may no longer seem so obvious that S lacks knowledge. I will be describing just such a case in the next chapter and using it to illustrate differences between the view that knowledge is to be understood in terms of adequate information and other views.
Before doing so, however, there is another issue to be surfaced, which is that practical as well as intellectual considerations can influence assessments about whether a truth is important enough that S must be aware of it if she is to have knowledge.
One of the core tenets of pragmatist epistemology is that there is no purely intellectual measure for how important a piece of information is. Information about the atomic numbers of various chemical elements is generally regarded as more important than information about how many grains of salt there are in a shaker of salt, but not because the former is intrinsically more significant and not solely for intellectual reasons either. Its importance derives rather from complex links with our lives in all their detail, and in particular, with the full range of human concerns and values.
Intuitions about whether someone has enough information for knowledge are likewise influenced by such links. Consider a foreman who works in a factory that makes highly noxious chemicals and who is responsible for the air purification equipment at the factory. He is aware that the equipment is exceedingly well engineered and aware also that there has never been a problem with it during the two years it has been in operation. His job nonetheless requires him to inspect the equipment daily, which he does on his morning rounds. There is a warning light on the equipment that is green when the equipment is working properly and red when it is not purifying the air. The foreman is unaware that the warning mechanism has just shorted so that even if there were a problem with the equipment, the green light would remain on. He could have disconnected the purifier elements to check whether the warning light is functioning properly, but he does not. He looks at the light, sees that it is green, and believes that the equipment is purifying the air just as it always has for the past two years. Moreover, he is correct. The equipment is working perfectly.
Some listeners in hearing this story will be reluctant to grant that the foreman knows that the equipment is purifying the air, despite the fact that the equipment is in fact doing so and he has lots of evidence to this effect. Why? Because the story has been told to make the gap in the foreman’s information about the warning light seem important. After all, he is responsible for the equipment, and the stakes of his being mistaken are potentially very high.
Consider now another story that is in most ways analogous, only the stakes are not nearly so high. The same foreman before going to work at the factory each day uses an electric toaster to toast a piece of bread. The toaster has a small light that is green if the toaster is working properly and red if the coils are not heating sufficiently to toast bread. The make of his toaster has an extremely high reliability rating, and the foreman has never had a problem with it in the two years he has owned it. Unbeknownst to him, however, the warning mechanism in the toaster has just shorted in such a way that the light would be green even if the heating coils were not working. As he begins to prepare his breakfast, he could have gone to the trouble of disconnecting the coils to test whether the warning light is functioning properly, but he does not. He pushes the handle of the toaster down, the green light illuminates, and he starts to prepare his coffee, believing that his bread is toasting as usual, and indeed it is. The heating coils are toasting the bread just as they have day in and day out for the past two years.
Those who are reluctant to grant that in the first story the foreman knows may be more willing to do so here. If so, why? Because the story has been told to make clear that the foreman has lots of evidence about the reliability of the toaster and to make clear as well that the stakes of his being wrong are minor. As a result, the gap in his information may not seem all that significant.
In Knowledge and Practical Interests, Jason Stanley presents an assortment of examples to illustrate that standards of knowledge tend to become more demanding as the practical stakes go up. Not everyone has been persuaded by his examples but many have, and the view that knowledge is best thought of in terms of adequate information helps explain why.
Whenever S believes P and P is true, there are inevitably gaps in her information about the situation. The question is whether the gaps are important enough to prevent her from having enough information to know. The knowledge stories of contemporary epistemology help answer this question by creating an inventory of truths which, when missing, strike many observers as being important in this sense. A good number of the stories are ones in which the subject is unaware of something that has the potential to have adverse effects on her performance as an intellectual being, but the stories now being considered are ones that draw attention to her being unaware of something that has the potential to have adverse effects on her performance as an agent. If a gap in her information could result in her producing (or not preventing) harm, there is a presumption that the gap is important. Indeed, all else being equal, the greater the potential for harm, the stronger the presumption.
All else is not always equal, however. Closeness also counts. When an individual S believes that there is a red ball on the mat in the hallway, there is no end to the truths associated with the situation, but some, even if otherwise consequential, seem too remote to be important when appraising whether she has enough information to know.
The red ball on the mat in the hallway was purchased at a Target store two months ago; several years ago an identical ball (size, color, brand) was shoplifted from the same Target store; the shoplifter, whose father was a candidate for mayor at the time, was caught and prosecuted; and the father lost the election as a result of the negative publicity, which in turn led to major changes in city policies. These are consequential truths, but we are unlikely to regard S’s ignorance of them as preventing her from having enough information to know there is a red ball on the mat in the hallway, since as far as we can tell, they are not involved in any significant way either with her believing that there is a red ball on the mat or with this belief’s being true.
Some propositions are so closely connected with P that it is neither possible for P to be true without their also being true nor possible for S to believe P without also believing them. At the other extreme are truths so distant conceptually, spatially, temporally, or causally that they play no discernible role in her believing P or in P’s being true. Between these two extremes are truths that play some role, greater or less, either in P’s being true or in S’s believing P. The more direct and essential we regard these roles as being, then all else being equal, the more we will be inclined to regard S’s lack of awareness of them as potentially interfering with her having adequate enough information for knowledge.
Once again, however, all else is not always equal. As discussed earlier, in circumstances of narrow knowledge, little seems to matter beyond the truth of the belief itself. Being aware of additional truths thus tends to strike us as unnecessary, however close or distant they may be. The quiz show contestant who is asked the date of the Battle of Marathon and replies straight away “490 BCE” may not be aware of neighboring truths about the battle, and yet we still may be inclined to grant that she at least knows the date.
Knowledge stories push and pull intuitions about, but the pushes and pulls do not affect everyone in the same way—hence, disputes about knowledge arise. There is, however, a standard dynamic at play in these disputes. For those who think that S has a true belief P but does not know, it will seem that there are truths she lacks that are important for her to have if she is to have adequate information about the situation in question, where the standards of what counts as adequate can be affected by the practical stakes of her being right or wrong. At the extreme are those with skeptical leanings who will be quick to regard almost any gap in her information as important. By contrast, for those who think that she does know, it will seem that the truths she lacks are not important enough to prevent her from having adequate information about P. Indeed, it may even seem that the circumstances are such that little beyond the truth of P itself matters. Its truth swamps everything else.
Some may wish for a theory of knowledge that takes definitive stands on such disputes, but my aim is neither to adjudicate hard cases nor to offer a theory of importance. It is rather to illustrate that it is our takes on the importance of missing information, however formed, that generate intuitions about knowledge stories, and to illustrate also that intuitions about such stories can conflict, and when they do, the complexity and variety of factors affecting them can make it difficult to resolve the disputes.
This is a less confrontational approach than some might desire, but it has the advantage of not pretending that intuitions about knowledge are clean data to which theories must be adequate. Intuitions about knowledge just are varied and they just are messy. The task for a theory of knowledge is not to clean up the mess but to explain it.
This is not to say that there aren’t cases where the adequate information view clearly points in a different direction from other approaches, but it is in extreme cases where these differences are most evident. I now turn to such a case.