14 Fallibility’s Payoff
Skeptical worries often emerge from the recognition that the evidence for a contention does not ensure its truth.1 It might therefore seem that, having scuttled the truth requirement on epistemic acceptability, I can evade such worries. Sadly, I cannot. For the evidence for a contention typically does not ensure that it is true enough either. In this respect, veritists and nonveritists are in the same boat. The way out might seem obvious. Acknowledging our epistemic vulnerability, we should endorse fallibilism, thereby evading skepticism.
I begin this chapter by investigating fallibilism with respect to knowledge. On most construals, it seems to be a concession to epistemic frailty. Our evidence is almost inevitably too sparse to provide the level of justification we would like. We need to live with this. I suggest that such a position is less stable and more problematic than it might first appear. I go on to consider fallibilism with respect to understanding. I argue that, far from being a concession to frailty, it is a source of strength. It both prompts and enables us to deepen our understanding in ways that an infallibilist position would not. I urge that the capacity to make mistakes is an epistemic achievement rather than a failing. I then discuss how actual errors sometimes mark epistemic advances. Finally, I argue that a fallibilist theory of understanding is epistemically richer than an infallibilist one would be. Weaving in mechanisms for detecting, correcting, and compensating for errors strengthens the fabric of understanding.
Fallibilism
Although many philosophers claim to be fallibilists, it is not entirely clear exactly what fallibilism amounts to. It teeters between skepticism and dogmatism. To acknowledge that despite the evidence I could still be wrong seems an entirely appropriate confession of intellectual humility. It recognizes a gap between the support my reasons supply and what it would take to make me infallible. But an epistemic agent’s claim that she could be wrong is more complicated and less obvious than it first appears.
The contention that we are infallible seems unacceptably arrogant. But epistemologists who accept ‘When you know, you cannot be wrong’ and maintain that epistemic agents sometimes know are committed to the view that those agents are infallible with respect to what they know. What does it take to underwrite infallibility? Two candidates are plausible: either the premises entail the conclusion or the premises give the conclusion a probability of 1. Both are simultaneously too weak and too strong. Entailment alone cannot secure infallibility. What is required is entailment from known premises. Unless the premises are known, they convey no right to be sure; for all an epistemic agent can tell, one of her premises might be false. Entailment from a false premise surely conveys no right to be sure. If the agent has to ensure that the premises are known, a regress looms. Something similar occurs if we seek to ground infallibility in probability. It is not enough that the evidence give the conclusion a probability of 1; that evidence must itself be epistemically secure. What justifies treating an item as evidence? Does each bit of evidence also have a probability of 1? Here too a regress looms.
Luckily, we need not attempt to block the regresses. For the standards in question are too demanding in any case. To avoid skepticism about the external world, we must settle for less. Evidentialists maintain that evidence must probabilify a conclusion (Adler, 2002); but the threshold it must reach—perhaps a movable threshold (S. Cohen, 2014)—is a probability of less than 1. Reliabilists maintain that the belief that constitutes knowledge must be produced or sustained by a reliable process—just how reliable may vary with context (DeRose, 2011). But they admit that even an epistemically reliable process sometimes fails. There is always a gap between the justifiers and the conclusion they support.
A critical question concerns how to interpret the ‘might’ in ‘I still might be wrong’. The inability to secure an entailment shows that there is a logical or conceptual possibility of error. Regardless of the strength of my evidence or the reliability of my method, it is still logically or conceptually possible that I am wrong. This no doubt is true, but what should we make of it?
Perhaps whenever there is the remotest possibility of error, I should suspend judgment. If this is what fallibilism contends, it amounts to (or comes perilously close to) skepticism. If, on the other hand, fallibilism maintains that assuming the gap is not too broad, it makes no difference that I might be wrong, it is hard to distinguish fallibilism from dogmatism. The fallibilist and the dogmatist then treat the very same contentions as knowledge, and for the very same reasons. Evidently, the only difference is that the fallibilist displays a sort of mock modesty, while still claiming to know.
If the fallibilist sides with the skeptic, her predicament may be worse. However reliable her methods and however solid her evidence, everyone concedes that some chance of error remains. Ought the agent therefore suspend judgment? To accept a contention is to be willing and able to use it as a premise in cognitively serious assertoric reasoning or as a basis for cognitively serious action (L. J. Cohen, 1992). Suspension of judgment deprives the epistemic agent of that resource. Sometimes such deprivation is entirely reasonable; to refuse to use a contention as a premise in assertoric reasoning or as a basis for action is a mark of epistemic responsibility if the contention’s epistemic standing is doubtful. The detection in 2014 of what appeared to be a distinctive pattern of gravitational waves was taken to support the hypothesis that the universe underwent a period of rapid cosmic inflation just after the Big Bang. But astrophysicists recognized that there was a slight chance that the observed wave pattern was due to cosmic dust. (This turned out to be true.) Because they thought the region where the wave was detected was relatively dust free, the dust alternative was considered improbable. Still, it was probable enough that an epistemically responsible astrophysicist would have to suspend judgment. She could, were she so inclined, use the cosmic inflation hypothesis in hypothetical reasoning, but until the controversy was resolved (should we say until the dust settled?), she should use neither hypothesis assertorically. Suspending judgment in a case like this seems entirely reasonable. But widespread suspension of judgment is more difficult and costly than it first appears. Action requires assuming that things are one way or another. Suspending judgment leaves a person paralyzed. She has no reason to favor any particular course of action. She ceases to be an agent.
Many epistemologists urge that a mere logical, conceptual, or metaphysical possibility does not put a contention in epistemic jeopardy (Adler, 2002). So the mere logical, conceptual, or metaphysical possibility that we are wrong about p gives us no reason to suspend judgment about whether p. The possibility that concerns us, they maintain, is epistemic. And some logical, conceptual, and metaphysical possibilities are not epistemic possibilities. The question fallibilism poses, then, is whether it is possible to know that p, even though it is epistemically possible that ~p. Some, such as David Lewis (1999b) and Fred Dretske (2000), argue that we need not and ought not concede this. Our evidence, they contend, regularly supplies conclusive reasons for knowledge claims. When it does, there is no epistemic possibility that we are wrong. In such cases, we are infallible. When we are infallible, we ought to be dogmatic. For there is no real (that is, epistemic) possibility of error.
The difficulty is that we are seldom if ever in a position to rule out all potential defeaters to a knowledge claim. As Lewis and Dretske recognize, to arrive at the position they advocate requires circumscribing the range of potential defeaters that we need to accommodate. We need only, they maintain, consider what occurs in ‘nearby’ possible worlds or is among the relevant alternatives. Remote possibilities and irrelevant alternatives can be ignored. But a little reflection on the fates of our peers—similarly situated epistemic agents who turned out to be wrong—undermines confidence in this conclusion (Adler, 1981).2 The most familiar examples are lottery cases. The holder of the winning ticket had as good reason to think that she would lose as the losers did. Her winning the lottery was surely a remote possibility.3 But she was wrong. Since she did not know she would lose, and the other ticket holders were in the same epistemic position, it is hard to see how they could know that they would lose. Less fortunate cases include Vogel’s (1990) car theft victims. They had as good reason as their peers to think that they knew where their cars were parked, but, their cars having been stolen, they were wrong. Those who were wrong were as justified as those who were right. That’s the lesson of the gap. We move too quickly when we dismiss some metaphysical, logical, or conceptual possibilities as too remote to qualify as epistemically possible, or as sufficiently epistemically possible to matter.
Dogmatism is hard to square with epistemic misfortune. Fallibilism recognizes that epistemic misfortune is always possible. This suggests that we construe fallibilism as contending that an epistemic agent can know that p, even though her justification is fallible, where for justification to be fallible is for it to be the case either that her justification for p is not conclusive, or that the justification for the evidence that p is not conclusive. According to fallibilism, Leite says, “it is appropriate to attribute knowledge to someone just in case you take that person to have decisive, specific evidence against those possibilities of error which you take to have some reason in their favor” (2004, 247).
Leite here only characterizes what is required to appropriately attribute knowledge. More is needed. For the fact that an epistemically responsible ascriber has no reason to think that a possibility of error obtains does not guarantee that no such possibility obtains. Nor is there any guarantee that two epistemically responsible ascribers, surveying the same situation, would agree about what possibilities of error have some reason in their favor. So it is not straightforward to infer, from the appropriateness of ascribing knowledge to a subject, that the subject actually knows (see Stroud, 1984).
Still, Leite’s characterization can be plausibly elaborated. We might say that an epistemic agent has knowledge that p just in case she in fact has decisive, specific evidence against the possibilities of error that have some reason in their favor. To know, then, does not require eliminating all possibilities of error; it only requires eliminating those possibilities that there is some reason to believe obtain. Possibilities that there is no reason to believe, although real, can simply be ignored.
The difficulty is that sometimes such possibilities are realized. Even if there is just one chance in a million that p is false, when that chance is realized, p is false. So the agent’s claim to knowledge is defeated despite the fact that she had just as good evidence for her claim, had decisive, specific evidence against the possibilities of error that she took to have some reason in their favor, and used just as reliable methods as her epistemically luckier peers. Rather than conclude that the existence of such situations casts doubt on the adequacy of their positions, many epistemologists simply consign them to the realm of epistemic misfortune. If they don’t happen too often, such epistemologists maintain, there’s no problem. Knowledge involves justification and truth. Where we have adequate justification and our belief is true, we know. Where we have adequate justification and our belief is false, we do not. It is regrettable that adequate justification does not ensure truth. But we can live with an element of luck in our epistemology. In the absence of bad luck, we still know (see Pritchard, 2005).
Do the dogmatist and the fallibilist differ in anything other than self-presentation? Both set the level of justification required for knowledge as less than a probability of 1. Both are vulnerable to epistemic misfortune, but seem not to be daunted by that vulnerability. Still, there are differences. Fallibilism, unlike dogmatism, is prey to a variant of Moore’s paradox (Moore, 1991); dogmatism, unlike fallibilism, is prey to Kripke’s dogmatism paradox (Kripke, 2011a).
Moore’s paradox is an assertion of the form
p but I do not believe that p
or
I believe that p, but ~p.
There is a good deal of controversy concerning exactly what kind of infelicity a Moore’s paradoxical assertion involves (see Green & Williams, 2007). It is, however, widely acknowledged that for Moore’s paradox to occur, the sentence must be asserted, not merely entertained or supposed; and the attitude the agent self-ascribes or disavows is belief.
The schema that concerns us,
I know that p, but it is possible that ~p,
is a variant on ‘I believe that p but ~p’. Let us call this the knowledge variant. The first conjunct of the knowledge variant is stronger than its counterpart in the canonical Moore’s paradox, since in asserting it, its utterer ascribes to himself truth and justification, as well as belief. Still, belief is a central component of what he self-ascribes. The second conjunct—‘it is possible that ~p’—is weaker than its counterpart in the canonical form of the paradox. It does not contend that p is false, only that it might be. But like the canonical version of Moore’s paradox, the knowledge variant gives epistemic assurance in one breath and takes it back in the next.
Canonical Moore’s paradoxes are first personal. There is something deeply untoward about ‘I believe that p, but ~p’. There is nothing untoward about ‘Meg believes that p, but ~p’. The knowledge variant is different. Because knowledge is factive, ‘Meg knows that p, but it is possible that ~p’ is as problematic as its first-personal counterpart. Such statements are concessive knowledge attributions (see Dougherty & Rysiew, 2009). The first conjunct attributes knowledge; the second concedes the possibility of error. The divergence from canonical cases may be reason to deny that concessive knowledge attributions are strictly instances of Moore’s paradox, but they are at least close cousins. Let us call Meg’s epistemic predicament quasi-Moore’s paradoxical.
To appreciate the difficulty, consider what an epistemic agent should do in a quasi-Moore’s paradoxical situation. If Meg knows that p, she can responsibly assert that p, give her full assurance to others that p, and use p as a premise in any inference or as a basis for any action where it is relevant, regardless of the stakes. If she knows that p, she has a right to be sure that p. But if there is some chance that she is wrong in thinking that p, her situation is different. If the chance is small enough, perhaps she need not suspend judgment, but minimally she should hedge her bets, refrain from relying on p in inferences or actions when the stakes are very high, and convey her degree of uncertainty about p to her interlocutors. The problem is that she cannot do both. There seems to be no coherent course of thought or action open to the subject in the knowledge variant of Moore’s paradoxical situations. If this is what fallibilism commits us to, we are in a bind.
It might seem, then, that we should revert to dogmatism. Maybe we should set a satisfiable threshold on the justification required for knowledge, and declare that when that threshold is reached, our evidence is conclusive. We should simply deny that there is any real (epistemic) possibility that we still might be wrong. But dogmatism faces a (putative) paradox of its own (Kripke, 2011a).
If S knows that p, then p is true.
If p is true, then any evidence against p is misleading.
So to avoid being misled and to preserve her knowledge, S should disregard any evidence against p.
Therefore, S should be dogmatic about p.
By itself, this is not strictly a paradox. It is merely an unwelcome conclusion. It becomes a paradox when we connect knowledge with rationality. Since rationality requires responsiveness to evidence, on the recommended approach, knowledge and rationality diverge. Dogmatism requires closing our minds to new evidence, lest knowledge be lost. Rationality requires keeping an open mind.
Neither fallibilism nor dogmatism with respect to knowledge seems wholly satisfactory. Dogmatism is intellectually arrogant. Fallibilism’s alleged intellectual humility is hard to get a handle on. The admission that, despite the evidence, an epistemic agent still could be wrong, seems correct. The problem is to see how to construe it as anything other than either an invitation to skeptical paralysis or an expression of mock modesty. Here is where the move from a focus on knowledge to a focus on understanding pays dividends.
Intellectual Humility
Despite the ‘ism’, I suggest, fallibilism is not primarily a doctrine, but a stance—a stance of intellectual humility. The stance is not a passive concession in the face of epistemic frailty. Rather, it is an active orientation toward a domain of inquiry and our prospects of understanding it. Unlike the dogmatist, the fallibilist does not simply dismiss wholly justified or reliably generated false beliefs as bad luck. To do so is to assume that there is no insight to be gleaned from the failures. That is shortsighted.
Suppose a medication for multiple sclerosis is considerably more effective than any available alternative, but it has serious side effects in 2 percent of the cases. A physician would reasonably and responsibly prescribe the medication. She might even say that she knows that it is safe and effective. But she would not stop there. She would also warn her patients about the potential side effects, how to recognize them, and what to do if they occur. That would improve their understanding of the treatment. This is so not only for the patients who actually experience the side effects. All the patients understand their medical situation better if they are aware that the effective medication exposes them to risks and learn the ways to recognize deleterious side effects for what they are. Moreover, if the side effects are severe, researchers would presumably attempt to figure out their cause, to identify the subset of MS patients whose members are vulnerable to them, and to see whether they can find a way to eliminate them or moderate their severity. They too understand the medication better if they recognize the risk. No one would simply say ‘Bad luck!’ or ‘Better luck next time!’ to members of the 2 percent and move on. Such a response would not only be heartless, it would sacrifice useful information. The recognition that they do not know why or to whom the side effects will occur is epistemically valuable.
It might seem that the entire payoff for exercising intellectual humility is that by doing so we either uncover previously undetected errors, close previously open gaps in our understanding, or provide additional assurance that our conclusions were correct. This is surely one sort of payoff. But, I will argue, there is another important epistemic benefit. In taking the possibility of error seriously, we treat it as itself worthy of attention. We put ourselves in a position to identify the potential fault lines in our currently accepted account. If current understanding of the phenomena is wrong, where is the error apt to be located? What are the weakest links in the argument? If there is an error, what would show it? If there is no error, what accounts for the gap? By addressing such issues, we learn more about the phenomena, our assumptions about the phenomena, and our methods for investigating such things. One might think that this is simply changing the subject. Rather than asking about the epistemic status of p, we are now asking something else. This is incorrect. To ascertain p’s epistemic status, we need to determine how it figures in a network of epistemic commitments. That depends on how, where, and why it is vulnerable. Before looking at the epistemic benefits of the possibility of error, let us consider the epistemic benefits of actual error.
Error as a Mark of Success
According to Davidson (2001), the identity of a belief derives from its place in a rich, textured array of relevant, true beliefs. A person who lacks the requisite background cannot have a particular belief. So, for example, unless Paul understands a good deal of physics, he cannot rightly or wrongly believe that electrons lack mass. He would not know what an electron is or why the question of its mass even arises. It might seem that he could garner the relevant knowledge via testimony. But because Paul is ignorant of physics, although he can hear the words ‘electrons have mass’ and recognize the utterance as information-conveying testimony, the utterance is to him like testimony in a language he does not speak. He cannot draw nontrivial inferences, incorporate it into his picture of the world, or use it as a basis for action. At best, he can parrot the statement and perhaps draw trivial logical inferences from it.
Ordinarily, few if any specific beliefs are required to back a given contention. One can approach an issue from different directions. So apart from very general beliefs, such as ‘an electron is some sort of subatomic particle’, no specific beliefs are required to equip the agent to form beliefs about a topic. Moreover, there are bound to be undecidable cases. It is not clear how sparse a complement of relevant beliefs a person can have and still, rightly or wrongly, believe that p. Indeed, the answer might depend on what sort of belief is in question. If little is known or reasonably believed about the topic, a relatively sparse constellation of relevant beliefs might suffice. If plenty is known, then the agent’s constellation of beliefs might have to be considerably denser. Nevertheless, a fairly wide and dense cluster of relevant beliefs is, Davidson maintains, required to anchor each particular belief.
Davidson’s point turns on the fact that most of a person’s beliefs about a topic are trivial. He recognizes that there can be profound disagreements about such matters as whether electrons have sharply demarcated boundaries, with no resolution in sight. The assurance that the parties to the dispute are all talking about the same thing (and that that thing is the electron) is based on their believing a lot of the same electron-relevant trivia. They believe, for example, that electrons are subatomic particles, that they are small, that somehow they are constituents of material objects, that they are more plentiful than artichokes, that they are not cats, and so forth. The number of a person’s trivial true beliefs about a topic swamps the number of her controversial beliefs about it.
Davidson insists that most of the beliefs that constitute the background must be true. I disagree. Rather, sufficiently many of the agent’s relevant beliefs must be at least true enough and reasonably accepted. If we insist that for Fred to have beliefs about quarks, most of his beliefs about quantum mechanics must be true, we ask too much. Perhaps most (or, at least, enough) of what is currently accepted in quantum mechanics is true enough. But it would be unsurprising if many of the contentions of current theory turn out to be not exactly true.
I suggest that truth per se is not required. One reason is that at the cutting edge of inquiry, and often well inside the cutting edge, an epistemic agent’s opinions may be at best true enough—true in the ways that matter and to the extent that they matter for their effective functioning in her cluster of epistemic commitments. Another reason is that truth is the wrong currency for assessing the opinions of babies and nonhuman animals, as well as for assessing adult opinions whose modes of representation are not truth apt.
Still, the idea that the identity conditions of beliefs and opinions depend on their place in a cluster of reasonably accurate cognitive commitments seems right. If Paul is ignorant of physics, he has no idea what an electron is or what it takes for a particle to have or lack mass. Too great an ignorance of a topic leaves a person out of touch with its subject matter. He can have no views about it. If Paul cannot tell an electron from a quark, then nothing in his cognitive system equips him to have views about the one but not the other.
It follows that to be in a position to make a mistake marks a significant epistemic achievement. Only someone who has some understanding of a topic has the resources to have mistaken beliefs about it. Only someone who understands a good deal about it has the resources to make a significant mistake. Only because Fred is cognizant of quantum mechanics can he harbor the erroneous opinion that quarks and antiquarks exchange charms. (They exchange gluons.) Being in a position to have erroneous beliefs about a topic requires a significant measure of understanding of that topic.
Learning from Our Mistakes
From early childhood, we’re advised to learn from our mistakes. Perhaps this means that we should learn not to make a particular mistake again. Once you’ve discovered that you get burned when you touch a hot stove, you should permanently refrain from touching hot stoves. Karl Popper (2002) considers this the key to science. Science, he urges, can never prove anything. It can only disprove. Thus, he concludes, scientific progress consists in formulating theories and attempting to disprove them. Each time we generate a disproof, we make progress. We know that that particular theory is false. If we do not make that mistake again, we get closer to the truth. What we learn from our mistake, then, is that things are not the way the mistaken theory says they are.
Where the mistake concerns an isolated fact, with few and weak connections to the agent’s other epistemic commitments, that the proposition is false may be all that is available to learn. Joe’s mistaken opinion that King Philip I of Spain was King Philip II of Portugal is a mistake of this kind. (Philip II of Spain was Philip I of Portugal.) Similarly when mistakes are wild guesses. When a student discovers that he is mistaken in thinking that the value of Avogadro’s number is 17, he is not much closer to knowing the actual value of Avogadro’s number. Still, learning from mistakes through a process of elimination is exceedingly labor-intensive, and unlikely to get very far. There are indefinitely many ways the world might be. So the idea that we will get to the truth by sequentially eliminating hypotheses that have shown themselves to be mistaken is not promising. Neither individually nor as a species will we survive long enough to eliminate all the false alternatives.
This characterization of the issue treats every error as a stab in the dark. When we understand little about a topic, we are not in a position to learn much from a mistake. But our situation is rarely so bleak as this suggests. Usually, we understand enough that the range of plausible alternatives is restricted. If Maria realizes there are only three configurations in which a particular protein might fold, eliminating one of them yields considerable information about the protein—it folds in one of the remaining two ways. If Bill is aware that only five candidates are running for office, the news that one has withdrawn again yields considerably more information than if he thought that any one of the nearly 7 billion people on earth might be elected.
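To make the contrast concrete, here is a rough information-theoretic gloss; the equal-likelihood assumption and the arithmetic are mine, offered purely as an illustration rather than as part of the cases themselves. Measuring uncertainty in bits, eliminating one of Maria’s three candidate configurations yields
\[
\log_2 3 - \log_2 2 \approx 1.58 - 1.00 = 0.58 \text{ bits},
\]
and eliminating one of Bill’s five candidates yields
\[
\log_2 5 - \log_2 4 \approx 2.32 - 2.00 = 0.32 \text{ bits},
\]
whereas eliminating one person from a field of \(N = 7 \times 10^{9}\) equally likely possibilities yields only
\[
\log_2 N - \log_2 (N-1) = \log_2\!\frac{N}{N-1} \approx \frac{1}{N \ln 2} \approx 2 \times 10^{-10} \text{ bits}.
\]
On these assumptions, the same piece of news—one alternative is out—removes more than a third of Maria’s remaining uncertainty, but a vanishingly small fraction of the uncertainty attaching to the 7-billion-person field.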
But even if we grant that our cognitive situation is frequently better than that of the student who is wildly guessing values for Avogadro’s number, it might seem that our prospects of learning much from a mistake are bleak. The problem is a consequence of holism. Quine argues, correctly I believe, that “statements about the external world face the tribunal of experience not individually, but only as a corporate body” (1961, 41). This means, as Walden says, “We never properly get evidence for a proposition; we get evidence for a proposition relative to methodological prescriptions about what counts as evidence for what.… These methodological prescriptions are also part of our theory, and thus … they are actively confirmed or denied along with the rest of the theory” (2007). The entire cluster of factual and methodological commitments is confirmed or disconfirmed together. It follows that what we learn when we discover that we have made a mistake is that something in a wide constellation of commitments is wrong. But the discovery of the error alone does not target any particular member of the constellation.
What this bleak assessment overlooks is that typically not all of the elements of the constellation are equally vulnerable. Since constellations of epistemic commitments overlap, some elements of the constellation that contains a mistake may be independently supported by their roles in other constellations that have been confirmed. Then we have prima facie reason to believe that they are not the locus of error. The more solidly supported our background assumptions, the more precise the focus a mistake can provide.
At the opposite extreme from random guesses are cases where we are quite confident that we know what will happen, but turn out to be wrong. In such cases, our mistaken conviction is a telling error. An experiment that to our surprise fails to confirm a seemingly well-established theory is such a case. Let me briefly describe such an experiment (Suzuki et al., 2007). A plasmid is a circular DNA molecule found in bacteria. Inserting mutations into plasmids is a common technique in genetic engineering. Bacteria containing those plasmids then transfer the mutation to other bacteria through conjugation, a process by which bacteria exchange genetic material. That is the established, well-confirmed background against which the following experiment took place. A bacteriologist in Marcin Filutowicz’s laboratory attempted to insert two different mutations into a single bacterial plasmid. Given the previous successes and the then current understanding of bacteriology, he had good reason to believe he would succeed. Introducing mutations into plasmids is a boringly routine task. But his belief was mistaken. No matter how often he tried, and no matter how carefully he proceeded, he could not produce a live bacterium containing the two mutations. This constituted a surprising experimental failure. In light of the then current understanding of bacteriology, the procedure ought to have worked.
A Popperian might conclude that what we learn from this mistake is that one cannot introduce the two mutations into the same plasmid and obtain a live bacterium. A Quinean might conclude that there is a mistake somewhere in the cluster of assumptions that led bacteriologists to believe that two mutations could be introduced at once. Both would be right. We do learn these things. But, I suggest, we learn something more.
Introducing either mutation alone did not produce the untoward result. The background assumptions and methodology were the same whether one or two mutations were introduced. So rather than simply concluding that something must be wrong somewhere in the vast constellation of substantive and methodological assumptions, Filutowicz was in a position to zero in on the error—to recognize that it arose from thinking that if you can successfully introduce each of the mutations, you can successfully introduce both. That is, given the depth and breadth of the well-established background commitments, he could use the mistake to probe the current understanding of plasmids.
The mistake put him in a position to consider more carefully what occurs within a plasmid when mutations are introduced. Having evidence that the two mutations together did something that neither did alone, he hypothesized that when the two mutations were introduced together, they caused the plasmid to overreplicate and destroy the containing bacterium. He confirmed the hypothesis and went on to devise a bacterium that suppressed the overreplication, thereby creating a ‘Trojan Horse’. The new bacterium contains the normally overreplicating plasmid. In conjugation, it passes on the propensity to overreplicate to bacteria that cannot survive the process. This yields an antibacterial agent that evidently does not trigger antibiotic resistance.
Because so much was understood about bacteriology and about the process of introducing mutations into plasmids, once the error was discovered, the mistaken assumption was extremely informative. Filutowicz and his associates did not just learn that you cannot introduce the two mutations at once, or that there is something wrong somewhere in the cluster of assumptions that led them to think that they could do so. Because the mistake focused attention on the replicating behavior of plasmids, it opened the way to new and fruitful insights. If it turns out that pathogenic bacteria are incapable of or less prone to developing resistance to the ‘Trojan Horse’, then the fruits of the scientist’s mistaken belief will be far more valuable than the insights that would have been gained had his original expectation about the experimental outcome been true.
One other case worth mentioning is the Michelson-Morley experiment. At the end of the nineteenth century, physicists recognized that light consists of waves. They assumed that waves require a medium of propagation. Since light waves travel through space, they posited that space was filled with a medium, the luminiferous ether. The objective of the experiment was to measure the flow of ether across the Earth as the Earth travels around the sun. Although the experiment was designed and conducted with exquisite care, the result was null. No ether drift was detected. Over a period of years, the experiment was redesigned and ever more sensitive interferometers were used, to no avail. The belief that light consists of waves in an ethereal medium is mistaken.
This familiar story may seem to have the same moral as the previous one. But the payoff is different. The cognitive consequences of the mistaken belief that Filutowicz and his colleagues shared were local and limited. Because the tightly woven web of commitments they relied on consisted in large part of independently confirmed strands, they could reasonably quickly identify the erroneous belief and see what routes of inquiry it opened up. The rest of the commitments largely held firm when the mistake was corrected. The Michelson-Morley experiment was different. There too, scientists had a tightly woven fabric of established cognitive commitments that led them to believe that the experiment would succeed. Their failure eventually convinced them that luminiferous ether does not exist. But this did not, and could not, lead to a local and limited revision of their understanding. It tore the fabric of cognitive commitments apart. For if light waves do not require a medium of propagation, light is radically different from what science supposed. And if that is so, then many of their other assumptions about matter and energy had to be revised as well. I do not claim that the Michelson-Morley experiment was a ‘crucial experiment’ that by itself falsified Newtonian physics. Rather, its effect was Socratic. It made manifest to the scientific community the extent to which they did not understand what they thought they understood.
If an opinion is supported by a tightly woven tapestry of reasons, and the opinion turns out to be erroneous, more than that particular opinion must be revised or rejected. The question arises: How could it be wrong? What have we been missing, or overlooking, or underestimating, or misconstruing? The realization that this is not the way things are in a particular area can often afford avenues of insight into the way things are. At the very least, it enables us to focus attention on particular aspects of our system of commitments. It not only motivates us to ask questions we would not otherwise ask, it often also provides the resources for answering them.
The human propensity for error is typically regarded as a regrettable weakness. Certainly the propensity to make careless mistakes is a weakness. So is the propensity to jump rashly to erroneous conclusions. But, I have suggested, the propensity to make thoughtful, educated mistakes may be a strength. This is not to say that every error is felicitous. Some, like wild guesses, provide virtually no information beyond the fact that things are not the way we supposed. Some, although well grounded, are simply off the mark. Some are cognitively (and often morally) culpable. They fail to take account of available information and therefore lead us in the wrong direction. But sometimes, we get things wrong in epistemically fruitful ways. Once discovered, such errors provide not just incentives but resources for serious, focused, effective inquiry. By revealing not only that but also where we have got things wrong, they point us in the direction of advancing our understanding. We are lucky we are disposed to make such mistakes.
The Possibility of Error
We have seen that the capacity to be mistaken about something is an epistemic achievement and that actual errors can embody and engender insights that investigators would otherwise miss. But the issue for fallibilism is whether there is any epistemic value to recognizing the possibility of error. Let us grant that little is to be gleaned from the unfocused possibility of error that global skepticism presents. That Fred would be wrong in a demon world shows only that the content of his belief is not an obviously necessary truth. He probably recognizes that anyway. But the capacity to identify local fault lines—to see where, even if skepticism is false, errors might creep in—is a valuable epistemic asset.
Suppose that rather than simply taking a test result at face value, investigators seriously entertain the possibility that it is a false positive. They may already know the probability of its being a false positive. Let’s say that it is 3 percent. The issue before them is whether the case before them is one of the 3 percent. How might they tell? In focusing on this question, they put themselves in a position to learn more about what the test detects and how exactly it does so, what range of conditions it is sensitive to, what the probabilities of the various conditions are, whether there are circumstances in which the test is particularly likely to yield false positives or subjects for whom it is particularly likely to give misleading results. Answering such questions does not always require amassing additional evidence. Sometimes the answers can be found simply by reexamining the data at hand. By, for example, using sophisticated regression techniques to focus on what the data discloses about the 3 percent, they may be able to extract additional information. The idea that the only thing the evidence for p discloses is whether p is the case is shortsighted. It blinds epistemic agents to additional information embedded in the evidence.
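How the further questions bear on ‘the 3 percent’ can be made explicit with a minimal Bayesian sketch. Reading the 3 percent as the test’s false-positive rate (the chance of a positive result when the condition is absent), and assuming—purely for illustration, not from the case itself—a sensitivity of 95 percent and a base rate of 1 percent for the condition in the tested population, with D for the condition and + for a positive result,
\[
P(\neg D \mid +) = \frac{P(+ \mid \neg D)\,P(\neg D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
= \frac{0.03 \times 0.99}{0.95 \times 0.01 + 0.03 \times 0.99} \approx 0.76.
\]
On those assumed figures, roughly three-quarters of positive results would be false positives, even though the test errs on only 3 percent of unaffected subjects. Whether a particular positive is likely to be among them thus depends on exactly what the investigators are prompted to examine: what the test is sensitive to, how common the relevant conditions are, and in what circumstances or subpopulations misleading results cluster.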
It is fashionable to interpret modal claims as truths about other possible worlds. So to contend that a hypothesis could be false is to say that there exists a possible world where it is false. For our purposes, this interpretation is unhelpful. Rather, we should recognize that the possibility of error is a property the hypothesis has in the actual world. That property—a vulnerability—derives from the nature and strength of the support the hypothesis gets from the account and the evidence for it. What it portends depends on the weave of the fabric of commitments that constitute the account and on the contribution of the hypothesis to the strength of the fabric. If an erroneous contention is tightly woven into a fabric of epistemic commitments, an epistemic agent could be wrong about it only by being wrong about many other things. The contention that species evolved is supported not just by biological evidence (from genetics, anatomy, physiology, ecology, etc.) but also by the findings of other sciences. If it were false, geology would have to be badly wrong about how fossils are formed and embedded in sedimentary rock, chemistry would have to be badly wrong about organic compounds, and physics would have to be badly wrong about carbon dating. Not easily could science be so wrong as to discredit the claim that species evolved. If, on the other hand, a contention is a mere tassel, loosely dangling from a fabric of understanding, it could easily be wrong. Urban legends are a case in point. It is said that alligators, having been brought from Florida as babies and flushed down the toilets when they grew too big, now roam the New York City sewers. We would not have to be wrong by much to be wrong about that. Our only reason for believing it is that we had it from a putatively reliable source. On learning that it is wrong, we readily give it up and perhaps downgrade our assessment of the reliability of our source.
The value of recognizing where we might be wrong is not only, or even mainly, prudential. It is not just that an agent attuned to the possibilities of error can buttress her defenses. For possibilities of error disclose aspects of the topography of the space of reasons. We appreciate how the network of commitments hangs together when we recognize how it might fall apart, how easily it might fall apart, and what the consequences of its doing so would be.
Appreciating the possibility of error is not just ruefully admitting, ‘Well, despite my evidence, I could still be wrong’. It involves recognizing the ramifications of potential errors—the damage they would do to a network of epistemic commitments. That recognition provides incentives to double check results, shore up findings, hedge bets. It also provides incentives to develop methods for doing such things. A first stab at double checking is to rerun the same test, redo the same calculation, reexamine the same evidence. This yields a measure of assurance, for it functions as a stay against carelessness. But to run the same test twice does not protect against underlying misconceptions. So investigators do well to approach their conclusion from different directions: do different tests, perform different calculations, run different regressions, scrutinize the data from different perspectives. Granted, a result that withstands such reconsideration might still be incorrect. But if investigators demonstrate that a finding stands up to diverse forms of testing, they ensure that they would have to be wrong about a lot to be wrong about it. Because they have woven it into their fabric of commitments, such a result is stronger than the result they would obtain by simply amassing more evidence using the original method.
If we recognize that intellectual humility is an epistemic asset, don’t we still face a variant of Moore’s paradox? The understanding variant is:
My understanding of φ embeds my accepting that p, but it is possible that ~p.
Is this a problem? Let us temporarily set aside the worry that in uttering the understanding variant I give my assurance with one breath and take it back with the next. Still there is the question of what I should do. We saw that the subject in the quasi-Moorean predicament brought on by the knowledge variant was in a bind. There seemed to be no coherent course of reasoning or action available to her. The subject in the situation described by the understanding variant has no such problem. Since she accepts that p, she can use p in inference and action when her ends are cognitive. But because her understanding of φ embeds an appreciation of her epistemic vulnerability in accepting that p, it also gives her reason to double check, to hedge her bets, to be sensitive to intimations of error, and to convey her level of epistemic insecurity to her interlocutors. To say that she can use p for inference and action does not entail that she can or should do so blindly.
A consequence of fallibilism is that concessive knowledge attributions are often true. When I know that p, there remains some possibility that ~p. When my understanding of φ embeds a commitment to p, there remains some possibility that ~p. If we disregard the possibility of error, we lose valuable information. So why are concessive knowledge attributions untoward? I suggest that the reason is Gricean. The second maxim of quantity is “Do not make your contribution more informative than is required” (Grice, 1989, 26; see also Dougherty & Rysiew, 2009). If everyone knows, and everyone knows that everyone knows, that empirical claims are fallible, then the fallibility of an empirical claim goes without saying. To mention that I might be wrong is to highlight the possibility of error—to make it salient. I ought to say that I might be wrong only in circumstances where I consider p’s epistemic standing especially vulnerable. But in such a case, I ought not claim to know that p. What goes without saying ought not be said.
I have argued that fallibilism is not a higher-order stance toward our understanding of a topic. It is, or should be, woven into the fabric of understanding. By recognizing the possibility of error, we can exploit our epistemic vulnerability and gain insight into our understanding of a topic. Rather than being a weakness, our vulnerability to error is a strength.
Notes
1. This chapter was made possible through the support of a grant from the Intellectual Humility Project of St. Louis University and the John Templeton Foundation. The opinions expressed here are those of the author and do not necessarily reflect the views of the Intellectual Humility Project or the John Templeton Foundation.
2. Adler’s position evolved. So the skepticism he evinces in Adler (1981) is not present in his later theory.
3. Pritchard disagrees. But to get the result that the probability of winning a fair lottery with long odds is not remote, he relies on a very strong safety principle. “Safety III: For all agents … if an agent knows a contingent proposition φ, then in nearly all (if not all) nearby possible worlds in which she forms her belief about φ in the same way as she forms the belief in the actual world, that agent only believes that φ when φ is true” (2005, 163). If knowledge requires Safety III, we have very little knowledge. Moreover, it seems at least odd to say that although winning the lottery is very improbable, it is also very possible. To make such a claim seems to unduly downplay the force of the evidence.