TWENTY-FOUR

Quine’s Question Mark

Logic chases truth up the tree of grammar.

(W. V. Quine 1970, 35)

Willard Van Orman Quine was born in Akron, Ohio, on “anti-Christmas,” June 25, 1908. He died on Christmas 2000. Beginning a lifelong affiliation with Harvard University, Quine wrote a dissertation on Principia Mathematica under the supervision of Alfred North Whitehead. Although Quine made contributions to computer science, he continued to use his 1927 Remington typewriter. As a logician, he “had an operation on it” to change a few keys to accommodate special symbols. “I found I could do without the second period, the second comma—and the question mark.” A reporter asked, “You don’t miss the question mark?” to which Quine answered, “Well, you see, I deal in certainties.”

I think paradoxes are riddles that overload the audience with good answers. Since a riddle adopts the form of a question, I doubt that Quine’s modified typewriter can fluently formulate paradoxes. However, Quine is responsible for the most influential definition of “paradox.”

CARTER’S DOOMSDAY ARGUMENT

The first sense of “paradox” listed by the Oxford English Dictionary is “A statement or tenet contrary to received opinion or expectation.” Quine thinks this overlooks the central role of argument. In “The Ways of Paradox,” Quine develops the idea that “a paradox is just any conclusion that at first sounds absurd but that has an argument to sustain it.” (1976, 1) The doomsdayer’s “The end is near!” is a “tenet contrary to received opinion.” But it is a paradox only if backed with a good argument.

Surprisingly, such an argument has arisen from the interaction between science and philosophy encouraged by Quine. The cosmologist Brandon Carter (1974) notes that in the absence of any evidence that I am special, I should regard myself as being located in the same segment of history as the average man. Since the population has been growing exponentially, most people have recent birth dates. Therefore, I should assign a surprisingly high probability to the hypothesis that I am writing near the end of human history.
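
To see why exponential growth does most of the work, consider a toy model (my own sketch, not Carter’s or Leslie’s) in which the population doubles every generation. Under the assumption that I may treat myself as a random sample from everyone who ever lives, I should expect to have a recent birth date:

```python
# Toy model: the population doubles every generation, so generation g contains 2**g people.
# Of everyone who has lived by generation G, what fraction was born in the last k generations?
def fraction_born_recently(G, k):
    total = sum(2 ** g for g in range(G + 1))
    recent = sum(2 ** g for g in range(G - k + 1, G + 1))
    return recent / total

# A randomly sampled human almost certainly has a recent birth date:
print(round(fraction_born_recently(G=30, k=2), 3))   # 0.75
print(round(fraction_born_recently(G=30, k=5), 3))   # 0.969
```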

Does Carter’s argument “sustain” the surprising conclusion? The philosopher John Leslie defends Carter’s argument at book length. In The End of the World, Leslie contends that the doomsday argument gives us extra reason to respond to threats of human extinction.

I used to think the doomsday argument commits a fallacy that I could diagnose on a Sunday afternoon. But each apparent refutation was followed by a reply that left the doomsday argument essentially intact. After a month of Sundays, the resilient doomsday argument earned my grudging respect (though not my assent). I think this robustness is what Quine is driving at when he talks of an argument sustaining a surprising conclusion.

VERIDICAL AND FALSIDICAL PARADOXES

As may be surmised from the positive associations of “sustain,” Quine believes some paradoxical arguments have true conclusions. His illustration of a veridical paradox is drawn from The Pirates of Penzance. The protagonist, Frederic, is 21 years old and yet has had only five birthdays. Although this seems like a contradiction, we see how it must be true after being informed that Frederic was born on February 29. Leap years make it possible to be age 4n on one’s n-th birthday. Quine pictures veridical paradoxes as lines of reasoning that are eventually vindicated.

Quine does not mean that all sustaining arguments are sound, for he thinks many paradoxes are false conclusions. He calls these “falsidical paradoxes.” Nor does Quine think that a sustaining argument must be deductively valid. For Quine thinks that all sustaining arguments for falsidical paradoxes are fallacious.

Quine thinks antinomies differ from both veridical and falsidical paradoxes by producing “a self-contradiction by accepted ways of reasoning.” (1976, 5) Recall Russell’s antinomy about the set that contains all and only those sets that do not contain themselves. Is this set a member of itself? Quine says this paradox “establishes that some tacit and trusted pattern of reasoning must be made explicit and henceforward avoided or revised.” Quine recommends that we “inactivate” the antinomy by adopting grammatical rules that prevent it from being formulated. He extends this proposed ban to semantic paradoxes such as the liar by requiring all uses of “true” to be relativized to a language. “Violations of this restriction would be treated as meaningless, or ungrammatical, rather than as true or false sentences.” (1976, 8) Quine admits that it seems repressive to ban talk of simple “truth.” But he predicts that, in time, the sense of artificiality will disappear and the self-referential antinomies will become falsidical paradoxes. Set theorists will think Russell’s antinomy commits a fallacy just as mathematicians now think Zeno’s bisection paradox simply mishandles the concept of a convergent series.

But hold on, Professor Quine! Weren’t falsidical paradoxes supposed to have false conclusions? If the sentences formulating Russell’s antinomy are meaningless, then they cannot be false; nor can they even be conclusions that are sustained by arguments. All conclusions are meaningful statements. Quine’s definition would therefore imply that the antinomies of self-reference are not genuine paradoxes.

Recall that Jean Buridan studied contingent liar paradoxes: Mr. Straight asserts “The next thing Mr. Crooked says is true” and Mr. Crooked says “What Straight said is false.” If Crooked said, “118 countries received a visit from Quine,” then both statements would have been true. If the liar paradox is meaningless (as I believe Quine is correct in contending), then the contingent liars show that meaninglessness is sometimes undetectable by the speaker. The internal rationality of the speaker is not enough to guarantee that his utterances are meaningful.

If all sustaining arguments had to be deductively valid, then there would be no inductive paradoxes. But Quine must concede that there are paradoxes in which the surprising conclusion only purports to be probable. Consider the following instance of the birthday paradox. Professor Statistics predicts that two of his students share a birthday from the premise that there are forty students in the class. At first, the professor’s conclusion merely seems rash. The paradox emerges when Professor Statistics divulges his reasoning: “‘There are forty students’ gives my conclusion a probability of 89.1 percent. To see why, picture a calendar with 365 days on it. Mark your birthday. A second student now marks her birthday. She has a probability of 364/365 of marking an empty day. The third person to mark the calendar has a 363/365 chance of marking an empty day. The chance that all N people mark empty days is (365 × 364 × 363 × ... × (365 - (N - 1)))/365^N, so the chance of a shared birthday is 1 minus that quotient. So when there are 23 people there is a 50.7 percent chance of a shared birthday. When N = 40, the formula implies that the probability of a shared birthday is 89.1 percent.”
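
The professor’s figures check out. A short computation (my own sketch of his formula, keeping his idealization of 365 equally likely birthdays) reproduces the 50.7 and 89.1 percent values:

```python
def shared_birthday_probability(n, days=365):
    """Probability that at least two of n people share a birthday, assuming each
    birthday is independently and uniformly distributed over the available days."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1 - p_all_distinct

print(round(100 * shared_birthday_probability(23), 1))   # 50.7
print(round(100 * shared_birthday_probability(40), 1))   # 89.1
```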

But having gone through all this, further suppose Professor Statistics has been unlucky: none of the forty students shares a birthday. His paradoxical prediction turns out to be false even though it was backed by a true premise and an appropriate rule of inference.

What makes the professor’s prediction paradoxical is the reasoning behind it. The reasoning does not need to be perfect. Like most purveyors of the birthday paradox, Professor Statistics skated over the fact that some years have more than 365 days. Nor does he consider travelers who lost their birthdays while crossing the International Date Line. (An old man could die without having any birthdays.) The paradox survives because these omissions are insignificant.

We could try to force each paradoxical induction into the deductive mold by treating it as an enthymeme (an argument with an unstated premise or conclusion). Each induction would have the tacit premise “If the stated premises are true, then the conclusion is true.” This ploy renders all inductions, both good and bad, deductively valid. Questions about reasoning are turned into questions about the truth of this postulated conditional. This maneuver does not resolve the narrowness of a purely deductive definition of “paradox.” For consider the original inductive arguments that are not treated as disguised deductions. All of them remain paradoxical.

THE NEW RIDDLE OF INDUCTION

Formulations of inductive paradoxes are more likely to become outdated. A valid deductive argument remains valid whatever new information comes along. But the cogency of inductive reasoning is affected by the addition of new premises. I was taught Nelson Goodman’s (1906-1998) “new riddle of induction” from Brian Skyrms’s Choice and Chance. Skyrms reviews how John Stuart Mill inaugurated the formal study of induction by codifying the sort of inference patterns experimentalists love. Just as Aristotle codified argument patterns that ensure deductive validity, Mill searches for argument patterns that make the conclusion probable given the premises. Here is a simple example: “All past Fs are Gs, therefore, the next F will be G.” In 1946 Goodman published an objection to the whole enterprise of inductive logic. It attracted scant attention. In 1954, he repackaged the idea. Goodman’s new presentation borrowed “gruebleen” from James Joyce’s novel Finnegans Wake. In Skyrms’s formulation, “grue” means “is green and observed before time 2000 or is blue and observed during or after 2000.” Suppose all the examined emeralds before 2000 are green. What should the “grue” speaker expect in the year 2000? Given the rule “All past Fs are Gs, therefore, the next F will be G,” he should predict that the next observed emerald will be grue. This means the “grue” speaker is predicting that the emerald will be blue! Goodman’s riddle is an inductive antinomy:

Green thesis:
All emeralds before 2000 have been green.
So, the emerald seen in 2000 will be green.

Grue antithesis:
All emeralds before 2000 have been grue.
So, the emerald seen in 2000 will be grue.

The opposed predictions share the same argument form and are based on the same data.

My first thought was that the green thesis prevails because “green” is the more basic predicate. After all, green was used to define grue. But Goodman points out that green can be defined in terms of grue and another term, bleen. Let “bleen” mean “blue and observed before 2000 or green and observed during or after 2000.” Goodman can then define “green” as “grue and observed before 2000 or bleen and observed during or after 2000.”
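
A toy formalization (mine, not Goodman’s or Skyrms’s) may make the symmetry vivid. It encodes Skyrms’s definitions with the year 2000 as the cutoff and checks that “green” is recoverable from “grue” and “bleen” in just the way Goodman claims:

```python
CUTOFF = 2000  # the temporal cutoff in Skyrms's formulation

def grue(color, year_observed):
    """Grue: green and observed before 2000, or blue and observed during or after 2000."""
    return (color == "green" and year_observed < CUTOFF) or \
           (color == "blue" and year_observed >= CUTOFF)

def bleen(color, year_observed):
    """Bleen: blue and observed before 2000, or green and observed during or after 2000."""
    return (color == "blue" and year_observed < CUTOFF) or \
           (color == "green" and year_observed >= CUTOFF)

def green_via_grue_and_bleen(color, year_observed):
    """Goodman's reversal: 'green' defined from 'grue', 'bleen', and the observation date."""
    return (grue(color, year_observed) and year_observed < CUTOFF) or \
           (bleen(color, year_observed) and year_observed >= CUTOFF)

# The reconstruction agrees with plain "green" for every color and observation date.
for color in ("green", "blue"):
    for year in (1999, 2000, 2001):
        assert green_via_grue_and_bleen(color, year) == (color == "green")
```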

My second thought was that the green thesis prevails because it did not require a change in the course of nature. But a classmate argued that change was relative. From the perspective of the grue speakers, I was the one postulating a mysterious discontinuity in the year 2000. The grue speakers were expecting grue things to stay grue.

Quieted, I resolved to wait out the antinomy. If, in 2000, the grass came up blue and the bluebell flowers came up green, then I would desert the green party. But if 2000 conformed to my expectations, I would consider the antithesis defeated.

My patience was vindicated. But this disconfirmation of the antithesis is only an insignificant dent in the argument sustaining the new riddle of induction. The antithesis is still a good argument even though I know its conclusion is false. The value of the induction lies in its process of reasoning, not its product. The technique of concocting “gruesome” predicates is readily adapted to arguments that will elude the strategy of patience. Indeed, Goodman never actually specifies the year 2000 in his “definition” of “grue.” He only provides a definition schema that employs a temporal variable.

Quine offers a diagnosis of Goodman’s paradox: induction only works for predicates that correspond to natural kinds. Aristotle believed that just as a butcher cuts at the joints, a scientist classifies in accordance with preexisting divisions. Contrary to a purely conventional view of language, Aristotle thought that some of our vocabulary refers to these natural kinds. Quine thinks this is especially plausible in light of evolutionary theory. Reasoners who sort objects into categories that correspond to natural boundaries will enjoy greater reproductive success. Their predictive success is enhanced artificially by scientific investigation. Part of scientific progress is devising a vocabulary that more closely matches natural divisions. We instinctively prefer “green” over “grue” because “green” comes closer to cutting nature at the joint.

Quine says his solution also solves Carl Hempel’s (1945) raven paradox. Hempel notes that the observation of a black raven is some evidence in favor of “All ravens are black.” Does the observation of a white handkerchief also confirm “All ravens are black”? Here is the case for indoor ornithology:

1.Nicod’s criterion: A universal generalization “All Fs are Gs” is confirmed by “x is an F and a G.”

2.Equivalence condition: Whatever confirms a statement confirms a logically equivalent statement.

3.Therefore, a white handkerchief confirms “All ravens are black.”

“All ravens are black” is equivalent to “All nonblack things are nonravens.” Nicod’s criterion implies that “This is a raven and is black” confirms “All ravens are black.” It also implies that a white handkerchief confirms “All nonblack things are nonravens.” So by the equivalence condition, the white handkerchief must also confirm “All ravens are black.”
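
The equivalence condition trades on a fact of elementary logic: “All ravens are black” and “All nonblack things are nonravens” are true in exactly the same circumstances. A brute-force check over a toy domain (my own illustration, not Hempel’s or Quine’s) bears this out:

```python
from itertools import product

# A toy world assigns two properties (raven? black?) to each of three objects.
def all_ravens_black(world):
    return all(black for raven, black in world if raven)

def all_nonblack_things_nonravens(world):
    return all(not raven for raven, black in world if not black)

# Enumerate every way of distributing the two properties over three objects (64 worlds).
worlds = list(product(product([True, False], repeat=2), repeat=3))
assert all(all_ravens_black(w) == all_nonblack_things_nonravens(w) for w in worlds)
print("The two generalizations agree in every toy world.")
```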

Quine rejects Nicod’s criterion. He restricts confirmation to hypotheses employing natural-kind terms. Therefore, he denies that a white handkerchief confirms “All nonblack things are nonravens.”

THE ANALYTIC/SYNTHETIC DISTINCTION

Daniel Dennett’s The Philosophical Lexicon defines “quine” as a verb: “To deny resolutely the existence or importance of something real or significant.” Quine has quined names, intentions, and the distinction between psychology and epistemology. In 1951 Quine quined the distinction between analytic and synthetic statements. An analytic statement owes its truth-value to the meanings of its words. For instance, “You can receive an unbirthday present on most days of the year” is made true by Humpty Dumpty’s definition of an unbirthday present in Lewis Carroll’s Through the Looking Glass. In contrast, synthetic statements owe their truth-values to the world. “About nine million people share your birthday” is made true by the present population and the law of averages. The analytic/synthetic distinction was first explicitly drawn by Kant. It was almost universally accepted by philosophers until Quine’s article “Two Dogmas of Empiricism.” He made the distinction controversial by dwelling on the unclarities of the boundary between what is made true by meaning and what is made true by contingent facts.

Some readers may suspect that my riddle theory of paradoxes quines Quine’s distinction between veridical and falsidical paradoxes. If paradoxes are questions, they cannot be true or false. They cannot be proved or refuted. After all, riddles can be neither believed nor disbelieved. The only direct kind of absurdity these riddles manifest is their overabundance of good answers. But remember that on my question-based account, answers to paradoxes can be true or false.

Quine’s definition of paradox implies that wherever there is a paradox, there is an argument for an absurd conclusion. The next two sections present counterexamples to Quine’s implication that all paradoxes are absurdities.

RADICAL TRANSLATION

After the United States entered World War II, Quine left his fellowship at Harvard to become a Navy code breaker. He became intrigued by the problem of translation under adverse circumstances. Consider explorers who had to communicate with aborigines:

On their voyage of discovery to Australia a group of Captain Cook’s sailors captured a young kangaroo and brought the strange creature back on board their ship. No one knew what it was, so some men were sent ashore to ask the natives. When the sailors returned they told their mates, “It’s a kangaroo.” Many years later it was discovered that when the aborigines said “kangaroo” they were not in fact naming the animal, but replying to their questioners, “What did you say?”

(The Observer magazine supplement,
November 25, 1973)

Even if apocryphal, anecdotes about radical mistranslations raise the issue of whether we can ever know that a translation is correct. If the mistranslation were systematic enough, no amount of speech or behavior could reveal the error.

Quine (1960) conjugates this skeptical challenge into a semantic paradox. Suppose an anthropologist sees a rabbit run by and the native says, “Gavagai!” The utterance could be translated as (a) Lo, a rabbit; (b) Lo, an undetached rabbit-part; (c) Lo, an instantiation of the universal rabbithood; (d) Lo, a temporal stage of a rabbit. The anthropologist is free to choose any of (a) to (d) as long as he makes adjustments elsewhere in his translation manual. Quine maintains that there are infinitely many translation manuals that account for all the speech and behavior of the natives.

Is the skeptic right about us being ignorant of the correct translation? Not quite, says Quine. He thinks that when there is no possible empirical difference between the manuals, the issue of correctness does not arise. The underdetermination of the hypotheses by the data renders them indeterminate.

This indeterminacy of translation extends to the problem of interpreting the world. There are infinitely many theories that accommodate all the data we will ever possess. Galileo said that nature is a book written in the language of mathematics. Even if this were true, there are infinitely many mathematical functions that can summarize all the data we could ever acquire.
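
A toy illustration of this underdetermination (mine, not Quine’s or Galileo’s): given any function that fits a finite set of observations, adding a multiple of a term that vanishes at every observed point yields a rival theory with identical predictions on the data.

```python
# One theory that fits the observations exactly.
def base_theory(x):
    return 2 ** x

observations = [(0, 1), (1, 2), (2, 4), (3, 8)]

def vanishes_on_data(x):
    """Zero at every observed x, wildly different elsewhere."""
    result = 1.0
    for x_observed, _ in observations:
        result *= (x - x_observed)
    return result

def rival_theory(c):
    """A rival that agrees with the base theory at every observation yet diverges off the data."""
    return lambda x: base_theory(x) + c * vanishes_on_data(x)

# Three of infinitely many empirically equivalent rivals: identical predictions on the data.
for c in (0, 1, -3.5):
    theory = rival_theory(c)
    print([theory(x) for x, _ in observations])   # [1.0, 2.0, 4.0, 8.0] every time
```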

“What is the translation of ‘Gavagai!’?” has infinitely many rival answers. According to Quine, the problem is that infinitely many of these are equally good answers. Quine’s paradox of radical translation is a counterexample to his own definition of paradox. In addition to showing that absurdity is inessential to paradox, the paradox of radical translation shows that the paradox can be free of arguments and conclusions. “What is the translation of ‘Gavagai’?” has answers obtained by translation, not conclusions derived by arguments.

THE ODD UNIVERSE

Like most logicians, Quine treasures simplicity. He hates to postulate anything more than is needed to explain the data. Quine politely characterizes his preference as a “taste for desert landscapes.” But more outspoken lovers of simplicity warn that you take a risk whenever you postulate a new entity. Minimizing postulates minimizes error. This is especially plausible when unprecedented entities are in question. Abstract objects are discontinuous with what we know best. Safety would be served by avoiding them. Indeed, Quine once co-authored a defense of nominalism with Nelson Goodman. Nominalists reject abstract entities—they think that everything has a position in space or time. Their diet is aimed against philosophical excesses such as Plato’s forms. However, nominalism also winds up prohibiting entities that scientists help themselves to—numbers, geometrical points, sets, etc.

Quine soon felt the pinch. He became persuaded that sets are indispensable for mathematics. To retain mathematics, he relented and swallowed sets. Quine is a pragmatist. Sets earned their way into Quine’s metaphysics by being useful. Principia Mathematica teaches that sets plus logic are enough to reconstruct all of mathematics. In turn, mathematics is essential to theoretical physics. Science and mathematics set the standard for rationality, so Quine feels entitled to believe in whatever is indispensably postulated by scientists. For Quine, metaphysics is an afterthought of science.

Meanwhile, Nelson Goodman kept sharpening the knife of nominalism. In 1951 he published The Structure of Appearances. This book contains a logic of parts and wholes. Goodman denies that there are sets. Instead, there are fusions built up from smaller things. Unlike a set, a fusion has a position in space and time. You can touch a fusion. I’m a fusion. So are you. Goodman’s “calculus of individuals” says that there are only finitely many atomic individuals and that any combination of atoms is an individual. Objects do not need to have all their parts connected, for instance, Alaska and Hawaii are parts of the United States of America. Goodman does not let human intuition dictate what counts as an object; he also thinks that there is the fusion of his ear and the moon.

In a seminar Goodman taught at the University of Pennsylvania around 1965, John Robison pointed out that The Structure of Appearances implies an answer to “Is the number of individuals in the universe odd or even?” Since there are only finitely many atoms and each individual is identical to a combination of atoms, there are exactly as many individuals as there are combinations of atoms. If there are n atoms, there are 2^n - 1 combinations of atoms. No matter which number we choose for n, 2^n - 1 is an odd number. Therefore, the number of individuals in the universe is odd!

The exclamation point is not for the oddness per se. Aside from those who think the universe is infinite, people agree that the universe contains either an odd number of individuals or an even number of individuals. What they find absurd is that there could be a proof that the number of individuals is odd.
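
A quick enumeration (my own illustration, not Robison’s) shows why the parity is forced: the nonempty combinations of n atoms number 2^n - 1, which is odd for every positive n.

```python
from itertools import combinations

def fusions(atoms):
    """Every nonempty combination of atoms counts as an individual in Goodman's calculus."""
    return [combo for size in range(1, len(atoms) + 1)
            for combo in combinations(atoms, size)]

for n in range(1, 11):
    individuals = fusions(list(range(n)))
    assert len(individuals) == 2 ** n - 1   # one individual per nonempty combination of atoms
    assert len(individuals) % 2 == 1        # and 2^n - 1 is odd for every positive n
print("parity check passed for n = 1 through 10")
```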

“Is the number of individuals in the universe odd or even?” illustrates the possibility of one good answer being too many. Our expectation is that this question is unanswerable. The lone good answer confounds beliefs about what arguments can accomplish. Here the excess is a top-down judgment. (More commonly, the overabundance is a bottom-up verdict: a good answer clashes with another good answer.)

INTERESTING NUMBERS

Our meta-argumentative expectations can also be frustrated by how something is proved rather than the sheer fact that it is proved. Consider the question of whether all natural numbers are interesting. When G. H. Hardy visited the mathematical genius Ramanujan as he lay dying in a sanitarium, he was at a loss for what to say. So Hardy mentioned that the taxi that he had hired to take him to the sanitarium had the rather dull number 1729. “Oh no, Hardy. It is a captivating one. It is the smallest number that can be expressed in two different ways as a sum of two cubes,” replied Ramanujan. (1,729 = 1^3 + 12^3 = 10^3 + 9^3). François Le Lionnais’s Nombres remarquables shows that many apparently dull numbers are interesting. The first integer for which he can find no remarkable property is 39. Le Lionnais muses that this lack of a remarkable property makes 39 interesting after all. Just as 81 is interesting because it is the smallest square that can be decomposed into a sum of three squares (9^2 = 1^2 + 4^2 + 8^2), the number 39 is interesting because it is the smallest uninteresting integer.
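
Ramanujan’s claim about 1729 is easy to verify by brute force. The sketch below (my own, for illustration) searches for the smallest number expressible as the sum of two positive cubes in two different ways:

```python
from collections import defaultdict

# Map each sum of two positive cubes to the pairs (a, b) with a <= b that produce it.
sums_of_two_cubes = defaultdict(set)
for a in range(1, 50):
    for b in range(a, 50):
        sums_of_two_cubes[a ** 3 + b ** 3].add((a, b))

smallest = min(n for n, pairs in sums_of_two_cubes.items() if len(pairs) >= 2)
print(smallest, sorted(sums_of_two_cubes[smallest]))   # 1729 [(1, 12), (9, 10)]
```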

Mathematicians have generalized Le Lionnais’s comment into a proof that all natural numbers are interesting. If there is an uninteresting number, then there must be a first uninteresting number. But being the first uninteresting number would itself be an interesting property. Therefore, all numbers are interesting.

Maybe each natural number could have some surprising feature that makes it interesting. Often, what appears to be a dull number has proven to be “captivating.” Maybe that is how it always is. There might even be some fancy proof to show that all appearances of dullness are illusory. But it seems strange that we could prove that each and every natural number is interesting by virtue of the least number theorem (which says that if any natural number has a property, then there is a least number that has that property). The argument seems too simple.

Consider the plight of ambivalent mathematicians who independently believe the conclusion of the simple argument. Since the conclusion implies each of the premises, they think the simple argument is valid and believe the premises and the conclusion. Despite granting that the argument is sound, they have difficulty believing that the premises give them extra warrant for the conclusion.

ARE PARADOXES SETS?

At least since Epictetus, many philosophers have said that a paradox is a set of propositions that are individually plausible and yet jointly inconsistent. Notice that this set-based definition of paradox gives us a lower count of paradoxes than definitions that identify paradoxes with arguments or conclusions. Corresponding to a set with n members will be n arguments with the properties that Quine deems sufficient for paradox. For the negation of any member of the set is the conclusion of an argument containing the remaining members as premises. Since members of the original set are jointly inconsistent, the argument will be valid. And since the members are individually plausible, the audience will also find each premise of the argument persuasive.

This convertibility from sets to arguments only holds when the set is finite. If a set contains infinitely many propositions, then an argument does not result when one proposition is negated and the rest are used as premises. For an argument can only have finitely many premises.

The considerations lying behind the jumble arguments of chapter 8 show that even finitely large sets create a problem for the set-based definition of paradox. Each of the first 10,000 assertions in this book is believed by me, but I also think they are jointly inconsistent. Yet that set is not really a paradox.

Nicholas Rescher has developed the set-theoretic conception of paradox with encyclopedic systematicity. He packs all of philosophy into the briefcases of paradox resolution.

To meet this corporate goal, Rescher imposes further requirements on the structure of paradoxes. He says that each member of the paradox must be self-consistent. (2001, 8) That way, the rejection of any member of the set is enough to restore consistency. Rescher defends this principle of self-consistency with the generalization that no contradiction is plausible.

The plausibility of contradictions is made poignant by Rescher’s own violation of the consistency requirement. Consider a barber who shaves all and only those who do not shave themselves. Does the barber shave himself? Rescher formulates the set with the following as its first element: “There is—or can be—a barber who answers the specifications of the narrative.” (2001, 144) Rescher says that this member of the set should be rejected: “there is not and cannot be a barber who answers to the specified conditions.” Rescher is definitely correct; it is a theorem of logic that nothing can bear a relation to all and only the things that do not bear it to themselves. (Thomson 1962, 104) But this means that the Barber paradox’s “aporetic cluster” contains a contradiction (not a mere joint inconsistency as Rescher requires).

All of the direct answers to “Does the barber shave himself?” are strict contradictions. Furthermore, they are indivisible contradictions. The contradiction cannot be divided into self-consistent propositional components in the way “P and not P” can be segregated into a self-consistent P and a self-consistent not P.

Logical paradoxes are counterexamples to the principle that logic alone never implies a solution to a paradox. When a member of the paradox is a logical falsehood, logic does dictate what must be rejected. Since the inference to a logical truth is premiseless, the conclusion cannot be avoided by rejecting a premise. These paradoxes can be solved individualistically, without regard for the larger context of other beliefs. Holism about paradoxes does not hold universally.

PARADOXES WITHOUT PREMISES

Unlike Rescher, Quine can allow for the possibility of paradoxes that are composed of a single proposition. As evident from the argument forms of reductio ad absurdum and conditional proof, there are arguments that support their conclusions solely through inference rules. The need to distinguish between inference rules and premises was made vivid by a dialogue published by Lewis Carroll (1895). Achilles tries to persuade the Tortoise with a syllogism:

(A) Things that are equal to the same are equal to each other.

(B) The two sides of this Triangle are things that are equal to the same.

(Z) The two sides of this Triangle are equal to each other.

The affable Tortoise will grant Achilles any premise he wishes. But the Tortoise insists that Achilles securely link the premises to the conclusion via a further premise:

(C) If A and B are true, Z must be true.

When Achilles adds (C) as an extra premise to (A) and (B), he finds that the Tortoise is still not willing to grant (Z). The Tortoise does not doubt the premises but wants a guarantee that the new set of premises really implies (Z). Accordingly, Achilles adds a second supplementary premise:

(D) If A and B and C are true, Z must be true.

Once again, the Tortoise grants all the premises but insists on a guarantee that the expanded set really implies the conclusion. And once again, Achilles bestows the desired premise: “If A and B and C and D are true, Z must be true.” Achilles will never catch up to the Tortoise’s incremental requests for extra premises.

Notice that what is puzzling here is the sequence of arguments, not any particular argument in the sequence. Why is the Tortoise being unreasonable when he cautiously asks for an extra premise to cement the relationship between the previous premise set and the conclusion? The common solution is to deny that any extra premises are needed to link the premises and the conclusion. They are instead linked by an inference rule.

The need to distinguish between premises and inference rules is compatible with their interchangeability. An axiom stating that P is the case can be recast as an inference rule that lets us introduce P into a proof without any premises. Carroll’s puzzle does show that a system that contains just axioms cannot have any deductions. A proof system must have some inference rules. However, a system need not have any axioms. Indeed, axiom-free systems of natural deduction are practical alternatives in logic instruction.
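
A toy proof checker (my own sketch, not Carroll’s or Quine’s) makes the moral concrete: the link between premises and conclusion is a rule we apply, not a further premise we assert, and an axiom can be recast as a rule that takes no inputs.

```python
# Modus ponens as an inference rule: from P and "P -> Q" already derived, add Q.
def modus_ponens(lines):
    derived = set()
    for sentence in lines:
        if " -> " in sentence:
            antecedent, consequent = sentence.split(" -> ", 1)
            if antecedent in lines:
                derived.add(consequent)
    return derived

# An axiom recast as a premiseless rule: it contributes "A -> Z" unconditionally.
def axiom(lines):
    return {"A -> Z"}

lines = {"A"}                          # the Tortoise grants this premise
for _ in range(3):                     # a few passes of rule application suffice
    for rule in (axiom, modus_ponens):
        lines = lines | rule(lines)
print("Z" in lines)                    # True: no premise "if A and A -> Z, then Z" was needed
```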

Formally, premiseless paradoxes are surprises deduced from the empty set. If validly deduced, they are veridical paradoxes. The most celebrated example is Kurt Gödel’s first incompleteness theorem: a consistent proof system that is strong enough to generate elementary number theory must be incomplete. There could be premiseless antinomies: two accepted inference rules might lead to opposite conclusions. (This would elegantly demonstrate that at least one of the accepted rules must actually be invalid.)

The interchangeability of inference rules and premises shows that the distinction between a substantive mistake and a mistake in reasoning is flexible. Rescher is being arbitrary when he says “Paradox is the product not of a mistake in reasoning but of a defect of substance: a dissonance of endorsements.” (2001, 6-7) Quine is being equally arbitrary when he says that all falsidical paradoxes involve fallacious rules of inference. Often, a mistake that is characterized as a myth (a commonly believed falsehood) can be equally well characterized as a fallacy (an illegitimate but commonly applied inference rule).

As a logician, Quine was the victim of several unpremised paradoxes. In 1937 he published “New Foundations for Mathematical Logic.” His system was widely regarded as having advantages over previous systems. However, it was soon discovered to be too weak—in particular, there was no way to derive the axiom of infinity (which affirms the existence of infinite collections). In 1940 Quine strengthened the foundations in his book Mathematical Logic. But Barkley Rosser (1942) proved that Quine’s supplemented system implies the Burali-Forti paradox. This demonstrated that Quine’s book had jointly inconsistent axioms. When logical axioms are jointly inconsistent, then at least one of them is a contradiction. Since Quine chose only plausible axioms, Quine knew firsthand that there are plausible contradictions. (Quine was rescued from the quandary by Hao Wang [1950]. Wang surgically replaces a cancerous axiom with a consistent but industrious alternative.)

GRADUALISM ABOUT PARADOXES

Before Quine challenged the analytic-synthetic distinction, there was a tendency to regard philosophy as being qualitatively different from the sciences. Scientists focus on synthetic statements whereas philosophers focus on analytic statements. Scientists explore reality with observations and experiments. Philosophers map our conceptual scheme through a logical study of semantics.

Quine agreed that philosophers are more apt to use the strategy of semantic ascent: they love to switch the topic from the things that puzzle us to the words we use in describing those puzzling things. “Don’t talk about Truth! Talk about ‘true’!” This strategy works when the words are better understood than the things. Such will be the case when we lack the standard techniques for solving problems that constitute each science. But Quine thinks that the use of semantic ascent is only a rough mark of philosophy. The physicist Albert Einstein engaged in semantic ascent when trying to resolve anomalies about the nature of simultaneity. And metaphysicians sometimes appeal to empirical results to solve philosophical problems.

Quine has fostered this naturalistic turn in philosophy. He maintains that philosophy differs from science in degree rather than kind. Philosophy should heed biology just as biology heeds physics and vice versa. Philosophy takes the further step of trying to organize the results of science into an overall view of the universe. But as we have seen with Brandon Carter’s case for human extinction, cosmologists also take a wide perspective.

Lord Kelvin contrasted the unclarity of metaphysics with the rigor of physics by claiming that “In science there are no paradoxes.” But if you enter “paradox” in a search engine for scientific journals, you get many references to scientific paradoxes. Many of the scientific paradoxes have been solved. But the same can be said about philosophical paradoxes such as those made famous by Zeno. Philosophical progress tends to be self-effacing because, over time, its solutions are incorporated as results in other fields. “Philosophy” is an indexical term akin to “here,” “yesterday,” “news.” Its meaning shifts to cover issues that cannot (yet) be profitably delegated to the sciences.

Philosophy is like an expedition to the horizon. Under one interpretation, the venture is hopeless. We cannot reach the destination because what counts as the horizon constantly shifts. But becoming a pessimist on the basis of this tautology is like adopting a here-and-now philosophy on the strength of “Tomorrow never comes.”

We can reach the horizon when the meaning of philosophy is rooted. Understandably, we look at the history of philosophy from the vantage point of the present. We are impressed by the resiliency of its issues and the broken ambitions of past thinkers. But an accurate measure of progress requires the adoption of an historical perspective. By this, I do not mean simply looking at the past. I mean looking from the past.

The twenty-first-century conception of philosophy will itself become a tonic to the vacuous pessimism of future generations. Given that I have correctly gauged the merits of Carter’s doomsday argument, some philosopher in the distant future will find this book aging away in the remote corner of a library. As he browses, he will be amazed by what philosophers back in 2003 regarded as philosophy. He will know that many of the “paradoxes” discussed in this book are now definitively answered by physics or mathematics (or by some hitherto unconceived field). This future reader will wonder why philosophers tried to answer those questions. As he reads this final sentence, I remind him that he stands at a new horizon, inaccessible to the author of this book.