MYTH 27

THAT A CLEAR LINE OF DEMARCATION HAS SEPARATED SCIENCE FROM PSEUDOSCIENCE

Michael D. Gordin

Before a hypothesis can be classified as scientific, it must link to a general understanding of nature and conform to a cardinal rule. The rule is that the hypothesis must be testable. It is more important that there be a means of proving it wrong than that there be a means of proving it correct. On first consideration this may seem strange, for usually we concern ourselves with verifying that something is true. Scientific hypotheses are different. In fact, if you want to determine whether a hypothesis is scientific or not, look to see if there is a test for proving it wrong. If there is no test for its possible wrongness, then it is not scientific.

—Paul Hewitt, Conceptual Physics (2002)

Quite recently, a new myth has begun to appear in science textbooks. Almost all lower-level textbooks in general science include a section detailing “the scientific method” (see Myth 26), but now you also find explicit discussions of what philosophers have called “the demarcation problem”: how to distinguish science from pseudoscience. Textbooks such as Paul Hewitt’s Conceptual Physics consider the problem to have an obvious solution—in order for a theory to be considered scientific, we apply a bright-line test of “falsifiability.” Whereas in earlier generations the topic seems to have remained implicit, today falsifiability has crowded out all possible contenders for demarcation and is considered an essential lesson for students.

Teaching students how to distinguish “real science” from impostors can reasonably be understood as the central task of science pedagogy. Every student in public and private schools takes several years of science, but only a small fraction of them pursue careers in the sciences. We teach the rest so much science in the hope that they will appreciate what it means to be scientific—and, ideally, become scientifically literate and apply some of those lessons in their lives.1 For such students, the myth of a bright line of demarcation is essential.

The “demarcation problem” received its name in interwar Europe from philosopher Karl Popper (1902–1994), who plays an outsized role in the account that follows, but it has a venerable history—or histories. There is not one demarcation problem but several: how to distinguish correct from incorrect knowledge; how to differentiate science from all those domains (art history, theology, gardening) that are “nonscience”; and how to set science apart from things that look an awful lot like science but for some reason don’t quite fit. It is this last set of supposed impostors, conventionally designated by their opponents as “pseudoscience,” that is the target of the educational myth, which really only emerged explicitly in the United States in the 1980s. Both the timing and the invocation of “falsifiability” stem from the intersection of the philosophy of science with the legal debates over the teaching of creationism in the public schools.

The question of demarcation has been a central preoccupation since the earliest days of science. For example, in the fifth-century BCE Hippocratic text “On the Sacred Disease,” the author attacks “the sort of people we now call witch-doctors, faith-healers, quacks and charlatans,” who saddled this moniker on the perfectly explicable and regular disease that moderns would come to call epilepsy.2 Since then, the attempts by philosophers—many of whom engaged in activities we would unhesitatingly consider “science” today—to cordon off science from cuckoo’s eggs have been legion and often quite ingenious.3 They were all failures.

The problem of separating science from pseudoscience is devilishly difficult. One essential characteristic of all those doctrines labeled “pseudosciences” is that they very much resemble sciences, and so superficial characteristics fail to identify them.4 Nor can we define “pseudoscience” as a body of incorrect doctrines, because many theories that we now consider wrong—ether physics, arguments from design—were at one point unquestionably part of science (see Myth 4), which implies that many of the things we now consider to be correct science will eventually be discarded as incorrect. Are advocates of those ideas today pseudoscientific? It seems absurd to say so. The movement back and forth across the border is surprisingly vigorous, and the history of science is littered with fascinating cases (phrenology, mesmerism, acupuncture, parapsychology, and so on).5

As early as 1919, the young Popper “wished to distinguish between science and pseudo-science; knowing very well that science often errs, and that pseudo-science may happen to stumble on the truth.”6 He found earlier attempts largely unsatisfactory, mostly because they considered true science to be knowledge claims that were confirmed by empirical evidence. This would never do. Three self-proclaimed “scientific” theories popular among Viennese intellectuals—the historical materialism of Karl Marx (1818–1883), the psychoanalysis of Sigmund Freud (1856–1939), and the individual psychology of Alfred Adler (1870–1937)—never lacked for confirming instances; in fact, it seems that every case presented to them could be interpreted as confirmation. Popper wondered what it would take to invalidate one of these doctrines, and was struck by the experimental confirmation of the general theory of relativity of Albert Einstein (1879–1955). During a 1919 eclipse expedition, Arthur Eddington (1882–1944) measured the bending of starlight by the gravitational field of the sun, thereby confirming the theory. Einstein had declared in 1915, when he published the theory, that if light did not exhibit the correct amount of curvature, then his theory would be incorrect. “Now the impressive thing about this case,” Popper observed, “is the risk involved in a prediction of this kind.” To have a chance at being right, one must gamble at being wrong—that was what it meant to be scientific. As he concluded: “One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.”7

That is how falsifiability is usually presented, but this description truncates most of Popper’s reasoning. Popper claimed that he developed these ideas in 1919, right after hearing news of the eclipse expedition, but he only coined the term “problem of demarcation” for it in 1928 or 1929, and he first unveiled the full theory in a 1953 lecture on contemporary British philosophy, sponsored by the British Council, which he delivered at Peterhouse, Cambridge. (Popper had left Vienna in 1937 for New Zealand and eventually the United Kingdom to escape the rise of National Socialism.)8 This history matters for two reasons: the delay further solidified the exaltation of Einstein and the denigration of Freud, making Popper sound prescient; and the lecture was delivered in English, a few years before Popper’s most significant contribution to epistemology, Logik der Forschung (1934), was translated as The Logic of Scientific Discovery (1959). In the context of Popper’s full theory, falsifiability has a number of features that are rather unattractive to science educators.

For starters, Popper did not believe in truth. The bulk of his essay on falsifiability consists of a critique of induction, the famous problem posed by David Hume (1711–1776). For Popper, there are no “natural laws” and nothing like “truth” in science. Instead, we have a collection of statements that have not yet been proven false. While the boldness of this position is part of its appeal, its radical skepticism is not, and it has thus been stripped out of typical presentations of the theory.

But even the reduced presentation of Popper’s falsificationism raises serious concerns: namely, it doesn’t work. Recall that Popper had (justified) concerns about how we would ever know that a theory had been confirmed; regrettably, adding a minus sign to a confirming instance does not make epistemological determination any easier. If a negative result sufficed to falsify a theory, then high school students in lab classes would have falsified pretty much everything we believe we know about the natural world. In addition, the minimum we expect of a demarcation criterion is that it group those activities we generally consider sciences in one camp, and set those commonly considered pseudosciences in another. Popper fails here, precisely because science is such a heterogeneous activity, with various methods and practices. For example, the “historical” natural sciences, such as evolutionary biology and geology—where we cannot “run the tape again”—fare poorly under the falsification test.

The situation with inclusion is even worse, as stated most forcefully in a 1983 article by the philosopher of science Larry Laudan (b. 1941):

[Popper’s criterion] has the untoward consequence of countenancing as “scientific” every crank claim that makes ascertainably false assertions. Thus flat Earthers, biblical creationists, proponents of laetrile or orgone boxes, Uri Geller devotees, Bermuda Triangulators, circle squarers, Lysenkoists, charioteers of the gods, perpetuum mobile builders, Big Foot searchers, Loch Nessians, faith healers, polywater dabblers, Rosicrucians, the-world-is-about-to-enders, primal screamers, water diviners, magicians, and astrologers all turn out to be scientific on Popper’s criterion—just so long as they are prepared to indicate some observation, however improbable, which (if it came to pass) would cause them to change their minds.9

Laudan argued that any bright-line semantic criterion à la falsifiability would necessarily fail: demarcation was not a soluble problem. This position has been subjected to furious philosophical counterattack, yet even his critics no longer seek bright lines. Rather, they produce checklists of criteria that render a theory scientific—analogous to the Diagnostic and Statistical Manual of Mental Disorders (DSM), ubiquitous in psychiatry—or groupings of “family resemblances” (following Popper’s nemesis Ludwig Wittgenstein [1889–1951]) among pseudoscientific doctrines.10 It is almost impossible to find a philosopher of science today who thinks that Popper’s criterion is the ultimate solution to the demarcation problem.

Then why do we persistently encounter this myth? The answer has less to do with philosophy or with science than with the law. Starting in the 1960s, a series of state governments in the United States passed statutes mandating “equal time” in biology courses for “evolution science” (neo-Darwinian natural selection) and “creation science” (an updated flood geology offering a scientific account that accorded closely with the creation story described in Genesis). Opponents countered that these laws introduced religion into the public schools, violating the constitutionally mandated separation of church and state. As one case from Arkansas reached the federal courts, the testimony of many scientists as well as philosophers and historians of science was solicited to determine the validity of the defense that creation science was a legitimate scientific hypothesis and therefore not “religion.” Philosopher of science Michael Ruse (b. 1940) testified about several different demarcation criteria that would exclude scientific creationism, but one in particular impressed Judge William Overton (1939–1987) in his January 5, 1982, decision in McLean v. Arkansas Board of Education. In his five-point list of what makes a doctrine a “science,” the final one reads: “(5) It is falsifiable (Ruse and other science witnesses).”11 Thus, Ruse’s brief sketch of Popper came to serve as a legal metric to determine whether something is scientific.

Although many philosophers of science were happy with the outcome of McLean, Ruse’s arguments were extensively criticized, not least in Laudan’s article cited earlier. Some of those critiques have stuck, and when an updated version of creationism—known as intelligent design (ID)—reached the Pennsylvania courts in 2005, Judge John Jones’s (b. 1955) decision included an extensive discussion of what constituted “science” but mentioned falsifiability only twice: once in a paraphrase of Overton’s decision, and once in describing how biochemist Michael Behe (b. 1952) redefined the blood-clotting mechanism to evade falsification. Instead of endorsing Popper, legal precedent now enshrines peer-reviewed publications in mainstream journals as the gold standard for demarcation.12 We have moved from epistemology to sociology.

We have no sharp criterion for a simple reason: mimesis. Any time a test is proposed, fringe advocates will strive to meet it, precisely because they believe that they are pursuing proper science and agree about the need for demarcation. Creationists make plenty of falsifiable statements, for example, and now they have peer-reviewed journals. We end up with a symmetric race between the demarcators and those they wish to exclude.13 And since the demarcation criteria have changed over time, the people accused by establishment scientists of being “pseudoscientists” have little in common other than their shared demonization.

Yet demarcation remains essential, given the enormously high political stakes associated with climate-change denial and other antiregulatory fringe doctrines.14 As sociologist Thomas Gieryn (b. 1950) has noted, although demarcation is a frustrating task for philosophers, for scientists it is an everyday matter: deciding not to read this article, to ignore that email, to dismiss a website. They demarcate through socially trained judgment.15 They do not need the myth; it is for the rest of us, who graduate from high school science classes into the ranks of registered voters.