8    Science Gone Sideways: Denialists, Pseudoscientists, and Other Charlatans

We turn now from fraud—which is a matter of accepting the standards of science, then intentionally violating them—to the case of denialists and pseudoscientists, who may misunderstand the standards of scientific evidence, may not care about them at all, or, to the extent that they do care, may not care enough to modify or abandon their ideological beliefs.

Many scientists have found it incredible in recent years that their conclusions about empirical topics are being questioned by those who feel free to disagree with them based on nothing more than gut instinct and ideology. This is irrational and dangerous. Denialism about evolution, climate change, and vaccines has been stirred up in recent years by those who have an economic, religious, or political interest in contradicting certain scientific findings. Rather than merely wishing that particular scientific results weren’t true, these groups have resorted to a public relations campaign that has made great strides in undermining the public’s understanding of and respect for science. In part, this strategy has consisted of attempts to “challenge the science” by funding and promoting questionable research—which is almost never subject to peer review—in order to flood news outlets with the appearance of scientific controversy where there is none. The result has been a dangerously successful effort to subvert the credibility of science.

As we saw in the last chapter, the scientific attitude is sometimes betrayed even by scientists. Yet a proportionally greater threat may arise from those who are outside science: those who either willfully or unwittingly misunderstand the process of science, who are prepared to deny scientific results that do not jibe with their ideological beliefs, who are only pretending to do science in order to further their pet theories, who cynically exploit others’ ignorance while they profit, or who fool themselves by being overly gullible. It is important to realize, however, that these errors can be committed both by those who are lying to others (who are doing false science or are rejecting science falsely) and by those who are being lied to (who have not bothered to educate themselves in the skills necessary to form well-warranted beliefs). Whether conscious of their betrayal of good scientific principles or not, in an age where climate change deniers routinely use Facebook and Twitter to spread their benighted ideology, and intelligent design theorists have a website to share their cherry-picked doubts about evolution, we are all responsible for any distance between our reliance on the fruits of science and our sometimes woefully misinformed conception of how scientific beliefs are formed.

The two most pernicious forms of cheating on scientific principles by those who are outside science are denialism and pseudoscience. Although both will be discussed at greater length later in this chapter, let me now define denialism as the refusal to believe in well-warranted scientific theories even when the evidence is overwhelming.1 The most common reason for this is when a scientific theory conflicts with someone’s ideological beliefs (for instance, that climate change is a hoax cooked up by liberals), so they refuse to look at any distasteful evidence. Pseudoscience, by contrast, is when someone seeks the mantle of science to promote a fringe theory about an empirical matter (such as intelligent design), but refuses to change their beliefs even in the face of refutatory evidence or methodological criticism by those who do not already believe in their theory. As we will see, it is difficult to draw a clear line between these practices because they so often overlap in their tactics, but both are united in their repudiation of the scientific attitude.

Everyone has a stake in the justification of science. If our beliefs are being manipulated by those who are seeking to deceive us—especially given that virtually all of us are prewired with cognitive biases that can lead to a slippery slope of gullibility and self-deception—the consequences for scientific credibility are enormous. We may feel justified in believing what we want to believe about empirical matters (falsely judging that if scientists are still investigating there must be a lack of consensus), but if we do this then who do we have to blame but ourselves if the planet is nearly uninhabitable in fifty years? Of course, this is to oversimplify an enormously complex set of psychological circumstances, for there are many shades of awareness, bias, intentionality, and motivation, all of which bear on the nature of belief. As Robert Trivers masterfully demonstrates in his previously cited book The Folly of Fools, the line between deception and self-deception may be thin. Just as scientific researchers may sometimes engage in pathological science due to their own delusions, those who engage in denialism or pseudoscience may believe that they are actually living up to the highest standards of the scientific attitude.

But they are not.2 And neither are those who uncritically accept these dogmas, never bothering to peek over the wall of willful ignorance at the results of good science. In this chapter, I will explore the mistakes of both the liars and those who are overly credulous. For as I’ve said, in a day and age where scientific results are at our fingertips, we all bear some responsibility for the warrant behind our empirical beliefs. And, for science, it is a problem either way. Whether someone has lit the fire of denialism about climate change or is merely stopping by to warm their hands, it is still repudiation of a core value of science.3

In the last chapter, we explored what happens when a scientific researcher cheats on the scientific attitude. In this chapter, we will consider what happens when those who are not doing science—whatever their motive—peddle their convictions to the larger community and cast doubt on the credibility of well-warranted scientific beliefs.

Ideology and Willful Ignorance

Scientists presumably have a commitment to the scientific attitude, which will influence how they formulate and change their beliefs based on empirical evidence. But what about everyone else? Fortunately, many people have respect for science. Even if they do not do science themselves, it is fair to say that most people have respect for the idea that scientific beliefs are especially credible and useful because of the way they have been vetted.4 For others, their primary allegiance is to some sort of ideology. Their beliefs on empirical subjects seem based on fit not with the evidence but rather with their political, religious, or other ideological convictions. When these conflict—when the conclusions of science tread on some sacred topic on which people think that they already know the answer (e.g., whether prayer speeds up healing, whether ESP is possible)—this can result in rejection of the scientific attitude.

There has always been a human tendency to believe what we want to believe. Superstition and willful ignorance are not new to the human condition. What is new in recent years is the extent to which people can find a ready supply of “evidence” to support their conspiracy-theory-based, pseudoscientific, denialist, or other outright irrational beliefs in a community of like-minded people on the Internet. The effect that group support can have in hardening one’s (false) convictions has been well known to social psychology for over sixty years.5 In these days of 24/7 partisan cable “news” coverage, not to mention Facebook groups, chat rooms, and personal news feeds, it is increasingly possible for those who wish to do so to live in an “information silo,” where they are seldom confronted by inconvenient facts that conflict with their favored beliefs. In this era of “fake news” it is possible for people not only to avoid views that conflict with their own, but almost to live in an alternative reality, where their preferred views are positively reinforced and opposing views are undermined. Thus political and religious ideologies—even where they tread on empirical matters—are increasingly “fact free” and reflect a stubborn desire to shape reality to fit them.

To say that this is a dangerous development for science would be an understatement. In fact, I think it is so dangerous that I wrote an entire book—Respecting Truth: Willful Ignorance in the Internet Age—on the topic of how these forces have conspired to create an increasing lack of respect for the concept of truth in recent years.6 I will not repeat those arguments here, but I would like to trace out their implications for the debate about the distinctiveness of science.

One important topic here is the role of group consensus. We have already seen that in science, consensus is reached only after rigorous vetting and comparison with the evidence. Community scrutiny plays a key role in rooting out individual error. In science, we look to groups not for reinforcement of our preexisting beliefs, but for criticism. With ideological commitments, however, one often finds little appetite for this, and people instead turn to groups for agreement.7 Yet this feeds directly into the problem of “confirmation bias” (which we’ve seen is one of the most virulent forms of cognitive bias, where we seek out evidence that confirms our beliefs rather than contradicts them). If one wants to find support for a falsehood, it is easier than ever to do so. Thus, in contrast to the way that scientists use groups as a check against error, charlatans use them to reinforce prejudice.

Sagan’s Matrix

In his influential book The Demon-Haunted World: Science as a Candle in the Dark,8 Carl Sagan makes the observation that science can be set apart from pseudoscience and other chicanery by two simple principles: openness and skepticism. As Sagan puts it:

The heart of science is an essential balance between two seemingly contradictory attitudes—an openness to new ideas, no matter how bizarre or counter-intuitive, and the most ruthless skeptical scrutiny of all ideas, old and new.9

By “new ideas,” Sagan means that scientists must not be closed to the power of challenges to their old way of thinking. If scientists are required to base their beliefs on evidence, then they must be open to the possibility that new evidence may change their minds. But, as he goes on to observe, one must not be so open to new ideas that there is no filter. Scientists cannot be gullible and must recognize that “the vast majority of ideas are simply wrong.”10 Thus these two principles must be embraced simultaneously even while they are in tension. With “experiment as the arbiter,” a good scientist is both open and skeptical. Through the critical process, we can sort the wheat from the chaff. As Sagan concludes, “some ideas really are better than others.”11

Some may complain that this account is too simple, and doubtless it is, but I think it captures an essential idea behind the success of science. Yet perhaps the best measure of the depth of Sagan’s insight is to examine its implication for those areas of inquiry that are not open or not skeptical. Let us now dig a little deeper into denialism and pseudoscience. Although he does not discuss denialism per se, Sagan discusses pseudoscience at length, allowing for an intriguing comparison. What is the difference between denialism and pseudoscience?

Sagan says that pseudoscientists are gullible, and I think that most scientists would be hard pressed to disagree.12 If one goes in for crystal healing, astrology, levitation, ESP, dowsing, telekinesis, palmistry, faith healing, and the like,13 one will find little support from most scientists. Yet virtually all of these belief systems make some pretense of scientific credibility through seeking evidential support. What is the problem? It is not that they are not “open to new ideas,” but that in some ways they are “too open.”14 One should not believe something without applying a consistent standard of evidence. Cherry picking a few favorable facts and ignoring others is not good scientific practice. Here Sagan cites favorably the work of CSICOP (Committee for the Scientific Investigation of Claims of the Paranormal—now the Committee for Skeptical Inquiry), the professional skeptical society that investigates “extraordinary” beliefs. If science really is open, such claims deserve a hearing.15 But the problem is that in virtually every case in which real skeptics have investigated such extraordinary claims, the evidence has not held up.16 They are revealed as pseudoscientific not because they are new or fantastical, but because they are believed without sufficient evidence.

This way of looking at pseudoscience allows for a fascinating contrast with denialism. Although, as noted, Sagan does not make this comparison, one wonders whether he might approve of the idea that the problem with denialism is not that it is not skeptical enough, but that it is insufficiently open to new ideas.17 When you’re closed to new ideas—most especially to any evidence that might challenge your ideological beliefs—you are not being scientific. As Sagan writes, “If you’re only skeptical, then no new ideas make it through to you. You never learn anything.”18 Although one desires a much more developed sense of scientific skepticism than Sagan offers here (which I will soon provide), his notion does at least suggest what may be wrong with denialism. The scientific attitude demands that evidence counts because it might change our thinking. But for denialists, no evidence ever seems sufficient for them to change their minds. Through embracing the scientific attitude, science has a mechanism for recovering from its mistakes; denialism does not.

So are pseudoscience and denialism more similar or different? I would argue that they have some similarities (and that their demographics surely overlap) but that it is enlightening to pursue the question of their purported differences. Later in this chapter, I will explore these notions in more depth, but for now, as a starting point, let’s offer a 2 × 2 matrix of what might be regarded as a Sagan-esque first cut on the issue.19

          Skeptical      Gullible
Open      Science        Pseudoscience
Closed    Denialism      Conspiracy Theories

Notice that in the fourth box I offer the possibility of conspiracy theories. These seem both closed and gullible. How is that possible? Consider the example of someone who argues that NASA faked the Moon landing. Is this a closed theory? It would seem so. No evidence provided by Moon rocks, videotape, or any other source is going to be enough to convince disbelievers. This, it should be noted, is not true skepticism but only a kind of selective willingness to consider evidence that fits with one’s hypothesis. Evidence counts, but only relative to one’s prior beliefs. What about the claim that Moon-landing deniers are gullible? Here the “skeptical” standard does not apply at all. Anyone who thinks that the US government is capable of covering up something as enormous as a faked Moon landing must either be extremely gullible or have a faith in governmental competence that belies all recent experience. Here the problem is that one’s beliefs are not subject to sufficient scrutiny. If an idea fits one’s preconceived notions, it is left unexamined.

With conspiracy theories, we thus find an odd mixture of closure and gullibility: complete acceptance of any idea that is consistent with one’s ideology alongside complete rejection of any idea that is not. Conspiracy theories routinely provide insufficient evidence to survive any good scientist’s criticism, yet no refutatory evidence whatsoever seems enough to convince the conspiracy theorist to give up their pet theory. This is charlatanism of the highest order—in some ways, the very opposite of science.

It is always fun to try to work out such hard and fast distinctions as we see in this matrix. I would argue, however, that there is something wrong—or at least incomplete—about it. In practice, denialists are not quite so skeptical and pseudoscientists are not quite so open. Both seem guided by a type of ideological rigidity that eschews true openness or skepticism, and instead seems to have much more in common with conspiracy theories. Although Sagan’s insight is provocative, and can be used as a stalking horse, the real problem with both denialism and pseudoscience is their lack of the scientific attitude.

Denialists Are Not Really Skeptics

Denialists are perhaps the toughest type of charlatans to deal with because so many of them indulge in the fantasy that they are actually embracing the highest standards of scientific rigor, even while they repudiate scientific standards of evidence. On topics like anthropogenic climate change, whether HIV causes AIDS, or whether vaccines cause autism,20 most denialists really don’t have any other science to offer; they just don’t like the science we’ve got.21 They will believe what they want to believe and wait for the evidence to catch up to them. Like their brethren “Birthers” (who do not accept Barack Obama’s birth certificate) or “Truthers” (who think that George W. Bush was a co-conspirator on 9/11), they will look for any excuse to show that their ill-warranted beliefs actually fit the facts better than the more obvious (and likely) rational consensus. While they may not actually care about empirical evidence in a conventional sense (in that no evidence could convince them to give up their beliefs), they nonetheless seem eager to use any existing evidence—no matter how flimsy—to back up their preferred belief.22 But this is all based on a radical misunderstanding or misuse of the role of warrant in scientific belief. As we know, scientific belief does not require proof or certainty, but it had better be able to survive a challenge from refuting evidence and the critical scrutiny of one’s peers. But that is just the problem. Denialist hypotheses seem based on intuition, not fact. If a belief is not based on empirical evidence, how can we convince someone to modify it based on empirical evidence? It is almost as if denialists are making faith-based assertions.

Unsurprisingly, most denialists do not see themselves as denialists and bristle at the name; they prefer to call themselves “skeptics” and see themselves as upholding the highest standards of science, which they feel have been compromised by those who are too quick to reach a scientific conclusion before all of the evidence is in. Climate change is not “settled science,” they will tell you. Liberal climate scientists around the world are hyping the data and refusing to consider alternative hypotheses, because they want to create more work for themselves or get more grant money. Denialists customarily claim that the best available evidence is fraudulent or has been tainted by those who are trying to cover something up. This is what makes it so frustrating to deal with denialists. They do not see themselves as ideologues, but as doubters who will not be bamboozled by the poor scientific reasoning of others, when in fact they are the ones who are succumbing to highly improbable conspiracy theories about why the available evidence is insufficient and their own beliefs are warranted despite lack of empirical support. This is why they feel justified in their adamant refusal to change their beliefs. After all, isn’t that what good skeptics are supposed to do? Actually, no.

Skepticism plays an important role in science. When one hears the word “skepticism” one might immediately think of the philosopher’s claim that one cannot know anything; that knowledge requires certainty and that, where certainty is lacking, all belief should be withheld. Call this philosophical skepticism. When one is concerned with nonempirical beliefs—such as in Descartes’s Meditations, where he is concerned with both sensory and rational belief—we could have a nice discussion over whether fallibilism is an appropriate epistemological response to the wider quest for certainty. But, as far as science is concerned, we need not take it this far, for here we are concerned with the value of doubt in obtaining warrant for empirical beliefs.

Are scientists skeptics? I believe that most are, not in the sense that they believe knowledge to be impossible, but in that they must rely on doubt as a crucible to test their own beliefs before they have even been compared to the data. Call this scientific skepticism.23 The ability to critique one’s own work, so that it can be fixed in advance of showing it to anyone else, is an important tool of science. As we have seen, when a scientist offers a theory to the world one thing is certain: it will not be treated gently. Scientists are not usually out to gather only the data that support their theory, because no one else will do that. As Popper stated, the best way to learn whether a theory is any good is to subject it to as much critical scrutiny as possible to see if it fails.

There is a deeply felt sense of skepticism in scientific work. What is distinctive about scientists, however, is that unlike philosophers, they are not limited to reason; they are able to test their theories against empirical evidence.24 Scientists embrace skepticism both by withholding belief in a theory until it has been tested and also by trying to anticipate anything that might be wrong in their methodology. As we have seen, doubt alone is not enough when engaging in empirical inquiry; one must be open to new ideas as well. But doubt is a start. By doubting, one is ensuring that any new ideas are first run through our critical faculties.

What of scientists whose skepticism leads them to reject a widely supported theory—perhaps because of an alternative hypothesis that they think (or hope) might replace it—but with no empirical evidence to back up the idea that the current theory is false or that their own is true? In an important sense, they cease to be scientists. We cannot assess the truth or likelihood of a scientific theory based solely on whether it “seems” right or fits with our ideological preconceptions or intuitions. Wishing that something is true is not acceptable in science. Our theory must be put to the test.25

And this is why I believe that denialists are not entitled to call themselves skeptics in any rightful sense of the word. Philosophical skepticism is when we doubt everything—whether it comes from faith, reason, sensory evidence, or intuition—because we cannot be certain that it is true. Scientific skepticism is when we withhold belief on empirical matters because the evidence does not yet allow us to meet the customarily high standards of justification in science. By contrast, denialism is when we refuse to believe something—even in the face of what most others would take to be compelling evidence—because we do not want it to be true. Denialists may use doubt, but only selectively. Denialists know quite well what they hope to be true, and may even shop for reasons to believe it. When one is in the throes of denial, it may feel a lot like skepticism. One may wonder how others can be so gullible in believing that something like climate change is “true” before all of the data are in. But it should be a warning sign when one feels so self-righteous about a particular belief that it matters more to them than maintaining the consistent standards of evidence that are the hallmark of science.

As Daniel Kahneman so eloquently demonstrates in his book Thinking, Fast and Slow, the human mind is wired with all sorts of cognitive biases that can help us to rationalize our preferred beliefs.26 Are these unconscious biases perhaps the basis for denialism even in the face of overwhelming evidence? There is good empirical support to back this up.27 Furthermore, it cannot be overlooked that the phenomenon of “information silos” that we spoke of earlier may exacerbate the problem by giving denialists a feeling of community support for their fringe beliefs. Yet this opens the door to a kind of credulousness that is anathema to real skeptical thinking.

In fact, denialism seems to have much more in common with conspiracy theories than with skepticism. How many times have you heard a conspiracy theorist claim that we have not yet met a sufficiently high standard of evidence to believe a well-documented fact (such as that vaccines do not cause autism), then immediately exhibit complete gullibility that the most unlikely correlations are true (for instance, that the CDC paid the Institute of Medicine to suppress the data on thimerosal)? This fits completely with the denialist pattern: to have impossibly high standards of proof for the things that one does not want to believe and extremely low standards of acceptance for the things that fit one’s ideology. Why does this occur? Because unlike skeptics, denialists’ beliefs are not born of caring about evidence in the first place; they do not have the scientific attitude. The double standard toward evidence is tolerated because it serves the denialists’ purpose. What they care about most is protecting their beliefs. This is why one sees all of the cheating on scientific standards of evidence, even when empirical matters are under discussion.

The matrix that I concocted from Sagan’s work therefore seems wrong in three important ways about denialism.28 First, it seems wrong to classify denialists as skeptics. They may use evidence selectively and pounce on the tiniest holes in someone else’s theory, but this is not because they are being rigorous; the criteria being used here are ideological, not evidential. To be selective in a biased way is not the same thing as being skeptical. In fact, considering most of the beliefs that denialists prefer to scientific ones, one must conclude that they are really quite gullible.29 Second, it also seems wrong to say that denialists are always closed to new ideas. As we will see in the example of climate change, denialists are plenty open to new ideas—and even empirical evidence—when it supports their preexisting beliefs. Finally, there may be an error in Sagan’s contrast between skepticism and openness. Sagan argues that these two notions must be balanced in scientific reasoning, which implies that they are somehow in conflict. But are they? In his book Nonsense on Stilts, Massimo Pigliucci observes that

to be skeptical means to harbor reasonable reservations about certain claims. It means to want more evidence before making up one’s mind. Most importantly, it means to keep an attitude of openness, to calibrate one’s beliefs to the available evidence.30

I believe that this is an accurate account of the nature of scientific skepticism. How can one be open-minded enough to suspend one’s belief, yet not be open to new ideas? Skepticism is not about closure; it is about forcing oneself to remain open to the possibility that what seems true may not be. Science is relentlessly critical, but this is because no matter how good one’s evidence, a better theory may await over the next horizon.

Denialism in Action: Climate Change

Perhaps the best example of scientific denialism in recent years is climate change. The theory that our planet is getting progressively hotter because of the release of greenhouse gases caused by human consumption of fossil fuels is massively supported by the scientific evidence.31 There remains, however, great public confusion and resistance to this, as a result of the various monied, political, and media interests that have whipped it into a partisan issue. The sordid story of how those with fossil fuel interests were able to capitalize on the foibles of human reasoning by “manufacturing doubt” where there was none—substituting public relations for scientific rigor—is a chilling tale of the vulnerability of science. The single best book on this is Naomi Oreskes and Erik Conway’s Merchants of Doubt.32 In my own Respecting Truth, I engage in an extended discussion of the epistemological fallout that resulted from public confusion not only over whether global warming is true, but also over whether the vast majority of scientists believe that it is true (which they do).33

Some of the most intellectually irresponsible rhetoric has come from politicians who have tried to impugn the reputation of climate scientists by calling climate change a hoax.34 One sometimes wonders whether they really believe this, or are just “paying the crazy tax” of trying to get elected in an environment in which a frightening percentage of the public believes it; but either way this is a shameful self-stoking cycle. The more politicians lie, the more these lies are reflected in public opinion.

One of the worst perpetrators is US Senator Ted Cruz. At an August 2015 campaign event, sponsored by the Koch Brothers, Cruz said this:

If you look at the satellite data in the last 18 years there has been zero recorded warming. Now the global warming alarmists, that’s a problem for their theories. Their computer models show massive warming that satellites says ain’t happening. We’ve discovered that NOAA, the federal government agencies are cooking the books.35

What’s wrong with this statement? Well, for one thing it isn’t true. This idea of a “global warming hiatus” has been around for years but was recently disproven by Thomas Karl, director of NOAA’s National Centers for Environmental Information, in an article in Science in June 2015.36 To give Cruz the benefit of the doubt, perhaps he had not known of Karl’s article at the time of his speech. Yet Cruz did not apologize and retract his statement later, after Karl’s article was well publicized. Indeed, in December 2015, Cruz sat for a remarkable interview with NPR, which is so revealing of the denialist mindset that it is worth quoting at length:

Steve Inskeep, Host:  What do you think about what is seen as a broad scientific consensus that there is man-caused climate change?

Ted Cruz:  Well, I believe that public policy should follow the science and follow the data. I am the son of two mathematicians and computer programmers and scientists. In the debate over global warming, far too often politicians in Washington—and for that matter, a number of scientists receiving large government grants—disregard the science and data and instead push political ideology. You and I are both old enough to remember 30, 40 years ago, when, at the time, we were being told by liberal politicians and some scientists that the problem was global cooling.

Inskeep:  There was a moment when some people said that.

Cruz:  That we were facing the threat of an incoming ice age. And their solution to this problem is that we needed massive government control of the economy, the energy sector and every aspect of our lives. But then, as you noted, the data didn’t back that up. So then, many of those same liberal politicians and a number of those same scientists switched their theory to global warming.

Inskeep:  This is a conspiracy, then, in your view.

Cruz:  No, this is liberal politicians who want government power over the economy, the energy sector and every aspect of our lives.

Inskeep:  And almost all the countries in the world have joined in to this approach?

Cruz:  So let me ask you a question, Steve. Is there global warming, yes or no?

Inskeep:  According to the scientists, absolutely.

Cruz:  I’m asking you.

Inskeep:  Sure.

Cruz:  OK, you are incorrect, actually. The scientific evidence doesn’t support global warming. For the last 18 years, the satellite data—we have satellites that monitor the atmosphere. The satellites that actually measure the temperature showed no significant warming whatsoever.

Inskeep:  I’ll just note that NASA analyzes that same data differently. But we can go on.

Cruz:  But no, they don’t. You can go and look at the data. And by the way, this hearing—we have a number of scientists who are testifying about the data. But here’s the key point. Climate change is the perfect pseudoscientific theory for a big government politician who wants more power. Why? Because it is a theory that can never be disproven.

Inskeep:  Do you question the science on other widely accepted issues—for example, evolution?

Cruz:  Any good scientist questions all science. If you show me a scientist that stops questioning science, I’ll show you someone who isn’t a scientist. And I’ll tell you, Steve. And I’ll tell you why this has shifted. Look in the world of global warming. What is the language they use? They call anyone who questions the science—who even points to the satellite data—they call you a, quote, “denier.” Denier is not the language of science. Denier is the language of religion. It is heretic. You are a blasphemer. It’s treated as a theology. But it’s about power and money. At the end of the day, it’s not complicated. This is liberal politicians who want government power.

Inskeep:  You know that your critics would say that it’s about power and money on your side. Let’s not go there for the moment. But I want to ask about this. I want to ask about facts.

Cruz:  But hold on a second. Whose power—but let’s stop. I mean, if you are going to—

Inskeep:  Energy industry, oil industry, Texas—

Cruz:  If you’re going to toss an ad hominem.37

There are so many possible things to find fault with here that it is almost a textbook case of poor reasoning: the double standard of evidence, the subtle change of subject when he was pinned down on conspiracy theories, deliberate misunderstanding of what the “openness” of science amounts to, and the schoolyard rhetorical trick of “I know you are, but what am I?” Let us focus, however, on the one empirical claim that was repeated about the alleged eighteen-year pause in global warming. As it turns out, the government’s statistics on climate change suit Cruz just fine when they show something he likes. In this case, it was an (erroneous) IPCC assessment report from 2013 (which has since been corrected).38 This happens sometimes in science; errors are made and they need to be corrected, but not because there is a conspiracy.39 So Cruz is using an outdated, incorrect, discredited graph. But there’s another problem too. Eighteen years is a weird number. Notice that Cruz didn’t say “in the last twenty years” or even “in the last seventeen” or “in the last nineteen.” Why would he be so specific? Here one must think back to what was happening exactly eighteen years prior to 2015: El Niño.

Here we encounter the denialist’s penchant for cherry picking evidence. Despite the fact that fourteen of the last fifteen years had yielded the hottest temperatures of the century, 1998 was a high outlier even among those: it showed an astonishing one-year spike in global temperature. If you think about the graph that might accompany these data, you can see that choosing such a hot year as your base point would make 2015 look relatively cooler. Examined out of context, the eighteen-year span between 1998 and 2015 made it look as if the global temperature had been fairly flat. But it wasn’t. As we now know from Karl’s study, some of those temperature readings were simply wrong. Moreover, as any scientist can tell you, you’ve got to look at the whole graph—including the years in between—which shows that 1998 was the high outlier and that there has been a steady warming trend over the last several decades.40 Even if one uses the old, uncorrected graph, Cruz’s reasoning is flawed.
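
To see the effect in miniature, consider a short sketch in Python. The anomaly numbers below are synthetic and purely illustrative (an invented steady warming trend with an artificial spike added at 1998); they are not NOAA’s data or anyone else’s, but they show how anchoring a trend calculation at an outlier year understates the true slope.

    # Illustrative only: synthetic temperature anomalies (degrees C), not real data.
    # Assume a steady underlying warming of 0.02 C per year, with a one-year
    # "El Nino"-style spike added at 1998 (a hypothetical outlier).
    years = list(range(1980, 2016))
    anomaly = [0.02 * (y - 1980) for y in years]
    anomaly[years.index(1998)] += 0.25  # the outlier year

    def trend_per_decade(xs, ys):
        # Ordinary least-squares slope, scaled to degrees C per decade.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs)
        return 10 * cov / var

    start = years.index(1998)  # cherry-picked baseline: begin at the spike
    print(round(trend_per_decade(years[start:], anomaly[start:]), 3))  # ~0.16
    print(round(trend_per_decade(years, anomaly), 3))                  # ~0.20

Even in this noise-free toy example, starting at the spike shaves roughly a quarter off the measured warming trend; with real, noisier satellite data, a large enough starting spike can make the apparent trend vanish entirely. That is the entire substance of the “hiatus.”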

Cherry picking data is a cardinal offense against the scientific attitude, yet it is a common tactic of denialists. Few scientists would make this error. In science, one’s ideas must be subjected to rigorous tests against reality based on previously accepted standards; one can’t just pick and choose as one goes. But to an ideologue like Ted Cruz (and, as it turns out, to many who are not trained to avoid this type of error in reasoning), it may feel perfectly natural to do this. The reason is explained by cognitive psychologists and behavioral economists in their work on confirmation bias and motivated reasoning. As we’ve seen, confirmation bias is when we seek out reasons to think that we are right. Motivated reasoning is when we allow our emotions to skew our interpretation of the evidence relentlessly in favor of what we already think. And both of these are completely natural, built-in cognitive biases that are shared by all humans, even when they have been trained to guard against them. Scientists, given their education in statistics—and the fact that science is a public enterprise in which proof procedures are transparently vetted by a community of scholars who are looking for flaws in one’s reasoning—are much less prone to make these kinds of errors. Those who have been trained in logic, like philosophers and others who take skepticism seriously, can also be expected to recognize these biases and guard against their stealthy erosion of good reasoning. But most denialists? Why in the world should they care?

Of course, few denialists would agree with this assertion, primarily because they would deny that they are denialists. It sounds so much more rigorous and fair-minded to maintain their “skepticism,” which probably accounts for the recent hijacking of this word.41 In fact, some denialists (witness Cruz) go so far as to claim that they are the ones who are really being scientific in the climate change debate. The claim is a familiar one. The science is “not yet settled.” There is “so much more” that we need to know. And isn’t climate change “just a theory”? But the problem is that none of this is based on a good faith commitment to any sort of actual skepticism. It is instead based on a grave misunderstanding of how science actually works coupled with a highly motivated capitulation to cognitive bias. Yes, it is true that the science of climate change is not completely settled. But, as we have seen, that is because no science is ever completely settled. Given the way that science works—which we explored in chapter 2—it is always going to be the case that there are more experiments we can do or tests we can run. But it is a myth to think that one needs complete confirmation or proof before belief is warranted. (And indeed if the denialist rejects this, then why the double standard for their own beliefs?)

As for the claim that climate change is “just a theory”—so any alternative theory “could be true”—one is reluctant to give this much credence. As previously noted, gravity is just a theory. So is the germ theory of disease. As we have seen, some scientific theories are exceptionally robust. But the standard of warrant for scientific belief is not that anything goes until it has been disproven. While it is correct to say that, until it is refuted by the evidence, any theory could technically speaking be true, this does not mean that the theory is justified. Nor is it worth scientists’ time to run every fringe hypothesis to ground. Cranks should not expect to be able to kick the door in and insist that, because science is supposed to be open-ended, their theory must be tested. Science is rightly selective, and the criterion must be warrant, based on the evidence.

Scientific explanation is not a matter of making correct guesses or having just a few data points. Consider the Flat Earth hypothesis. If it is true, where is the evidence? Since there is none, one is scientifically justified in disbelieving it. Flat Earthers42 are customarily reluctant to say what is wrong with the evidence in favor of a round Earth, other than that until it has been “proven,” their own theory could be true. But that is not correct reasoning. Even if someone guessed correctly at something that turned out to be true, this is not science. Science is about having a theory that backs up one’s guesses; something that has been tested against—and fits with—the evidence.

Another denialist misunderstanding occurs over how scientists reach consensus. Again, one does not need 100 percent agreement before the field moves on. Those who are asking for complete agreement among the world’s scientists before something is done about climate change are just running out the clock. According to the latest statistics, over 96.2 percent of the world’s climate scientists believe that climate change is occurring and that humans are responsible for it.43 For comparison, note that a similar survey found that only 97 percent of scientists believe in evolution by natural selection, which is a bedrock principle of biology.44 More than one hundred and fifty years after Darwin, we still do not have 100 percent agreement even on evolution! But we don’t need it, because that is not how scientific consensus works. Scientific claims are subjected to group scrutiny and criticism and, despite inevitable dissent, the field makes a choice.45 Some may complain that this still leaves room for doubt and that the “skeptics” could be right (as they sometimes are), but I wouldn’t hold out much hope for denialism about climate change. For one thing, legitimate dissenters must be able to present some empirical evidence as reason for their dissent. Without that, one questions whether they are even scientists. Denialists might complain that they do have evidence and that they are members of a community, yet this is simply tiresome. As we’ve seen, seeking out a community for mutual agreement is not the same thing as seeking out critical scrutiny. No matter how many people you get to agree with you, majority opinion does not trump evidence in a factual matter.46

Couldn’t the scientists nonetheless be wrong? Yes, of course. The history of science has shown that any scientific theory (even Newton’s theory of gravity) could be wrong. But this does not mean that one is a good skeptic merely for disbelieving the well-corroborated conclusions of science. To reject the cascade of scientific evidence that shows that the global temperature is warming, and that humans are almost certainly the cause of it, is not good reasoning even if some long-shot hypothesis comes along in fifty years to show us why we were wrong. Skepticism is in complete conformity with the scientific attitude’s commitment to form one’s beliefs based on fit with the evidence, and then change them as better evidence comes in, but this in no way guarantees truth. Science instead offers justification based on the evidence. Yet this is a mighty thing. With denialism, one decides in advance—on the basis of ideology—what one wants to be true, then selectively filters information based on whether it supports the hypothesis or not. But this does not build warrant. Science may sometimes miss the mark, but its successful track record suggests that there is no superior competitor in discovering the facts about the empirical world.

Indeed, this is why someone like Galileo was not a denialist. Who claimed that he was? In an interview with the Texas Tribune in March 2015, Ted Cruz said:

Today the global warming alarmists are the equivalent of the flat-Earthers. It used to be accepted scientific wisdom the Earth is flat, and this heretic named Galileo was branded a denier.47

We can make pretty short work of this claim. Galileo believed in the heliocentric model not because of ideology but because of evidence. His telescopic observations of the phases of Venus, craters on the Moon, and the satellites of Jupiter provided a mountain of reasons for believing that the Ptolemaic model of an Earth-centered universe was wrong, reasons that should have convinced anyone who doubted him. The Catholic Church was the one with ideological beliefs that prevented it from accepting the reality of Galileo’s evidence because of what it would have meant for its celestial model. Galileo most certainly was not a denier. To deny the truth of a false theory when you have the evidence to show that it is false is not denialism; it is science.

What might happen when the lone dissenter does have the evidence? When that person questions the established consensus of science and shows that it is wrong? This challenges the idea that science is a privileged process resulting from group scrutiny of individual work. Can the scientific attitude survive intact if someone goes against the scientific community—which is supposed to be the final arbiter of justificatory warrant—and wins?

What Happens When the “Crank” Is Right?

J Harlen Bretz was a maverick geologist in the early twentieth century, who spent a long career at the University of Chicago, but did his fieldwork in a desolate region of Washington state that he termed the “channeled scablands.” This area is remarkable for its severe “Mars-scape” surface, consisting of washed-out channels surrounded by high bluffs (where gravel and “erratic rocks” are found thousands of feet above sea level), U-shaped canyons with dry falls, and enormous plunge pools with only small waterfalls draining into them. It is, in short, a geography that demands an explanation.

This area had not been well studied before Bretz, but geologists of the time did have their hypotheses. Most agreed that this landscape had been carved by hydrologic forces, but—in keeping with the “uniformitarian” theory of the time—they thought that this must have been the result of relatively small amounts of water acting over long periods of time, such as those that had created the Grand Canyon. Uniformitarianism, which had been the dominant paradigm in geology at least since Charles Lyell’s influential textbook (which inspired Darwin’s theory of evolution by natural selection), is the idea that the geological record can be explained by known natural forces acting over millions of years.48 This was proposed in contrast to the catastrophism of Lyell’s predecessors, who felt that short-term cataclysmic events—perhaps caused by God—had created the geological, fossil, and biological record. Natural forces versus miracles. Erosion versus a catastrophe. It was not hard to figure out which side most scientists would be on.

Although Bretz was himself a uniformitarian (and an atheist), as he stood before the scarred, barren landscape for the first time in 1921, he began to realize that the prior theories must be wrong. Like a detective solving a mystery, Bretz found too many clues that this could not have been the result of steady erosion. What was the alternative? A massive flood. A flood so large that it would have been thirteen miles wide at some points and involved a volume of water so strong it could force the Snake River to flow backward, make U-shaped channels rather than V-shaped ones as far south as the Columbia Gorge, put the “turtle on the fencepost” of gravel found on top of 2,500-foot-high bluffs, and create the enormous plunge pools that were so disproportionate to the tiny falls that fed into them. Where could such an enormous amount of water have come from? Bretz did not know and, for the moment, he framed no hypotheses. It was a puzzle and he vowed merely to follow the evidence wherever it led him. By the time he came back for the second summer to continue his fieldwork, he was convinced:

Bretz now believed that these geologic features could only have been created by a flood of unimaginable proportions, possibly the largest flood in the history of the world. And this was no claim made as wild speculation. Fact after fact, feature after feature in the landscape had proven to Bretz that his theory provided the only plausible answer for the formation of the channeled scablands.49

The story of Bretz’s work is a fascinating and under-reported topic in the history and philosophy of science, which would repay much greater scholarly attention. John Soennichsen’s Bretz’s Flood is one of the few available resources, but fortunately it is wonderful, both for telling the story of Bretz’s struggles and for providing the intellectual context of the time, with discussion of geological positivism, uniformitarianism versus catastrophism, the struggle to make geology more scientific, and how sociological factors can influence scientific explanations. Here I will focus on the narrow issue of what implications Bretz’s story might have for the scientific attitude, when group consensus is against the individual who is right and has the evidence. Does this challenge the idea that the scientific attitude is instantiated at the level of group practice and that science works primarily because the community of scientists corrects the errors of individuals? As I hope to show below, I think that the Bretz episode not only survives this challenge but is in fact a stunning demonstration of the power of the scientific attitude.

It is obvious that Bretz’s work is an endorsement of the scientific attitude at the individual level. He gathered enormous amounts of evidence to back up his hypothesis and critiqued and shaped his theory along the way. Because his theory was in some ways a throwback, he understood that it would be savagely attacked.50 Bretz did not posit any supernatural forces to explain the geological record, but neither could he explain it by the slow action of existing natural forces over the long periods of time that uniformitarianism demanded. Instead, he had to propose an enormous catastrophic event that had occurred over a short period, with an unknown source of water. It would sound like something out of the Bible; the pushback would be tremendous.

Geology, like other sciences, is a brotherhood of sorts, with much camaraderie and the sharing of data, ideas, and facilities. It is seen as a field in which the work of one individual can inspire another, and through the cooperation of all, fledgling theories can expand and flower into fully matured scientific views shared by the discipline as a whole. As with other brotherhoods, however, squabbles can erupt and family members who don’t adhere to the basic rules can be verbally disciplined or—perhaps worse—ignored altogether.51

What Bretz was proposing sounded like heresy. To Bretz’s credit, he didn’t seem to care; he had the evidence and knew he was right. The others would just have to come around. In this, Bretz seems a modern-day Galileo.52 Those who fought him had not done the fieldwork or seen the landscapes.53 Bretz was stubborn enough to believe in the implications of what he had seen firsthand, and if it did not square with current theories, then those theories must be wrong. This is a good testament to Bretz’s adherence to the scientific attitude, but there was trouble ahead.

One problem for Bretz was that he still had no idea what could have caused such a megaflood. The amount of water required would be tremendous. He would be telling a causal story with no cause in sight, and he knew this would present a formidable barrier to acceptance of his theory. But the geologic record simply demanded that amount of water. Here Bretz also reminds one of Darwin, who gathered evidence that supported his theory of evolution by natural selection long before he had any idea of the mechanism that might be behind it. One is also reminded of Newton, who “framed no hypothesis” about what gravity was, as he worked out the equations that described its behavior.

It’s important here to pause and consider the implications for the scientific attitude, because this episode reveals that understanding a cause is not necessary for scientific explanation. Causal explanations are an important part of complete scientific theories, and positing a miracle is not an acceptable substitute. But most important is having evidence that a hypothesis is warranted; cause can be inferred later. This is not to trivialize causal explanation. It’s just that finding a cause is often the last part of scientific explanation, coming into place after all of the other pieces of evidence have been gathered and fit together.

In Bretz’s case, it would have been preferable if he had known a cause, but initially he simply could not specify one, so he stuck to his evidence from the geologic record. Note that Bretz was enormously self-critical and made numerous corrections and modifications to his theory as he went along.54 This was nothing, however, compared to the criticism he was to receive at the hands of his fellow scientists.

In 1927, Bretz stood on the steps of the Cosmos Club in Washington, DC, to present his evidence before a gathering of the nation’s most distinguished geologists. Foremost among these were six members of the US Geological Survey (USGS), who were a sort of governing board for the profession. Bretz gave a lengthy, definitive presentation based on his six years of fieldwork in the area. Then, when it came time for the respondents to speak, “all hell broke loose.”

One by one, each of the men sitting at the presentation table rose to confront him with objections, criticisms, and—for the first time—their own interpretations of the scablands. It quickly became clear that this had been a planned attack; a strategic event that allowed Bretz to offer his views, then be subjected to the collective bile of virtually every prominent geologist of his time. It seems clear that the official stance of that influential body was one of intolerance for any theory that strayed from the uniformitarian line.55

This, of course, is not how science is supposed to work. Looking back, one suspects that the learned scientists were objecting to Bretz’s theory because of motivated reasoning rooted in a near-ideological commitment to uniformitarianism. It is one thing to attack a theory based on lack of fit with the evidence; it is another to say that it cannot be tolerated because it would erode the field’s hard-won progress in distancing itself from religious views that would keep it from being seen as scientific.

In this book, I have made the case that the scientific attitude is instantiated not just in the values of individual scientists, but in the community of scholars. So what can be said about a case where the “heretic” is behaving more scientifically than his critics? Is it possible to defend the importance of the scientific attitude even when the consensus of one’s profession is wrong? This is a delicate question, for although individual scientists sometimes outpace the group—indeed, this is often how breakthrough ideas come to disseminate and lead to a changed understanding—what is rare is for someone to openly contradict the scientific consensus for decades and later be vindicated. Great historical examples of this include Galileo, Semmelweis, Alfred Wegener, and other martyrs for science. But when such episodes occur, what is the guidepost for keeping science on track? It has to be the evidence. It’s not that Bretz was making some wild claim out of the blue: he had the evidence to back it up. So even while it is understandable that there will sometimes be community resistance to those who are perceived to have departed from orthodoxy in their scientific thinking, over time this conflict must be resolved in the only way possible in science: through empirical evidence.

And this was in fact precisely what occurred with Bretz’s theory of the channeled scablands. After the disastrous Cosmos Club talk, one of Bretz’s previous rivals helped him to realize that the only water source big enough for such a massive flood must have been the spontaneous draining of a glacial lake. This turned out to be the right answer. It is now thought that the failure of a giant ice dam at Lake Missoula released over 500 cubic miles of water that reached 100 miles south of Portland, Oregon.56 Still the detractors held on.

The geologic community went about their business and tried to ignore the fact that this upstart geologist was spouting nonsense about massive floods he claimed had altered the topography of a vast western landscape in a geologic blink of an eye.57

Bretz went into a deep depression. As time passed, some of his critics eventually died, while others came around, but a new generation of geologists also grew up who were more sympathetic to Bretz’s theory.58 After decades, Bretz was eventually vindicated. As one of his critics said years later when he finally visited the scablands himself, “How could anyone have been so wrong?”59 How sweet it must have been in 1965 when a group of geologists made an official field trip to the scablands and sent Bretz this telegram: “We are all catastrophists now.”60

In a strange coda to Bretz’s story, his legacy has now been claimed by thousands of creationists, who regard him as a sort of folk hero for almost single-handedly proving their case for a biblical flood. Of course, Bretz did no such thing, yet there are creationist websites on which he is celebrated as a David who went up against the organized science Goliath and won.61 What to say about this? And does Bretz’s example give support to the idea that someone like Ted Cruz just may be the next Galileo? This seems preposterous; remember that the lodestar in Bretz’s story is sticking to the evidence and eschewing ideology. If a science denier claims that climate change is a hoax, where is the evidence? Without this, it is just speculation or worse. This is not to say that simple skepticism or even stubbornness marks a departure from science. The standards for accepting a new theory should be high. But when a scientist abandons evidence for ideology, this is no longer science.

What might happen if a purportedly crackpot theory did have the evidence? Then it must be tested and, if it holds up, the scientific consensus must change. Just as the scientific attitude should ideally guide individual behavior, so too it must guide group behavior. Science is supposed to be self-correcting at both the individual and the group level. Imagine a scientist with an outlier theory that did not square with the evidence and was rejected by the rest of the profession. If this scientist clings to the theory, in some sense he or she will have left the profession. Can the same thing happen to an entire discipline? Although it is more commonly the case that the group corrects the individual—as in the case of cold fusion—it does sometimes occur that the individual corrects the group.

Just as geology was temporarily taken off course by its refusal to accept Alfred Wegener’s theory of continental drift, so it was later to suffer the same fate with Bretz’s theory of the channeled scablands. It is painful to think that geology was “not scientific” for a time, but that is as it must be when scientists refuse to change a theory in the face of compelling evidence. Just as the Catholic Church refused to acknowledge the truth of Galileo’s new theory of the heavens—choosing instead to stick with its familiar but false ideology—so for a time did geology choose to embrace strict uniformitarianism over empirical evidence.62 What makes science distinctive, however, is that even when this occurs, there is a way to get back on track.63 Indeed, note that—as a science—geology did eventually recognize the force of Bretz’s data, and return to the scientific attitude. (The Catholic Church, however, did not and was instead humiliated into an apology to Galileo 350 years after it had already lost the debate over heliocentrism.)

We must here face squarely the implications of what this example says about the question of whether the scientific attitude is a defining feature of science. The idea behind the scientific attitude cannot be that the group is always right. Galileo, Semmelweis, Wegener, and Bretz provide the counterexamples. The individual is sometimes far ahead of his or her contemporaries. True, as we saw earlier in Sunstein’s experimental studies, it is often easier for groups to find the truth than for individuals. But this does not mean that this always happens. Sometimes the individual has the better theory. And that is perfectly all right in science. What is important for the preservation of the scientific attitude is that any disputes are settled by the evidence. Sometimes it is not just individual theories that need correction, but an entire discipline’s consensus.

Returning to Bretz, it is worth considering for a moment why uniformitarianism had such a hold on geology. In this case, I think we see one of the rare instances where an entire scientific field was subject to ideological influence. One reason that uniformitarianism was so favored by geologists was that it was seen as a bulwark against the creationists. It was a way of vindicating the idea that the natural world could be explained through the slow workings of natural processes rather than some sort of catastrophe that might be expected from divine intervention. Still, it is perfectly consistent to think that natural events can occur suddenly, over short periods of time; a sudden catastrophe does not by itself mean that God exists.64 It is also important to realize that Bretz did not jump to the conclusion of catastrophism rashly. His own guiding philosophy was uniformitarianism until he was pushed elsewhere by the evidence. In his papers and talks on the subject, it is clear that Bretz did not take the implications of his theory lightly. He anticipated criticism and tried to address it, but still believed in his own theory because there was no other explanation for what he saw. Contrast that with some of the geologists who had not even seen the evidence of the scablands but were nonetheless trying to shut him down. In this, they were being ideologues. As scientists, why were they more committed to the theory of uniformitarianism than to following the evidence? This demands an explanation too.

My hypothesis is that ideology can have a corrupting influence both on those who are committed to it and on those who are fighting it. As scientists, the members of the USGS should not have cared whether Bretz’s evidence was consistent with one overarching theory or another, yet they did. Why? Because they were locked in their own battle with fundamentalist Christians and did not want Bretz’s theory to give aid and comfort to their enemies. This is an example of how ideology can infect the scientific process, whether we are on the “right” side of it or not. That is, even if we are merely changing the way that we do science to fight against those who are unscientific, we can damage science. It is not a question of whether Bretz’s theory came from a lone individual or a group, whether it advocated gradual change or sudden intervention. What matters is that it was consistent with the data. But when we try to go around this—when we seek either to confirm or disconfirm a theory based on our extra-empirical commitments—problems can arise. Most often this occurs when religious or political ideologues corrupt the process of learning from evidence because of their personal beliefs about divine intervention, human freedom, equality, nature, nurture, or some other speculative commitments. But this can occur when an individual or group is fighting against such ideologies as well. The temptation to push things a little toward the way that we think (or hope) truth lies can be great. But this can result in unexpected reversals (or even frauds), which can then backfire and do a disservice to the public’s trust in science.

Consider here the “climate-gate” email controversy from a few years back, where a handful of scientists spoke of suppressing evidence in response to Freedom of Information Act requests that they knew would be used by denialists to cherry-pick data and undercut the truth about global warming. Although the scientists were joking, and surely must have felt that they were on the “right side” of science, the fallout for climate science was terrible. Even after multiple official inquiries—which showed that the scientists had actually done nothing wrong and that their work was never compromised—it fed straight into the conspiracy theories of those who had argued that global warming was a hoax perpetrated by liberal scientists. When we compromise our standards, even if we feel that we are working “on the side of the angels,” science can suffer.65

It is deeply frustrating to have science attacked by ideologues, who care nothing for what is precious about it and seek data only to support their favored hypotheses. But the price of scientific freedom is eternal openness. This does not mean that we have to tolerate crackpot theories. If there is no evidence, and no warrant behind them, there is no reason to put the scarce resources of science into checking them. But what to do when the evidence does show something strange? Here Sagan seems right. We must give it a hearing. And this is exactly what we will do at the end of this chapter when we consider the nearly three decades of work done on ESP at the Princeton Engineering Anomalies Research (PEAR) center. But first we must attend to the topic of pseudoscience.

Pseudoscientists Are Not Really Open to New Ideas

The problem with pseudoscientists is not merely that they are not doing science, but that they claim that they are. Some probably know that they are only pretending. Others perhaps believe that their work is being unfairly scorned. But the bottom line is that when one is making explanatory claims about empirical matters, fit with the evidence should be the primary consideration.66 Why then is it so difficult to get most pseudoscientists to admit not just that their theories are not true, but that they are not even scientific? The answer is that, as with denialists, their belief in their theories seems deeply rooted in wishful thinking.

Clearly, wishful thinking is not in keeping with the scientific attitude. One should not decide in advance on the basis of ideology what one wants to be right, then chase the evidence to support it. In science, one is supposed to be guided by the evidence and one’s beliefs are supposed to be shaped by it. As we know, scientific hypotheses can come from anywhere. Intuition, wishful thinking, hope, stubbornness, and wild guesses have all resulted in respected scientific theories. But here is the key: they must be supported by the evidence, as judged by the community of other scientists.

Here once again consider Sagan’s matrix. Are pseudoscientific hypotheses open to new ideas? Not particularly. While it is probably fair to say that many astrologers, dowsers, crystal healers, intelligent design theorists, and the like are extremely gullible (as Sagan notes), they are also customarily closed-minded to an extreme degree, refusing to accept the import of any evidence that disproves their theories. They will not submit to falsification. Controlled experiments are rare. Cherry-picking data is common. Like denialists, most pseudoscientists seem to want to avoid disconfirming evidence, even as they complain that other scientists will not consider their own.

One expects that some are profiting from this cat-and-mouse game and know what they are doing. While some are misleading, others are misled. Astrology is a billion-dollar industry worldwide.67 According to NBC News, Americans spend $3 billion a year on homeopathy.68 Other advocates of pseudoscience are surely straight-up ideologues who are in it not for the money but because they think they are right. And of course there is always willful ignorance and those who are duped. All are a danger to good science. Whether someone actually believes their untruths or merely pretends to, it is hostile to science to refuse to form empirical beliefs based on commitment to the standards of evidence. Pseudoscientists, like denialists, eschew (or at least cheat on) the scientific attitude. Intuition is prized over fact. “Skepticism” is used at one’s convenience. Gullibility is rampant. A double standard is applied to evidence. Dark conspiracies are spun about the work of those who oppose them. Both pseudoscience and denialism surely also include those who are benefiting from public confusion, while others naively do their bidding.69

The crucial question for pseudoscientists is this: If your theories are true, where is the evidence? You may claim that you are being persecuted or ignored by mainstream science, but if you actually have good answers, why would they do that? As we just saw with the example of Harlen Bretz, if you have the evidence, the rest of the field will eventually beat a path to your door. But the onus is still on you. If even an eminent scientist like Bretz faced fierce, sometimes unreasoned opposition to his theory, why should pseudoscientists expect to have it easier? It is perhaps not great testimony for open-mindedness in the practice of science that Bretz had to fight so hard—even when he had the evidence—but that is the plight of the maverick. Scientists are stingy with warrant. So why do pseudoscientists expect to be taken seriously when they can offer no—or only equivocal—evidence? Because they “might” be right? But we have already seen that this matters little when the question of justification is on the line.

Where are the falsifiable predictions of astrology? Where are the double-blind controlled experiments by faith healers? Why have those who claim that time travel is possible never gone back and made a killing in the stock market?70 If those who hold “alternative” beliefs want their views to be taken seriously, they must expect them to hold up under intense scrutiny and criticism. As we saw earlier, this sometimes happens, and the results are routinely disappointing.71 Instead, pseudoscientists usually prefer to release their own selective evidence. But this is only a masquerade of science.

Pseudoscience in Action: Creationism and Intelligent Design Theory

The long sordid history of opposition to evolutionary theory has been well told elsewhere.72 Starting with the Scopes Monkey Trial in Tennessee in 1925, those who sought to fight against Darwin’s theory of evolution by natural selection first chose to try to keep it out of the public school biology classroom. This strategy was fairly successful until its constitutionality was challenged in 1967.73 As we saw in chapter 2, a more modern creationist agenda then shifted from one of trying to keep evolution out of the classroom to one of lobbying for the inclusion of creationism alongside it. This began in Arkansas in 1981 with Act 590, which required teachers to give “balanced treatment” by teaching creation science alongside evolution science in biology classrooms. When this was successfully challenged on constitutional grounds in McLean v. Arkansas, Judge William Overton ruled that the claim that Darwinian biology was itself a “secular religion” was ludicrous and that “creation science” was not scientific in the least in that “a scientific theory must be tentative and always subject to revision or abandonment in light of facts that are inconsistent with, or falsify, the theory.”74 Thus was creation science revealed to be nothing more than pseudoscience.

Years later, the creationists regrouped under the banner of intelligent design (ID) theory, which purported to be a scientific alternative to evolution. This was the product of a “think tank” called the Discovery Institute, which was founded in Seattle, Washington, in 1990, with the agenda of promoting ID theory and attacking evolution. After a multiyear campaign of funding and promoting ideologically driven criticisms of evolution, and flooding the media with misinformation in a public relations blitz intended to raise doubts about evolution, the next court battle was fought in Pennsylvania in 2004, in a case called Kitzmiller v. Dover Area School District. Again, this history is recounted elsewhere,75 but the main point is that the strategy was no longer one to get creationism or creation science taught in science classrooms, but instead to make the case for the completely separate scientific theory of intelligent design, which the paleobiologist Leonard Krishtalka among others has called “creationism in a cheap tuxedo.”76 This effort too went down in stunning defeat. In a judgment similar to the earlier Arkansas ruling, Judge John E. Jones found that intelligent design theory was not science and that its attacks on evolution had already been refuted by the scientific community. It had, moreover, none of its own peer-reviewed studies or evidence to offer in support of its claims. In a bold rebuke, Jones went on to scold school officials for such “breathtaking inanity” and wasting taxpayer money. He then ordered them to pay $1 million to the plaintiffs in damages.

After this, the creationists’ strategy changed. With court options now deemed too dangerous, the opponents of evolution chose to try to influence the law itself. In 2008, the Discovery Institute drafted a piece of model legislation that sought to protect the “academic freedom” of teachers who felt intimidated or threatened in teaching the “full range of scientific views regarding biological and chemical evolution.”77 The language went on to identify the “confusion” created by the Dover ruling and stated that of course nothing in the act should be construed as “promoting any religious doctrine.” This was nothing but a fig leaf for renewed attempts to try to get creationism into the nation’s science classrooms.

After an initial defeat in Florida in 2008—where Democrats seized on ambiguities in the House language to argue that the academic freedom of teachers to cover birth control, abortion, and sex education should also be protected—the first such academic freedom bill was passed in Louisiana that same year. Though not completely modeled on the Discovery Institute’s language, it was seen as a win for antiscience forces. Here legislators were careful to strip from their original bill all mention of evolution (and global warming) as examples of “controversial” theories and renamed it the Louisiana Science Education Act. It was signed into law by Governor Bobby Jindal and remains one of only two state academic freedom laws in the country.

Similar efforts then died in the legislatures of Missouri, Alabama, Michigan, South Carolina, New Mexico, Oklahoma, Iowa, Texas, and Kentucky, before another academic freedom bill was passed in Tennessee in 2012. This one purported to protect “teachers who explore the ‘scientific strengths and scientific weaknesses’ of evolution and climate change.”78 Soon after, in early 2013, four other states followed suit: Colorado, Missouri, Montana, and Oklahoma. Indeed, Oklahoma has become the poster child for such legislation, where such bills have been reintroduced in the state senate every session for five consecutive years. The language in these bills is similar.79 In the latest Oklahoma Senate bill, from 2016, legislators sought to

create an environment within public school districts that encourages students to explore scientific questions, learn about scientific evidence, develop critical thinking skills and respond appropriately and respectfully to differences of opinion about controversial issues.80

There is only one problem: disputes in science are and should be resolved on the basis of evidence, not opinion. The Oklahoma House bill in 2016 stated:

The Legislature further finds that the teaching of some scientific concepts including but not limited to premises in the areas of biology, chemistry, meteorology, bioethics and physics can cause controversy, and that some teachers may be unsure of the expectations concerning how they should present information in some subjects such as, but not limited to, biological evolution, the chemical origins of life, global warming and human cloning.81

I am happy to report that these bills, along with similar ones in Mississippi and South Dakota, all failed in 2016. In recent years, similar bills have also failed in Arizona, Indiana, Texas, and Virginia. For those interested in keeping track of the fate of current and future antiscience legislation, there is a complete chronology at the website for the National Center for Science Education.82

It is a sad commentary on the public understanding of science that things have gone this far. As Thomas Henry Huxley (“Darwin’s bulldog”) once put it, “Life is too short to occupy oneself with the slaying of the slain more than once.” But this is precisely the wrong attitude to have when one is fighting pseudoscience, which is perennial. As we have seen, the tactics shift and new strategies are employed, but the fight must go on.

Indeed, I witnessed this firsthand in my own skirmish with the Discovery Institute. In a 2015 article entitled “The Attack on Truth” that appeared in the Chronicle of Higher Education, I wrote that the Discovery Institute was a “Seattle organization advocating that ‘intelligent-design theory’ be taught in the public schools as balance for the ‘holes’ in evolutionary theory.”83 This apparently enraged the folks at the Discovery Institute, who blasted me with two consecutive blog posts for allegedly not recognizing that they had always “consistently opposed mandating intelligent design in public schools.”84 This is absurd, but perhaps it was part of their new strategy to outrun the stink of the Kitzmiller ruling.85 On the advice of friends, I did not respond, but if I had, it surely would have been worth pointing out that there is a difference between “mandating” and “advocating,” and that if it were really true that they did not advocate teaching intelligent design in public schools, then what was the point of all of their amicus work on behalf of the defendants in the Kitzmiller case?

The picture painted here is a familiar one: pseudoscientists have no real understanding of or respect for what science entails. Furthermore, with the example of creationism/intelligent design, one also suspects that in this instance pseudoscience has bled over into denialism, where its proponents’ minds are made up before they have even considered the evidence, because their views were not based on evidence in the first place. How then could intelligent design hope to pass itself off as a science? Part of the strategy is to try to exploit the weaknesses of science. Recall Sagan’s criterion that science must be open to new ideas: here the ID theorists complain that evolutionists are unfairly excluding their views from a fair hearing (even while they themselves refuse to acknowledge any evidence that contradicts their own views). “Teach the controversy” is their mantra. We need to examine all subjects impartially in science because ideas can come from anywhere. But if so, they complain, then why are the “scientific” claims of ID theory excluded from the biology classroom? Because there are none. I could spend many pages here dismembering the “scientific” evidence of ID theory point by point, but this has already been done brilliantly and at length by others.86 If any readers feel the need to be convinced, I refer them to this work. For now, I am prepared to defer to Judge Jones’s pithy conclusion in the Kitzmiller case that ID theory is “not science.”87

Naturally, the ID theorist would push back and hope to engage in a lengthy debate about the origins of the eye and the missing link (too bad that evolutionists have an explanation for both).88 But this raises the important question of whether such debates are even worth having—for do ID theorists even accept scientific standards of evidence? As pseudoscientists, ID theorists seek to insulate their views from refutation. They misunderstand the fundamental principle behind scientific reasoning, which is that one should form one’s beliefs based on empirical evidence and commit to changing one’s beliefs as new evidence comes forward.

And they have other misconceptions about science as well. For one, ID theorists often speak as if, in order to be taught in the science classroom, evolution by natural selection would have to be completely proven by the evidence. As we have seen, that is not how science works. As discussed in chapter 2, no matter how strong the evidence, scientific conclusions are never certain. Ah, the ID theorist may now object, isn’t that why one should consider alternative theories? But the answer here is no, for any alternative theories are also bound by the principle of warrant based on sufficient evidence, and ID theory offers none. Note also that any “holes” in evolutionary theory would not necessarily suggest the validity of any particular alternative theory, unless it could explain them better.

Our conclusion? ID theory is just creationist ideology masquerading as science. The objection that evolution is not “settled science,” and therefore that ID theory must at least be considered, is nonsense. A place in the science curriculum must be earned. Of course, it is theoretically possible that evolutionary theory could be wrong, as any truly scientific theory could be. But this is overwhelmingly unlikely, as it is supported by a plethora of evidence from microbiology up through genetics. Merely to say, “Your theory could be wrong” or “Mine could be right” is not enough in science. One must offer some evidence. There must be some warrant. So even though it is theoretically true that evolution could be wrong, this does not somehow make the case for ID theory any more than it does for the parody religion of Pastafarianism and its “scientific” theory of the Flying Spaghetti Monster, which was invented as brilliant satire by an unemployed physics student during the Kitzmiller trial to illustrate the scientific bankruptcy of ID theory.89 Indeed, if scientific uncertainty requires the acceptance of all alternative theories, must the astronomer also teach Flat Earth theory? Should we go back to caloric and phlogiston? In science, certainty may be unobtainable, but evidence is required.90

Another predictable complaint of ID theorists is that natural selection is “just a theory.” Recall this myth about science from chapter 2. But to say that evolution is a theory does not dishonor it. To have a theory with a strong basis of evidential support—that backs up both its predictions and explanations, and is unified with other things that we believe in science—is a formidable thing. Is it any wonder that ID theory cannot keep up?

Yet let us now ask the provocative question: what if it could? What if there were some actual evidence in support of some prediction made by ID theory? Would we owe it a test then? I think we would. As noted earlier, even fringe claims must sometimes be taken seriously (which does not mean that they should be immediately inserted into the science classroom). This is because science is open to new ideas. In this way, some fringe theorists who bristle at the dismissal of their work as pseudoscience may have a legitimate complaint: where there is evidence to offer, science has no business dismissing an alternative theory based on anything other than evidence. But this means that if “pseudoscientific” theories are to be taken seriously, they must offer publicly available evidence that can be tested by others in the scientific community who do not already agree with the theory. So when they meet this standard, why don’t scientists just go ahead and investigate?

Sometimes they do.

The Princeton Engineering Anomalies Research Lab

In 1979, Robert Jahn, the dean of the School of Engineering and Applied Science at Princeton University, opened a lab to “pursue rigorous scientific study of the interaction of human consciousness with sensitive physical devices, systems, and processes common to contemporary engineering practice.”91 He wanted, in short, to study parapsychology. Dubbed the Princeton Engineering Anomalies Research (PEAR) lab, it spent the next twenty-eight years studying various effects, the most famous being psychokinesis, which is the alleged ability of the human mind to influence physical events.

A skeptic might immediately jump to the conclusion that this is pseudoscience, but remember that the proposal was to study this hypothesis in a scientific way. The PEAR team used random number generator (RNG) machines and asked their operators to try to influence them with their thoughts. And what they found was a slight but statistically significant effect of 0.00025. As Massimo Pigliucci puts it, although this is small, “if it were true it would still cause a revolution in the way we think of the basic behavior of matter and energy.”92

What are we to make of this? First, it is important to remember our statistics:

Effect size is the size of the difference from random. Suppose you had a fair coin, flipped it 10,000 times, and it came up heads 5,000 times. Then you painted the coin red, flipped it another 10,000 times, and this time it came up heads 5,500 times. That difference of 500 is the effect size.

Sample size is how many times you flipped the coin. If you flipped a painted coin only 20 times and it came up heads 11 times, that is not very impressive; it could be due just to chance. But if you flip the coin 10,000 times and it comes up heads 5,500 times, that is pretty impressive.

P-value is the probability that you would see an effect at least as large as the one observed if random chance alone were at work. The reader will remember from our discussion in chapter 5 that p-value is not the same as effect size. The p-value is influenced by the effect size but also by the sample size. If you do your coin flips a large number of times and you still get a weird result, there will be a lower p-value, because the result is unlikely to be due to chance. But effect size can also influence p-value. To get 5,500 heads out of 10,000 coin flips is actually a pretty big effect. With an effect that big, the result is much less likely to be due to randomness, so the p-value goes down.
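
To make this concrete, here is a minimal sketch of the coin arithmetic in Python; the scipy library is assumed to be available, and the counts are the ones from the example above.

    # An exact binomial test on the coin-flip numbers used above: the same
    # 55 percent heads rate is unimpressive at 20 flips but decisive at 10,000.
    from scipy.stats import binomtest

    print(binomtest(11, n=20, p=0.5).pvalue)
    # ~0.82: 11 heads in 20 flips is easily explained by chance
    print(binomtest(5500, n=10000, p=0.5).pvalue)
    # ~1e-23: 5,500 heads in 10,000 flips is almost certainly not chance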

Before we move on to the PEAR results, let’s draw a couple of conclusions from the coin flip example. Remember that p-value does not tell you the cause of an effect, just the chance that you would see it if the null hypothesis were true. So it could be that the red paint you used on the coin was magic, or it could be that the weight distribution of the paint caused the coin to land on heads more often. One can’t tell. What we can tell, however, is that having a larger number of trials means that even a very small effect size can be magnified. Suppose you had a perfectly normal-looking unpainted coin, but it had a very small bias because of the way it was forged. If you flipped this coin one million times, the small bias would be magnified and would show up in the p-value. The effect size would still be small, but the p-value would go down because of the number of flips. Conclusion: it wasn’t a fair coin. Similarly, a large effect size can have a dramatic effect on p-value even with a small number of flips. Suppose you took a fair coin and painted it red, and it came up heads ten times in a row. That is unlikely to be due to random chance. Maybe you used lead paint.
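
The magnification of a tiny bias by a large number of trials is easy to simulate. The sketch below is only an illustration: the bias of 0.502 is a made-up “forging defect,” and scipy is again assumed.

    # Simulate a slightly unfair coin: in 1,000 flips it will typically
    # look fair, but in 1,000,000 flips the tiny effect size produces an
    # overwhelming p-value.
    import random
    from scipy.stats import binomtest

    random.seed(42)
    BIAS = 0.502  # hypothetical heads probability of the miscast coin

    for n in (1_000, 1_000_000):
        heads = sum(random.random() < BIAS for _ in range(n))
        effect = heads / n - 0.5
        p = binomtest(heads, n=n, p=0.5).pvalue
        print(f"n={n:>9,}: effect size {effect:+.4f}, p = {p:.3g}")

Only the sample size differs between the two runs, yet the statistical verdict flips, which is exactly the pattern at work in the PEAR data.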

So what happened in the PEAR lab? They did the equivalent of flipping the coin for twenty-eight years in a row. The effect size was tiny, but the p-value was minuscule because of the number of trials. This shows that the effect could not have been due to random chance, right? Actually, no.

Although their hearts may have been in the right place—and I do not want to accuse the folks at the PEAR lab of any fraud or even bad intent—their results may have been due to a kind of massive unintentional p-hacking. In Pigliucci’s discussion of PEAR research in his book Nonsense on Stilts, it becomes clear that the entire finding depends crucially on whether the random number generators were actually random.93

What evidence do we have that they were not? But this is the wrong question to ask. Remember that one cannot tell from a result what might have caused it, but before one embraces the extraordinary hypothesis that human minds can affect physical events, one must rule out all other plausible confounding factors.94 Recall that once we painted the coins, we could not tell whether the effect was due to the “magic” properties of the paint or its differential weight on one side of the coin. Using Occam’s razor, guess which one a skeptical scientist is going to embrace? Similarly, the effect at the PEAR lab could have been due either to psychokinesis or to a faulty RNG. Until we rule out the latter, it is possible that all those years working with RNGs in the Princeton lab do not show that psychokinesis is possible at all, so much as they show that it is physically impossible to generate random numbers! As Robert Park puts it in Voodoo Science, “it is generally believed that there are no truly random machines. It may be, therefore, that the lack of randomness only begins to show up after many trials.”95

How can we differentiate between these hypotheses? Why are there no random machines? This is an unanswered question, which goes to the heart of whether the psychokinetic hypothesis deserves further research (or indeed how it could even be tested). But in the meantime, it is important also to examine other methodological factors in the PEAR research. First, to their credit, they did ask other labs to try to replicate their results. These attempts failed, but the willingness to invite replication is at least indicative of a good scientific attitude.96 What about peer review? This is where it gets trickier. As the manager of the PEAR lab once put it, “We submitted our data for review to very good journals, but no one would review it. We have been very open with our data. But how do you get peer review when you don’t have peers?”97 What about controls? There is some indication that controls were implemented, but they were insufficient to meet the parameters demanded by PEAR’s critics.

Perhaps the most disconcerting thing about PEAR is that suggestions from critics that should have been considered were routinely ignored. Physicist Bob Park reports, for example, that he suggested to Jahn two types of experiments that would have bypassed the main criticisms aimed at PEAR. Why not do a double-blind experiment? asked Park. Have a second RNG determine the task of the operator, and do not let this determination be known to the one recording the results. This could have eliminated the charge of experimenter bias.98
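
In code, the logic of Park’s suggested design might look something like the following sketch; every name and number here is hypothetical, since the actual PEAR protocols were more involved.

    # A second RNG assigns each session's task, and the assignment stays
    # hidden from the person recording results until the data are locked.
    # This removes the recorder's expectations as a source of bias.
    import random
    import secrets

    def run_session(session_id: int) -> dict:
        task = secrets.choice(["aim_high", "aim_low"])  # concealed assignment
        # Placeholder for a real trial block: here, 1,000 unbiased bits.
        heads = sum(random.random() < 0.5 for _ in range(1000))
        return {"session": session_id, "task": task, "heads": heads}

    # Unblind only after all sessions are recorded, then compare groups.
    sessions = [run_session(i) for i in range(200)]
    high = [s["heads"] for s in sessions if s["task"] == "aim_high"]
    low = [s["heads"] for s in sessions if s["task"] == "aim_low"]
    print(f"mean heads (aim high): {sum(high) / len(high):.1f}")
    print(f"mean heads (aim low):  {sum(low) / len(low):.1f}")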

While there has never been an allegation of fraud in PEAR research, it was at least suspicious that fully half of the total effect size came from the trials of a single operator across all twenty-eight years, presumably an employee at the PEAR lab. Perhaps that individual merely had greater psychic abilities than the other operators. One may never know, for the PEAR lab closed for good in 2007. As Jahn put it,

It’s time for a new era, for someone else to figure out what the implications of our results are for human culture, for future study, and—if the findings are correct—what they say about our basic scientific attitude.99

Even while one may remain skeptical of PEAR’s results, I find it cheering that the research was done in the first place and that it was taken seriously enough to be critiqued. While some called the lab an embarrassment to Princeton, I am not sure I can agree. The scientific attitude demands both rigor on the part of the researchers and an openness, on the part of the wider scientific community, to consider scientifically produced data. Were these results scientific or were they pseudoscientific? I cannot bring myself to call them pseudoscience. I do not think that the researchers at PEAR were merely pretending to be scientists any more than those doing cold fusion or string theory. Perhaps they made mistakes in their methodology. Indeed, if it turns out to be the case that there actually is no such thing as a random number generator, perhaps the PEAR team should be credited with making this discovery! Then again, it would have been nice to see an elementary form of control, where they let one RNG run all by itself (perhaps in another room) for twenty-eight years, with no attempt to influence it, and measured this against their experimental result. If the control machine showed 50 percent heads and the experimental one showed 50.00025 percent, I would be more inclined to take the results seriously (both as showing that psychokinesis was possible and that truly random machines were too).
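
As a rough illustration of why such a control would have to be enormous, here is a hedged sketch of the comparison; all counts are hypothetical, built from the 50 percent versus 50.00025 percent figures in the example just given.

    # Compare an untouched control RNG against the "influenced" one with
    # a two-sample proportion test.
    import math

    def two_proportion_z(h1: int, n1: int, h2: int, n2: int) -> float:
        """z-statistic for the difference between two proportions."""
        p1, p2 = h1 / n1, h2 / n2
        pooled = (h1 + h2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    n = 10_000_000                             # trials per machine (hypothetical)
    control_heads = 5_000_000                  # exactly 50 percent
    experimental_heads = round(n * 0.5000025)  # 50.00025 percent
    print(f"z = {two_proportion_z(experimental_heads, n, control_heads, n):.3f}")
    # z comes out around 0.01: ten million trials per machine cannot
    # detect a difference this small; on the order of a trillion trials
    # would be needed before it rose above the noise.

The same arithmetic cuts both ways: a hardware bias of the same size would be invisible in any control of practical length, which is just Park’s point about randomness failures showing up only after many trials.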

Conclusion

Is it possible to think that one has the scientific attitude, but not really have it? Attitudes are funny things. Perhaps I am the only one who knows how I feel about using empirical evidence; only my private thoughts can tell me whether I am truly testing or insulating my theory. And even here I must respect the fact that there are numerous levels of self-awareness, complicated by the phenomenon of self-delusion. Yet the scientific attitude can also be measured through one’s actions. If I profess to have the scientific attitude but then refuse to consider alternative evidence or to make falsifiable predictions, I can be fairly criticized, whether I feel that my intentions are pure or not. The difference that marks off denialism and pseudoscience on one side from science on the other is more than just what is in the heart of the scientist or even the group of scientists who make up the scientific community. It is also in the actions taken by the individual scientist, and the members of his or her profession, who make good on the idea that science really does care about empirical evidence. As elsewhere in life, we measure attitudes not just by thought but also by behavior.

Notes