
I propose to share with you a few reflections about the nature of scientific inquiry and its importance for public life. At a superficial level one could say that I will be addressing some aspects of the relation between science and society; but, as I hope will become clear, my aim is to discuss the importance, not so much of science, but of what one might call the scientific worldview1—a concept that goes far beyond the specific disciplines that we usually think of as “science”—in humanity's collective decision-making. I want to argue that clear thinking, combined with a respect for evidence—especially inconvenient and unwanted evidence, evidence that challenges our preconceptions—is of the utmost importance to the survival of the human race in the twenty-first century, and especially so in any polity that professes to be a democracy.

Of course, you might think that calling for clear thinking and a respect for evidence is a bit like advocating Motherhood and Apple Pie (if you'll pardon me this Americanism)—and in a sense you'd be right. Hardly anyone will openly defend muddled thinking or disrespect for evidence. Rather, what people do is to surround these confused practices with a fog of verbiage designed to conceal from their listeners—and in most cases, I would imagine, from themselves as well—the true implications of their way of thinking. George Orwell got it right when he observed that the main advantage of speaking and writing clearly is that “when you make a stupid remark its stupidity will be obvious, even to yourself.”2 So I hope that I will be as clear tonight as Orwell would have wished. And I intend to illustrate disrespect for evidence with a variety of examples—coming from the Left and the Right and the Center—starting from some fairly lightweight targets and proceeding to heavier ones. I aim to show that the implications of taking seriously an evidence-based worldview are rather more radical than many people realize.

So let me start by drawing some important distinctions. The word science, as commonly used, has at least four distinct meanings: it denotes an intellectual endeavor aimed at a rational understanding of the natural and social world; it denotes a corpus of currently accepted substantive knowledge; it denotes the community of scientists, with its mores and its social and economic structure; and, finally, it denotes applied science and technology. In this essay I will be concentrating on the first two aspects, with some secondary references to the sociology of the scientific community; I will not address technology at all. Thus, by science I mean, first of all, a worldview giving primacy to reason and observation and, second, a methodology aimed at acquiring accurate knowledge of the natural and social world. This methodology is characterized, above all else, by the critical spirit: namely, the commitment to the incessant testing of assertions through observations and/or experiments—the more stringent the tests, the better—and to revising or discarding those theories that fail the test.3 One corollary of the critical spirit is fallibilism: namely, the understanding that all our empirical knowledge is tentative, incomplete, and open to revision in the light of new evidence or cogent new arguments (though, of course, the most well-established aspects of scientific knowledge are unlikely to be discarded entirely).

It is important to note that well-tested theories in the mature sciences are supported in general by a powerful web of interlocking evidence coming from a variety of sources. Moreover, the progress of science tends to link these theories into a unified framework, so that (for instance) biology has to be compatible with chemistry, and chemistry with physics. The philosopher Susan Haack has illuminatingly analogized science to the problem of completing a crossword puzzle, in which any modification of one word will entail changes in interlocking words; in most cases the required changes will be fairly local, but in some cases it may be necessary to rework large parts of the puzzle.4

I stress that my use of the term “science” is not limited to the natural sciences, but includes investigations aimed at acquiring accurate knowledge of factual matters relating to any aspect of the world by using rational empirical methods analogous to those employed in the natural sciences. (Please note the limitation to questions of fact. I intentionally exclude from my purview questions of ethics, aesthetics, ultimate purpose, and so forth.) Thus, “science” (as I use the term5) is routinely practiced not only by physicists, chemists, and biologists, but also by historians, detectives, plumbers, and indeed all human beings in (some aspects of) our daily lives.6 (Of course, the fact that we all practice science from time to time does not mean that we all practice it equally well, or that we practice it equally well in all areas of our lives.)

The extraordinary successes of the natural sciences over the last four hundred years in learning about the world, from quarks to quasars and everything in between, are well known to every modern citizen: science is a fallible yet enormously successful method for obtaining objective (albeit approximate and incomplete) knowledge of the natural (and to a lesser extent, the social) world.

But, surprisingly, not everyone accepts this; and here I come to my first—and most lightweight—example of adversaries of the scientific worldview, namely academic postmodernists and extreme social constructivists. Such people insist that so-called scientific knowledge does not in fact constitute objective knowledge of a reality external to ourselves, but is a mere social construction, on a par with myths and religions, which therefore have an equal claim to validity. If such a view seems so implausible that you wonder whether I am somehow exaggerating, consider the following assertions by prominent sociologists:

“The validity of theoretical propositions in the sciences is in no way affected by factual evidence.” (Kenneth Gergen)7

“The natural world has a small or non-existent role in the construction of scientific knowledge.” (Harry Collins)8

“For the relativist [such as ourselves] there is no sense attached to the idea that some standards or beliefs are really rational as distinct from merely locally accepted as such.” (Barry Barnes and David Bloor)9

“Since the settlement of a controversy is the cause of Nature's representation not the consequence, we can never use the outcome—Nature—to explain how and why a controversy has been settled.” (Bruno Latour)10

“Science legitimates itself by linking its discoveries with power, a connection which determines (not merely influences) what counts as reliable knowledge.” (Stanley Aronowitz)11

Statements as clear-cut as these are, however, rare in the academic postmodernist literature. More often one finds assertions that are ambiguous but can nevertheless be interpreted (and quite often are interpreted) as implying what the foregoing quotations make explicit: that science as I have defined it is an illusion, and that the purported objective knowledge provided by science is largely or entirely a social construction. For example, Katherine Hayles, professor of literature at Duke University and former president of the Society for Literature and Science, writes the following as part of her feminist analysis of fluid mechanics:

Despite their names, conservation laws are not inevitable facts of nature but constructions that foreground some experiences and marginalize others…. Almost without exception, conservation laws were formulated, developed, and experimentally tested by men. If conservation laws represent particular emphases and not inevitable facts, then people living in different kinds of bodies and identifying with different gender constructions might well have arrived at different models for [fluid] flow.12

(What an interesting idea: perhaps “people living in different kinds of bodies” will learn to see beyond those masculinist laws of conservation of energy and momentum.) And Andrew Pickering, a prominent sociologist of science, asserts the following in his otherwise-excellent history of modern elementary-particle physics:

Given their extensive training in sophisticated mathematical techniques, the preponderance of mathematics in particle physicists’ accounts of reality is no more hard to explain than the fondness of ethnic groups for their native language. On the view advocated in this chapter, there is no obligation upon anyone framing a view of the world to take account of what twentieth-century science has to say.13

But let me not spend time beating a dead horse, as the arguments against postmodernist relativism are by now fairly well known: rather than plugging my own writings, let me suggest the superb book by the Canadian philosopher of science James Robert Brown, Who Rules in Science? An Opinionated Guide to the Wars.14 Suffice it to say that postmodernist writings systematically confuse truth with claims of truth, fact with assertions of fact, and knowledge with pretensions to knowledge—and then sometimes go so far as to deny that these distinctions have any meaning.

Now, it's worth noting that the postmodernist writings I have just quoted all come from the 1980s and early 1990s. In fact, over the past decade, academic postmodernists and social constructivists seem to have backed off from the most extreme views that they previously espoused. Perhaps I and like-minded critics of postmodernism can take some small credit for this, through initiating a public debate that shed a harsh light of criticism on these views and forced some strategic retreats. But most of the credit, I think, has to be awarded to George W. Bush and his friends, who showed just where science bashing can lead in the real world.15 Nowadays, even sociologist of science Bruno Latour, who spent several decades stressing the so-called “social construction of scientific facts,”16 laments the ammunition he fears he and his colleagues have given to the Republican right wing, helping them to deny or obscure the scientific consensus on global climate change, biological evolution, and a host of other issues.17 He writes:

While we spent years trying to detect the real prejudices hidden behind the appearance of objective statements, do we now have to reveal the real objective and incontrovertible facts hidden behind the illusion of prejudices? And yet entire PhD programs are still running to make sure that good American kids are learning the hard way that facts are made up, that there is no such thing as natural, unmediated, unbiased access to truth, that we are always prisoners of language, that we always speak from a particular standpoint, and so on, while dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives.18

That, of course, is exactly the point I was trying to make back in 1996 about social construction talk taken to subjectivist extremes. I hate to say I told you so, but I did—as did, several years before me, Noam Chomsky, who recalled:

In a not-so-distant past, left-wing intellectuals took an active part in the lively working-class culture. Some sought to compensate for the class character of the cultural institutions through programs of workers’ education, or by writing bestselling books on mathematics, science, and other topics for the general public. Remarkably, their left-wing counterparts today often seek to deprive working people of these tools of emancipation, informing us that the “project of the Enlightenment” is dead, that we must abandon the “illusions” of science and rationality—a message that will gladden the hearts of the powerful, who are delighted to monopolize these instruments for their own use.19

Let me now pass to a second set of adversaries of the scientific worldview, namely the advocates of pseudoscience.20 This is of course an enormous area, so let me focus on one socially important aspect of it, namely so-called “complementary and alternative therapies” in health and medicine. And within this, I'd like to look in a bit of detail at one of the most widely used “alternative” therapies, namely homeopathy—which is an interesting case because its advocates sometimes claim that there is evidence from meta-analyses of clinical trials that homeopathy works.

Now, one basic principle in all of science is GIGO: garbage in, garbage out. This principle is particularly important in statistical meta-analysis because if you have a bunch of methodologically poor studies, each with small sample size, and then subject them to meta-analysis, what can happen is that the systematic biases in each study—if they mostly point in the same direction—can reach statistical significance when the studies are pooled. And this possibility is particularly relevant here because meta-analyses of homeopathy invariably find an inverse correlation between the methodological quality of the study and the observed effectiveness of homeopathy: that is, the sloppiest studies find the strongest evidence in favor of homeopathy.21 When one restricts attention only to methodologically sound studies—those that include adequate randomization and double-blinding, predefined outcome measures, and clear accounting for dropouts—the meta-analyses find no statistically significant effect (whether positive or negative) of homeopathy compared to placebo.22
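The pooling effect just described can be made concrete with a back-of-the-envelope calculation. The sketch below is my own illustration, not part of the original argument, and the bias and standard-error numbers are invented for the example: under a simple equal-weight fixed-effect pooling, the random noise shrinks as studies are added, but a shared systematic bias does not.

```python
import math

# Illustrative numbers only: suppose each small study carries the same
# systematic bias b (in standardized units) and the same per-study
# standard error se.
def z_score(bias, se_per_study, n_studies):
    """z-statistic of an equal-weight, fixed-effect pooled estimate."""
    pooled_se = se_per_study / math.sqrt(n_studies)  # noise shrinks with n...
    return bias / pooled_se                          # ...but the bias does not

b, se = 0.2, 0.5           # hypothetical: modest bias, noisy small studies
print(z_score(b, se, 1))   # 0.4 -> one study alone is nowhere near significant
print(z_score(b, se, 25))  # 2.0 -> 25 pooled studies cross the ~5% threshold
```

Pooling, in other words, manufactures precision but not validity: garbage in, garbage out.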

But the lack of convincing statistical evidence for the efficacy of homeopathy is not, in fact, the main reason why I and other scientists are skeptical (to put it mildly) about homeopathy; and it's worth taking a few moments to explain this main reason because it provides some important insights into the nature of science. Most people—perhaps even most users of homeopathic remedies—do not clearly understand what homeopathy is. They probably think of it as a species of herbal medicine. Of course plants contain a wide variety of substances, some of which can be biologically active (with either beneficial or harmful consequences, as Socrates learned). But homeopathic remedies, by contrast, are pure water and starch: the alleged “active ingredient” is so highly diluted that in most cases not a single molecule remains in the final product.
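To see just how little remains, consider the arithmetic of a standard homeopathic dilution—my own back-of-the-envelope sketch, not the essay's. A “30C” preparation means thirty successive 1:100 dilutions, a factor of 10^60, whereas even a full mole of starting material contains only about 6 × 10^23 molecules.

```python
AVOGADRO = 6.022e23  # molecules in one mole of a substance

def molecules_remaining(moles_start, c_dilutions):
    """Expected number of molecules left after repeated 1:100 ('C') dilutions."""
    return moles_start * AVOGADRO / 100.0 ** c_dilutions

# Start, generously, with a full mole of "active ingredient", then apply
# the common 30C dilution: the expected count comes out around 6e-37
# molecules -- i.e., essentially no chance that even one molecule survives.
print(molecules_remaining(1.0, 30))
```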

And so, the fundamental reason for rejecting homeopathy is that there is no plausible mechanism by which homeopathy could possibly work, unless one rejects everything that we have learned over the last two hundred years about physics and chemistry: namely, that matter is made of atoms, and that the properties of matter—including its chemical and biological effects—depend on its atomic structure. There is simply no way that an absent “ingredient” could have a therapeutic effect. High-quality clinical trials find no difference between homeopathy and placebo because homeopathic remedies are placebos.23

Now, advocates of homeopathy sometimes respond to this argument by asserting that the curative effect of homeopathic remedies arises from a “memory” of the vanished active ingredient that is somehow retained by the water in which it was dissolved (and then by the starch when the water is evaporated!). But the difficulty, once again, is not simply the lack of any reliable experimental evidence for such a “memory of water.” Rather, the problem is that the existence of such a phenomenon would contradict well-tested science, in this case the statistical mechanics of fluids. The molecules of any liquid are constantly being bumped by other molecules—what physicists call thermal fluctuations—so that they quickly lose any “memory” of their past configuration. (Here when I say “quickly,” I'm talking picoseconds, not months.)

In short, all the millions of experiments confirming modern physics and chemistry also constitute powerful evidence against homeopathy. For this reason, the flaw in the justification of homeopathy is not merely the lack of statistical evidence showing the efficacy of homeopathic remedies over placebo at the 95 percent or 99 percent confidence level. Even a clinical trial at the 99.99 percent confidence level would not begin to compete with all the evidence in favor of modern physics and chemistry. Extraordinary claims require extraordinary evidence. (And in the unlikely event that such convincing evidence is ever forthcoming, the person who provides it will assuredly win a triple Nobel Prize in physics, chemistry, and biology—beating out Marie Curie, who won only two.)
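The “extraordinary claims” point can be restated in Bayesian terms. The numbers below are purely hypothetical, my own illustration: grant homeopathy a prior probability of one in a million, and grant a single striking trial a generous Bayes factor of 10^4 in its favor (roughly the most that a result “significant at the 99.99 percent level” could supply); the posterior probability still comes out under one percent.

```python
def posterior_prob(prior, bayes_factor):
    """Posterior probability of a hypothesis, via the odds form of Bayes' theorem."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical numbers: prior of 1e-6 that an implausible therapy works,
# evidence worth a Bayes factor of 1e4 -> posterior still below 1%.
print(round(posterior_prob(1e-6, 1e4), 4))  # 0.0099
```

A single trial, however impressive its p-value, must be weighed against everything else we know; that is all “extraordinary claims require extraordinary evidence” really says.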

Despite the utter scientific implausibility of homeopathy, homeopathic products can be marketed in the United States without having to meet the safety and efficacy requirements that are demanded of all other drugs (because they got a special dispensation in the Food, Drug, and Cosmetic Act of 1938). Indeed, US government regulations require each homeopathic remedy that is marketed over-the-counter (OTC) to state, on the label, at least one medical condition that the product is intended to treat—but without requiring any evidence that the product is actually efficacious in treating that condition!24 The laws in other Western countries are equally scandalous, if not more so.25

Fortunately, it seems that this particular pseudoscience has thus far made only modest inroads in the United States—in contrast to its wide penetration in France and Germany, where homeopathic products are packaged like real medicines and sold side by side with them in virtually every pharmacy. But other and more dangerous pseudosciences are endemic in the United States: prominent among these is the denial of biological evolution. It is essential to begin our analysis by distinguishing clearly between three very different issues: namely, the fact of the evolution of biological species; the general mechanisms of that evolution; and the precise details of those mechanisms. Of course, one of the favorite tactics of deniers of evolution is to confuse these three aspects.

Among biologists, and indeed among the general educated public, the fact that biological species have evolved is established beyond any reasonable doubt. Most species that existed at various times in the past no longer exist; and conversely, most species that exist today did not exist for most of the earth's past. In particular, modern Homo sapiens did not exist one million years ago, and conversely, other species of hominids, such as Homo erectus, existed then and are now extinct. The fossil record is unequivocal on this point, and this has been well understood since at least the late nineteenth century.

A more subtle issue concerns the mechanisms of biological evolution; and here our modern scientific understanding took a longer time to develop. Though the basic idea—descent with modification, combined with natural selection—was set forth with eminent clarity by Darwin in his 1859 book, On the Origin of Species, the precise mechanisms underlying Darwinian evolution were not fully elucidated until the development of genetics and molecular biology in the first half of the twentieth century. Nowadays we have a good understanding of the overall process: errors in copying DNA during reproduction cause mutations; some of these mutations either increase or decrease the organism's success at survival and reproduction; natural selection acts to increase the frequency in the gene pool of those mutations that increase the organism's reproductive success; as a result, over time, species develop adaptations to ecological niches; old species die out and new species arise. This general picture is nowadays established beyond any reasonable doubt, not only by paleontology but also by laboratory experiments.
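The mutation-plus-selection loop just summarized can be caricatured in a few lines of code. This is a deliberately minimal, deterministic sketch of my own (real population genetics adds drift, diploidy, and much else): an allele conferring even a small reproductive advantage rises predictably in frequency over the generations.

```python
def next_freq(p, s):
    """Allele frequency after one generation of selection (relative fitness 1 + s)."""
    w = 1.0 + s
    return p * w / (p * w + (1.0 - p))

p = 0.01                    # a new beneficial mutation starts out rare
for generation in range(500):
    p = next_freq(p, 0.05)  # a 5% reproductive advantage per generation
print(round(p, 3))          # the mutation is driven nearly to fixation
```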

Of course, when it comes to the precise details of evolutionary theory, there is still lively debate among specialists (just as there is in any active scientific field): for instance, concerning the quantitative importance of group selection or of genetic drift. But these debates in no way cast doubt on either the fact of evolution or on its general mechanisms. Indeed, as the celebrated geneticist Theodosius Dobzhansky pointed out in a 1973 essay, “nothing in biology makes sense except in the light of evolution.”26

Everything that I have just said is, of course, common knowledge to anyone who has taken a half-decent course in high-school biology. The trouble is, fewer and fewer people—at least in the United States—nowadays have the good fortune to be exposed to a half-decent course in high-school biology. And the cause of that scientific illiteracy is (need I say it?) politics: more precisely, politics combined with religion. Some people reject evolution because they find it incompatible with their religious beliefs. And in countries where such people are numerous or politically powerful or both, politicians kowtow to them and suppress the teaching of evolution in the public schools—with the result that the younger generation is denied the opportunity to evaluate the scientific evidence for themselves, and the scientific ignorance of the populace is faithfully27 reproduced in future generations.

In 2005, a fascinating cross-cultural survey was carried out in thirty-two European countries, along with the United States and Japan.28 Respondents were read the statement, “Human beings, as we know them, developed from earlier species of animals,” and were asked whether they considered it to be true, false, or were not sure. Of all thirty-four countries, the United States holds thirty-third place for belief in evolution (with roughly equal numbers responding “true” and “false”). Only Turkey—where the secular heritage is under increasing assault from the elected Islamist government and its supporters—shows less belief in evolution than the United States. (Please note that this question concerns merely the fact of evolution, not its mechanisms.)

Of course, not all religious people reject evolution. Fundamentalist Christians do reject evolution, as do many Muslims and orthodox Jews, but Catholics and liberal Protestants have come (over time and perhaps grudgingly) to accept evolution, as have some Muslims and most Jews.29 Therefore, from a purely tactical point of view, nonfundamentalist religious people are the allies of scientists in their struggle to defend the honest teaching of science.

And so, if I were tactically minded, I would stress—as most scientists do—that science and religion need not come into conflict. I might even go on to argue, following Stephen Jay Gould, that science and religion should be understood as “nonoverlapping magisteria”: science dealing with questions of fact, religion dealing with questions of ethics and meaning.30 But I can't in good conscience proceed in this way, for the simple reason that I don't think the arguments stand up to careful logical examination. Why do I say that? For the details, I have to refer you to a seventy-five-page chapter in my book,31 but let me at least try to sketch now the main reasons why I think that science and religion are fundamentally incompatible ways of looking at the world.32

When analyzing religion, a few distinctions are perhaps in order. For starters, religious doctrines typically have two components: a factual part, consisting of a set of claims about the universe and its history; and an ethical part, consisting of a set of prescriptions about how to live. In addition, all religions make, at least implicitly, epistemological claims concerning the methods by which humans can obtain reasonably reliable knowledge of factual or ethical matters. These three aspects of each religion obviously need to be evaluated separately.

Furthermore, when discussing any set of ideas, it is important to distinguish between the intrinsic merit of those ideas, the objective role they play in the world, and the subjective reasons for which various people defend or attack them.

Alas, much discussion of religion fails to make these elementary distinctions: for instance, confusing the intrinsic merit of an idea with the good or bad effects that it may have in the world. Here I want to address only the most fundamental issue, namely, the intrinsic merit of the various religions’ factual doctrines. And within that, I want to focus on the epistemological question—or to put it in less fancy language, the relationship between belief and evidence. After all, those who believe in their religion's factual doctrines presumably do so for what they consider to be good reasons. So it's sensible to ask: What are these alleged good reasons?

Each religion makes scores of purportedly factual assertions about everything from the creation of the universe to the afterlife. But on what grounds can believers presume to know that these assertions are true? The reasons they give are various, but the ultimate justification for most religious people's beliefs is a simple one: we believe what we believe because our holy scriptures say so. But how, then, do we know that our holy scriptures are factually accurate? Because the scriptures themselves say so.33 Theologians specialize in weaving elaborate webs of verbiage to avoid saying anything quite so bluntly, but this gem of circular reasoning really is the epistemological bottom line on which all “faith” is grounded. In the words of Pope John Paul II, “By the authority of his absolute transcendence, God who makes himself known is also the source of the credibility of what he reveals.”34 It goes without saying that this begs the question of whether the texts at issue really were authored or inspired by God, and on what grounds one knows this. “Faith” is not in fact a rejection of reason but simply a lazy acceptance of bad reasons. “Faith” is the pseudo-justification that some people trot out when they want to make claims without the necessary evidence.

But of course we never apply these lax standards of evidence to the claims made in the other fellow's holy scriptures: when it comes to religions other than one's own, religious people are as rational as everyone else. Only our own religion, whatever it may be, seems to merit some special dispensation from the general standards of evidence. And here, it seems to me, is the crux of the conflict between religion and science. Not the religious rejection of specific scientific theories (be it heliocentrism in the seventeenth century or evolutionary biology today); over time most religions do find some way to make peace with well-established science. Rather, the scientific worldview and the religious worldview come into conflict over a far more fundamental question: namely, what constitutes evidence.

Science relies on publicly reproducible sense experience (that is, experiments and observations) combined with rational reflection on those empirical observations. Religious people acknowledge the validity of that method, but then claim to be in possession of additional methods for obtaining reliable knowledge of factual matters—methods that go beyond the mere assessment of empirical evidence—such as intuition, revelation, or the reliance on sacred texts. But the trouble is this: What good reason do we have to believe that such methods work, in the sense of steering us systematically (even if not invariably) toward true beliefs rather than toward false ones?35 At least in the domains where we have been able to test these methods—astronomy, geology, and history, for instance—they have not proven terribly reliable. Why should we expect them to work any better when we apply them to problems that are even more difficult, such as the fundamental nature of the universe?

Last but not least, these nonempirical methods suffer from an insurmountable logical problem: What should we do when different people's intuitions or revelations conflict? How can we know which of the many purportedly sacred texts—whose assertions frequently contradict one another—are in fact sacred?

In all these examples I have been at pains to distinguish clearly between factual matters and ethical or aesthetic matters, because the epistemological issues they raise are so different. And I have restricted my discussion almost entirely to factual matters, simply because of the limitations of my own competence. But if I am preoccupied by the relation between belief and evidence, it is not solely for intellectual reasons—not solely because I'm a “grumpy old fart who aspire[s] to the sullen joy of having it known that [I] don't suffer fools gladly”36 (to borrow the words of my friend and fellow gadfly Norm Levitt, who died suddenly four years ago at the young age of sixty-six). Rather, my concern that public debate be grounded in the best available evidence is, above all else, ethical.

To illustrate the connection I have in mind between epistemology and ethics, let me start with a fanciful example: Suppose that the leader of a militarily powerful country believes, sincerely but erroneously, on the basis of flawed “intelligence,” that a smaller country possesses threatening weapons of mass destruction; and suppose further that he launches a preemptive war on that basis, killing tens of thousands of innocent civilians as “collateral damage.” Aren't he and his supporters ethically culpable for their epistemic sloppiness?

I stress that this example is fanciful. The overwhelming preponderance of currently available evidence suggests that the Bush and Blair administrations first decided to overthrow Saddam Hussein, and then sought a publicly presentable pretext, using dubious or even forged “intelligence” to “justify” that pretext and to mislead Congress, Parliament, and the public into supporting that war.37

Which brings me to the last, and in my opinion most dangerous, set of adversaries of the evidence-based worldview in the contemporary world: namely, propagandists, public-relations flacks, and spin doctors, along with the politicians and corporations who employ them—in short, all those whose goal is not to analyze honestly the evidence for and against a particular policy, but is, rather, simply to manipulate the public into reaching a predetermined conclusion by whatever technique will work, however dishonest or fraudulent.

So the issue here is no longer mere muddled thinking or sloppy reasoning; it is fraud.

The Oxford English Dictionary defines “fraud” as “the using of false representations to obtain an unjust advantage or to injure the rights or interests of another.”38 In the Anglo-American common law, a “false representation” can take many forms, including:

• A false statement of fact, known to be false at the time it was made;39

• A statement of fact with no reasonable basis to make that statement;40

• A promise of future performance made with an intent, at the time the promise was made, not to perform as promised;41

• An expression of opinion that is false, made by one claiming or implying to have special knowledge of the subject matter of the opinion—where “special knowledge” means knowledge or information superior to that possessed by the other party, and to which the other party did not have equal access.42

Anything here sound familiar? These are the standards that we would use if George Bush and Tony Blair had sold us a used car. In fact, they sold us a war that has cost the lives of 179 British soldiers, 4,486 American soldiers, and somewhere between 112,000 and 600,000 Iraqis43—a human toll, that is, of somewhere between 35 and 200 September 11ths; that has cost the American taxpayers a staggering $810 billion (with ultimate estimates ranging from $1–3 trillion);44 and that has strengthened both al-Qaeda and Iran—in short, a war that may well turn out to be the greatest foreign-policy blunder of American history. (Of course the British have a longer history, and hence a longer history of blunders to compete with.)

Now, in the common law there are in fact two distinct torts of misrepresentation: negligent misrepresentation and fraudulent misrepresentation. Fraudulent misrepresentation is of course difficult to prove because it involves the state of mind of the person making the misrepresentation, i.e., what he actually knew or believed at the time of the false statement.45 Which means that the question becomes (as it was in the case of an earlier American president who stood accused of far lesser crimes and misdemeanors): What did George Bush and Tony Blair know and when did they know it? Unfortunately, the documents that could elucidate this question are top secret, so we may not know the answer for fifty years, if ever. But enough documents have been leaked so far to support, I think, a verdict of fraudulent misrepresentation.46

Now, all this is very likely old hat to most readers. We know perfectly well that our politicians (or at least some of them) lie to us; we take it for granted; we are inured to it. And that may be precisely the problem. Perhaps we have become so inured to political lies—so hard-headedly cynical—that we have lost our ability to become appropriately outraged. We have lost our ability to call a spade a spade, a lie a lie, a fraud a fraud. Instead we call it “spin.”47

We have now traveled a long way from “science,” understood narrowly as physics, chemistry, biology, and the like. But the whole point is that any such narrow definition of science is misguided. We live in a single real world; the administrative divisions used for convenience in our universities do not in fact correspond to any natural philosophical boundaries. It makes no sense to use one set of standards of evidence in physics, chemistry, and biology, and then suddenly relax your standards when it comes to medicine, religion, or politics. Lest this sound to you like a scientist's imperialism, I want to stress that it is exactly the contrary. As the philosopher Susan Haack lucidly observes,

Our standards of what constitutes good, honest, thorough inquiry and what constitutes good, strong, supportive evidence are not internal to science. In judging where science has succeeded and where it has failed, in what areas and at what times it has done better and in what worse, we are appealing to the standards by which we judge the solidity of empirical beliefs, or the rigor and thoroughness of empirical inquiry, generally.48

The bottom line is that science is not merely a bag of clever tricks that turn out to be useful in investigating some arcane questions about the inanimate and biological worlds. Rather, the natural sciences are nothing more or less than one particular application—albeit an unusually successful one—of a more general rationalist worldview, centered on the modest insistence that empirical claims must be substantiated by empirical evidence.

Conversely, the philosophical lessons learned from four centuries of work in the natural sciences can be of real value—if properly understood—in other domains of human life. Of course, I am not suggesting that historians or policymakers should use exactly the same methods as physicists—that would be absurd. But neither do biologists use precisely the same methods as physicists; nor, for that matter, do biochemists use the same methods as ecologists, or solid-state physicists as elementary-particle physicists. The detailed methods of inquiry must of course be adapted to the subject matter at hand. What remains unchanged in all areas of life, however, is the underlying philosophy: namely, to constrain our theories as strongly as possible by empirical evidence, and to modify or reject those theories that fail to conform to the evidence. That is what I mean by the scientific worldview.

It is because of this general philosophical lesson, far more than any specific discoveries, that the natural sciences have had such a profound effect on human culture since the time of Galileo and Francis Bacon. The affirmative side of science, consisting of its well-verified claims about the physical and biological world, may be what first springs to mind when people think about “science,” but it is the critical and skeptical side of science that is the most profound, and the most intellectually subversive. The scientific worldview inevitably comes into conflict with all nonscientific modes of thought that make purportedly factual claims about the world. And how could it be otherwise? After all, scientists are constantly subjecting their colleagues’ theories to severe conceptual and empirical scrutiny. On what grounds could one reject phlogistic chemistry, the fixity of species, or Newton's particle theory of light—not to mention thousands of other plausible but wrong scientific theories—and yet accept astrology, homeopathy, or the virgin birth?

The critical thrust of science even extends beyond the factual realm, to ethics and politics. Of course, as a logical matter one cannot derive an “ought” from an “is.”49 But historically—starting in the seventeenth and eighteenth centuries in Europe and then spreading gradually to more or less the entire world—scientific skepticism has played the role of an intellectual acid, slowly dissolving the irrational beliefs that legitimated the established social order and its supposed authorities, be they the priesthood, the monarchy, the aristocracy, or allegedly superior races and social classes.50 Four hundred years later, it seems sadly evident that this revolutionary transition from a dogmatic to an evidence-based worldview is very far from being complete.