There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
—MARK TWAIN
“I’m not a doctor, but I play one on TV!” “Nine out of ten doctors recommend . . .” These are common phrases that remind us of just how much science is revered by the general public—with good reason, but also with some glaring exceptions. There are excellent reasons to hold science in high regard, from its fundamental discoveries about the origin of humanity and the origin of the cosmos to the countless technological and medical benefits that have originated from scientific research. Most people do feel—if asked in the abstract—that scientists as a group deserve the high reputation that their profession has achieved in modern society. Moreover, some scientists have even become household names and appear in societal discourse outside the realm of science, from Albert Einstein to Stephen Hawking, Charles Darwin, and Richard Dawkins.
When it comes to the specifics, however, a large portion of the public seems to ignore or even be downright hostile to a number of scientific ideas. About 30 percent of Americans believe in astrology, a notion that was debunked centuries ago; about 40 percent still believe in creationism, an idea that has been rejected by science for more than a century and a half; in a recent count (a Gallup poll in March 2010) only slightly more than 50 percent of Americans agreed that climate change is real and largely caused by humans, even though the consensus within the scientific community in that regard has been overwhelming for years; and almost 20 percent of the population believe that vaccines cause autism, even though the only paper claiming such a connection has been shown to be a fraud and dozens of follow-up studies have revealed no link whatsoever.
The idea underlying this chapter is that science is neither the new god nor something that should be cavalierly dismissed. As a society, we need a thoughtful appreciation not only of how science works but also of its power and its limits. This isn’t just an (interesting, I would submit) intellectual exercise: how we think about science has huge personal and societal consequences, affecting our decisions about everything from whether to vaccinate our children to whether to vote for a politician who wants to enact policies to curb climate change. We cannot all become experts, especially in the many highly technical fields of modern science, but it is crucial for our own well-being that we understand the elements of how science works (and occasionally fails to), that we become informed skeptics about the claims that are made on behalf of science, and that we also do our part to nudge society away from an increasingly dangerous epistemic relativism. Jenny McCarthy, the celebrity who has been pushing for people to stop vaccinating their children because of the threat of autism, famously said, “My science is Evan [her son]. He’s at home. That’s my science.” As much as this kind of declaration elicits empathy for a distressed mother, the fact is that, no, science is not and cannot be done through the personal experience of a single individual with no technical training, just as nobody other than a highly trained brain surgeon, working in a properly equipped facility, should attempt brain surgery.
The first thing to understand about science is that scientific reasoning is a refined form of the two basic types of reasoning that we all use; in this (limited) sense, science truly is just common sense writ large. The two types of reasoning in question are deduction and induction. Let’s take a look. Deductive reasoning takes this form:
Premise 1: All philosophers like arguing.
Premise 2: Massimo is a philosopher.
Conclusion: Therefore, Massimo likes arguing.
Deductive reasoning is “truth-preserving,” as philosophers like to say. That is, if the structure of the argument is valid (as is the one above, which can be generalized as: if P, then Q; P; therefore Q, where P and Q stand for any meaningful statement), and if the two premises are true, then the conclusion must be true. So there are basically two ways in which a deductive argument can go wrong: when its structure is flawed (the argument is invalid) or when one or more of its premises are not true (the argument is unsound).
Here is an example of invalid deduction:
Premise 1: If it snows, the pavement becomes wet.
Premise 2: The pavement is wet.
Conclusion: Therefore, it snowed.
Can you catch the problem? It helps to see the formal structure of the argument: if P, then Q; Q; therefore P. Notice the similarity to the first example, which may make it temporarily difficult to see what is wrong with this second one. But think about it for a minute: clearly there may be many other reasons why the pavement is wet—for instance, because somebody used a powerful water jet to clean it up. (If you live in New York City you’ll know instantly what I mean.) Indeed, this kind of mistake is so common that it has its own name: it’s called the fallacy of affirming the consequent (the consequent is Q). So when you look at a deduction, the first thing you want to check is whether the argument being presented is valid—that is, whether it is structurally correct.
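If you like to see this sort of thing made mechanical, here is a minimal sketch in Python (the helper names are my own, purely illustrative) that checks an argument form for validity by brute force: it simply looks for an assignment of truth values that makes every premise true and the conclusion false, which is exactly what a counterexample to validity is.

```python
from itertools import product

def implies(p, q):
    # The material conditional: "if P then Q" is false only when P is true and Q is false.
    return (not p) or q

def is_valid(premises, conclusion):
    # An argument form is valid when no assignment of truth values
    # makes all the premises true while the conclusion is false.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(premise(p, q) for premise in premises))

# Modus ponens: if P, then Q; P; therefore Q.
print(is_valid([implies, lambda p, q: p], lambda p, q: q))   # True  (valid)

# Affirming the consequent: if P, then Q; Q; therefore P.
print(is_valid([implies, lambda p, q: q], lambda p, q: p))   # False (invalid)
```

The counterexample that sinks the second form is precisely the wet-pavement case: Q is true (the pavement is wet) while P is false (it did not snow).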
What about the first example, which is in fact valid? Is the conclusion true? The answer is yes in this particular case: as it turns out, I do like arguing (in a friendly manner, my favorite motto being “Truth springs from argument amongst friends,” by David Hume). But the conclusion wouldn’t necessarily hold for another philosopher. Why? Because while the second premise is true (I am indeed a philosopher), the first isn’t: not all philosophers like arguing. I know that from personal experience, and I’m sure that statistics could be brought to bear on the issue. So the argument is valid (structurally correct) but unsound, because at least one of its premises is incorrect.
Why is it interesting to know any of this as far as our own lives are concerned? Because deductive reasoning is at the foundations of logic and mathematics, both of which are much more rigorous than science. So, if we can already find issues with deduction (and hence with logic and math!), things are bound to get even more intriguing when we move to the second fundamental type of reasoning that constitutes the bread and butter of science: induction. Philosopher Francis Bacon (1561–1626) was one of the early modern theorists of the scientific method, and he thought that its very foundation was provided by inductive reasoning. There are several types of induction, but for the purposes of our discussion let’s simply think of it as that general kind of reasoning that allows us to infer things that we don’t know from the things that we do know. For instance, I can reasonably infer that the sun will rise tomorrow, even in the absence of any technical knowledge in astronomy, because we have countless observations of past instances in which the sun has risen and there is no reason to believe that whatever mechanism has made that possible in the past will not hold also tomorrow. In other words, induction works because we extrapolate past experience to future happenings, and we are justified in doing so as long as the mechanisms (or laws of nature) that have worked in the past continue to do so in the future. This process of inductive inference based on empirical evidence is largely what it means to do science.
Predictably, however, there is a problem with induction, and it is a big one. It was first pointed out by David Hume in the eighteenth century and best exemplified by a thought experiment proposed in the early twentieth century by Bertrand Russell (1872–1970). Russell asked us to imagine an inductivist turkey (the original story actually featured a chicken) who is brought to a new farm one day and begins to take notes (that is, collects empirical data) on what happens to him. After a few days, he realizes that he is being fed every morning around 7:00. Being a rigorous inductivist, however, the turkey is wary of drawing inferences about the future on the basis of only a few data points, so he keeps accumulating observations and refrains from making predictions. Eventually, after 364 days of data gathering, the inductivist turkey finally feels confident of his knowledge base and hazards a prediction: he will be fed tomorrow morning at 7:00. Alas, that morning happens to be Thanksgiving, and the turkey is instead “prepared” to be served for dinner.
The sad story of the inductivist turkey illustrates one of the major problems with inductive reasoning: it is not truth-preserving (unlike deduction, when properly carried out). Hume’s critique, however, went even further. He pointed out that the only reason we think that induction is a good way to go about making inferences concerning the world is that it has worked in the past. This observation may seem innocuous enough until we realize, upon a moment’s reflection, that Hume was saying that our endorsement of induction is in itself a form of induction: we argue that induction works because it has worked in the past, thereby applying inductive reasoning to justify induction. This, Hume observed, is an instance of circular reasoning, one of the most elementary logical fallacies. Oops! If this doesn’t immediately bother you, think about it for a while and you’ll find that it does. Let me put it clearly: Hume’s critique amounts to saying that there is no rigorously logical foundation for the entire enterprise of science! Add to that the failure during the early part of the twentieth century (by a number of people, including Bertrand Russell himself) to find a logically tight foundation for math, and all of a sudden we are looking at some pretty solid reasons to doubt the truth of both math and science (and, incidentally, logic itself).
Naturally, philosophers simply won’t stand for something like this, and plenty have tried to come to the rescue of science. (Scientists themselves aren’t usually bothered too much by this sort of issue, though arguably they should be.) A valiant attempt was mounted by Karl Popper (1902–1994), who thought he figured out a way to enlist deduction to solve the problem of induction. His attempt ultimately failed, but it has much to teach us about the nature of both science and philosophy, so let us take a closer look.
Popper thought that what scientists really should be doing was trying not to prove but to disprove their theories, a process he called “falsification.” The idea was that—partly because of the problem of induction—one can never prove a proposition, no matter how many new facts agree with it, since new evidence may come up later that disproves it. But, Popper thought, once a theory has been falsified, that’s the end of the story—it will never again emerge to fight another day. In other words, science makes progress because scientists learn what doesn’t work and discard it, thereby getting closer and closer to the truth.
There is quite a bit that is appealing about this idea, not the least of which is that it is based on the application of deductive logic. To see how, consider the following deductive argument:
Premise 1: If Newtonian mechanics is true, then light should bend around massive objects by a certain amount.
Premise 2: Light bends around massive objects by a larger amount than predicted by Newton.
Conclusion: Newtonian mechanics is wrong.
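Written out in standard logical notation (the notation is an addition here; the content is exactly what the argument above states), this is the schema known as modus tollens, the valid mirror image of modus ponens:

\[
P \rightarrow Q, \quad \neg Q \;\;\therefore\;\; \neg P
\]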
This argument is both valid and sound. It is valid because its form (if P, then Q; not Q; therefore not P) is correct, and it is sound because, as it turns out, astronomers at the beginning of the twentieth century did discover that light is bent by massive objects (like the sun) to a degree that is not compatible with Newton’s predictions. Consequently, they permanently discarded Newtonian mechanics in favor of Einstein’s general relativity (which predicted the correct amount of bending). Problem solved, then—science works by the process of falsification, and Popper incidentally also solved the problem of induction.
Well, not quite. The history of science provides us with plenty of examples of scientists simply not behaving the way Popper said they should, and for good reasons. For instance, in 1821 the astronomer Alexis Bouvard had calculated a series of tables predicting the position of what was then thought to be the outermost planet in the solar system, Uranus. The problem, as Bouvard soon recognized, was that there was a significant discrepancy between the predictions and the actual positions of the planet in the sky. According to a strict interpretation of falsificationism, Bouvard and his colleagues at that point should have rejected Newton’s theory, as it was manifestly and systematically incompatible with a large set of data. But they didn’t. Instead, Bouvard immediately intuited the obvious answer: there must be another planet influencing Uranus’s orbit, thus accounting for the anomaly. A few years later, on September 23, 1846, Neptune was discovered within 1 degree of the position calculated by the astronomer Urbain Le Verrier. Newtonian theory was safe (for the moment), and the solar system had acquired a new member!
This episode illustrates that the actual practice of science is very different from what Popper at first proposed, and in particular that scientists do not throw out a hypothesis for which there is a lot of confirmatory evidence, even in the face of some disconfirming evidence, until they absolutely have to, and probably not until they have a better alternative handy. (Newtonian physics survived until the advent of relativity.) A better way to think about how science works was proposed by Thomas Kuhn (1922–1996), partly in response to Popper’s ideas. (This, incidentally, is how philosophy makes progress: people analyze other people’s ideas and show the logical flaws in them, until the original idea is either modified and improved or completely abandoned.)
Kuhn suggested that science works in two modes, one that accounts for most of everyday scientific activities, and another that explains when new theories are proposed and old ones abandoned. For Kuhn, everyday science is about “puzzle solving”—scientists applying a particular conceptual framework, which Kuhn called a “paradigm,” to the solution of specific questions. So, for instance, an astronomer applies the Newtonian paradigm to the calculation of the orbits of the known planets (pre-Neptune), a biologist uses the Darwinian paradigm to investigate the mating habits of a particular species of butterflies, and a geologist deploys the continental drift paradigm to explain the geographical distribution of a given group of fossils.
Most of the time this puzzle solving goes on quite well and amounts to what Kuhn called “normal science.” However, from time to time some of these puzzles are not solvable within the reigning paradigm. This does not usually bother scientists, who simply move to the next puzzle in order to continue their careers. But occasionally the growing number of unsolved puzzles begins to cause a stir in the scientific community. People start tinkering with the paradigm itself, and if this doesn’t work, the science in question enters a period of crisis, from which it emerges only when a new paradigm has been identified and accepted by the relevant scientific community. This is precisely what happened at the beginning of the twentieth century in physics, when a mounting number of problems with Newtonian mechanics and classical physics gave rise to Einstein’s relativity and quantum mechanics.
Even Kuhn, however, isn’t the end of the story: successive philosophers of science have found problems with his account of science and come up with interesting alternatives in turn. But this isn’t a book about the philosophy of science; it is about appreciating the intricacies of both science and philosophy so that we can be better informed and more sophisticated thinkers about life in general. So we will move on to one last debate concerning the nature of science before trying to figure out if we can trust science at all, given all these problems! (The answer will turn out to be yes, but in a somewhat cautious and provisional manner.)
The debate in question is between so-called realists and antirealists about scientific theories. Let’s first clear up one possible source of confusion: antirealists are not people who claim that science is a fantasy, or that scientific theories are arbitrary and have no connection with reality. Indeed, the word realism here is probably a bad choice, but sometimes labels stick and it’s more straightforward to use the terms adopted by the community of scholars who study these things. So, a realist about scientific theory is someone who says that theories in science describe reality as it actually is, or as close to it as human reasoning and observational powers can get us. Most scientists, needless to say, are realists, and a good number of philosophers of science are too. When physicists who are realists talk about electrons, for instance, they are not conjuring a hypothetical construct to help make sense of the data, but rather are referring to physical objects out there with the characteristics of electrons—even though we cannot actually observe them. (The latter bit is important, since much of the discussion hinges on the status of the “unobservables” in science—that is, the theoretical entities that are necessary for a theory to work but cannot be directly observed.)
Now, one might think, how can anyone seriously doubt that electrons exist? Isn’t it because of the existence of electrons that when we turn a switch connected to an electrical circuit the lights in our apartment turn on? Well, that’s the realist explanation of what is happening. But the antirealist will quickly point out that plenty of times in the past scientists have posited the existence of unobservables that were apparently necessary to explain a given phenomenon, only to discover later on that such unobservables did not in fact exist. A classic case is the aether, a substance that was supposed by nineteenth-century physicists to permeate space and make it possible for electromagnetic radiation (like light) to propagate. It was Einstein’s theory of special relativity, proposed in 1905, that did away with the necessity for aether, and the concept has been relegated to the dustbin of scientific history ever since. The antirealists will relish pointing out that modern physics also features a number of similarly unobservable entities, from quantum mechanical “foam” to dark energy, and that the current crop of physicists seems just as confident about the latter two as their nineteenth-century counterparts were about aether.
The most compelling argument on the side of the antirealists is the so-called underdetermination of theory by the data. It is easy to show that any particular set of empirical observations—the basic point of reference for any scientific theory—is compatible with literally an infinite number of different theories, a good number of which will not be trivial variations on a single theory. Want an example? The rage in theoretical physics for the past three decades (and counting) has been something called “superstring theory.” It is supposed to conceptually unify the two dominant subtheories in physics, general relativity and quantum mechanics (which make different predictions when applied to some of the same phenomena, thus suggesting that there is something wrong or incomplete about them). Superstring theory at the moment cannot be tested experimentally because it does not make any new verifiable prediction that is not also made by preexisting theories like relativity and quantum mechanics. That’s bad enough, since the hallmark of a scientific theory is supposed to be its empirical verifiability. But the really bad news is that superstring theory is not actually a theory—it is a family of mathematically related theories, estimated to number about 10^500 members. That’s an astronomically high number of theories (to write it out, you’d have to write a 1 followed by a whopping five hundred 0s!), and there simply is no way that we will ever have sufficient experimental data to discriminate among such a huge number of theories, which means that any (or none) of them could be true and we will simply not be able to tell. The data, no matter how much of it there is, will massively underdetermine—that is, won’t be able to pick out—the correct theory.
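A toy analogy from curve fitting, which is entirely my own illustration and not part of the physics being discussed, makes the point concrete: any finite set of data points is fitted perfectly by infinitely many different curves, and those curves can disagree wildly about observations not yet made.

```python
import numpy as np

# Three "observations": the empirical data that every candidate theory must fit.
x_obs = np.array([0.0, 1.0, 2.0])
y_obs = np.array([1.0, 2.0, 5.0])

# "Theory A": the unique quadratic passing through the three points.
theory_a = np.poly1d(np.polyfit(x_obs, y_obs, 2))

# "Theory B": Theory A plus a multiple of x*(x - 1)*(x - 2), a polynomial that is
# zero at every observed point, so the two theories agree on all the data we have.
vanishing = np.poly1d([1.0, -3.0, 2.0, 0.0])
theory_b = theory_a + 0.7 * vanishing

print(np.round(theory_a(x_obs), 6))    # [1. 2. 5.]  fits the data
print(np.round(theory_b(x_obs), 6))    # [1. 2. 5.]  fits the same data equally well
print(theory_a(3.0), theory_b(3.0))    # ...but they disagree about the next observation
```

Since the multiplier 0.7 could be replaced by any number at all, there really are infinitely many “theories” that fit these data perfectly, and the data alone cannot pick one out. The analogy to full-blown physical theories is loose, of course, but it captures what “underdetermination” means.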
Things begin to look grim for the realists now, don’t they? Nonetheless, they do have a pretty convincing countermove, which is amusingly referred to as the “no miracles” argument. It goes like this. Although it may be true that the available data never single out one theory, and only that theory, as the absolute best, would it not be very odd indeed, little short of a miracle in fact, if scientific theories had nothing to do with the way the world really is? What are the odds—argues the realist—that a complex theory like Einstein’s general relativity, or quantum mechanics, would be able to predict an astounding number of facts about the world, and to an astounding degree of precision, if it was not in some meaningful sense really describing the way the world is?
And yet, the antirealist would argue in turn, this is precisely what has happened even in the recent past (an observation colorfully known as the “pessimistic meta-induction”). Newtonian mechanics was considered the dominant theory in physics for centuries, and it is still used today to calculate the trajectories of projectiles, such as space probes. And it works. But we also know that it is profoundly wrong. Although it is sometimes said that Newtonian mechanics can be derived as a special case via mathematical approximation from the general theory of relativity, it is also true that the underlying concepts of space-time proposed by Newton and Einstein are qualitatively different; at best, only one can be correct, certainly not both. Even so, the “no miracles” argument does have some force, and the debate between realists and antirealists, once we pass the simple level sketched earlier, quickly becomes very technical; it remains unresolved among specialists in the field to this day.
There is, however, a third way to look at science, a way that acknowledges both of the main points that I have tried to explore throughout this chapter. On the one hand, it is undeniable that science works; scientific discoveries do make a practical difference in our lives, and we do understand the universe better because of science. The theory of evolution, quantum mechanics, and the idea of continental drift are not “just” theories—they are sophisticated ways to comprehend how the world works. Cars, computers, airplanes, and space probes, not to mention the Human Genome Project or the various medical treatments we use to improve our well-being, don’t work because of magic, and they are not a matter of opinion. On the other hand, it is also true that most scientific theories have at some point or another been proven to be wrong, which means that we don’t have any reason to believe that the current ones will not also join the dustbin of history. Indeed, even the most spectacular and well-regarded theories in contemporary science—general relativity and the quantum mechanical standard model of particle physics—are already known to be wrong, or at least incomplete, in some fundamental respects (the two cannot both be right as they stand), which is why physicists keep looking for a broader, more unifying theory of the deepest foundations of reality.
How, then, do we resolve this seeming contradiction between science being in some sense undeniably wrong and in another sense equally undeniably right? Through an idea called “perspectivism.” To understand how this works, consider a simple example first: color perception. In some sense, colors are the result of objective facts about the world: a certain electromagnetic radiation, characterized by a given wavelength, hits a particular material at a given angle; the light so reflected is captured by our retina and excites certain pigments inside our eyes, which in turn activate certain neural pathways, with the result—after surprisingly sophisticated further processing by the brain—that we see, say, “red.” In another sense, however, the way I perceive color is irreducibly subjective: it boils down to a first-person experience that simply cannot be shared by anyone else in the exact same way. Besides, you and I may be looking at precisely the same object and yet judge it to be of a different color, for a variety of reasons: because we are looking at it from different angles, or at different times and therefore in different light conditions, or maybe because I’m partially color-blind (I am) and you are not. In other words, even though there are objective facts about the world that affect our perception of color, there is also significant room for different perspectives about that aspect of the world. The physical facts pertinent to electromagnetic radiation and material properties delimit what can and cannot be perceived by a certain biological system of pigments and neurons, but the complexity and variability of the latter leave room for a number of subjective interpretations of what is going on.
Scientific perspectivism applies the same idea to the process of science: scientific knowledge is both objective and subjective, because it results from a particular perspective (the human one) interacting with how the world actually is. The result is that our scientific theories will always be tentative and to some extent wrong (as the antirealist maintains), but will also capture to a smaller or greater extent some important aspect of how the world actually is (as the realist maintains). Science provides us with a perspective on the world, not with a God’s-eye view of things. It gives us an irreducibly human, and therefore to some extent subjective—yet certainly not arbitrary—view of the universe.
Now, why should any of this be of concern to the intelligent person interested in improving her or his well-being through the use of reason? Because a better understanding of how science actually works puts us in the position of the sophisticated skeptic, who is neither a person who rejects science as a matter of anti-intellectual attitude nor a person who accepts the pronouncements of scientists at face value, as if they were modern oracles whose opinions should never be questioned. Too often public debates about the sort of science that affects us all (climate change, vaccines and autism, and so on) are framed in terms of alleged conspiracies on the part of the scientific community on one side and of expert opinion beyond the reach of most people on the other side. Scientists are just like any other technical practitioners and in very fundamental ways are no different from car mechanics or brain surgeons. If your problem is that your car isn’t running properly, you go to a mechanic. If there is something wrong with your brain, you go to the neurosurgeon. If you want to find out about evolution, climate change, or the safety of vaccines, your best bet is to ask the relevant community of scientists.
Just as with car mechanics and brain surgeons, however, you will not necessarily find unanimity of opinion in this community, and sometimes you may want to seek a second or even a third opinion. Some of the practitioners will not be entirely honest (though this is pretty rare across the three categories I am considering), and you may need to inquire into their motives. Scientists are not objective, godlike entities, dispensing certain knowledge. They have a human perspective on things, including the field in which they are experts. But other things being equal, your best bet—particularly when the stakes are high—is to go with the expert consensus, and if a consensus is lacking, you’re better off going with the opinion of the majority of experts. Keeping in mind, of course, that they might, just might, be entirely wrong.
There is one area of scientific inquiry that is both very new and of particular interest to our quest for how to live an intelligent life: cognitive neuroscience. You’ve seen the colorful brain scans, and you’ve probably heard the claims that studying the brain makes sense of just about everything human. “My brain made me do it” is likely to surface as a defense (admittedly a very strange one) in court cases, and you may find yourself serving on a jury that has to make sense of the bold new science of human behaviors and motivations. In Part III, we will look at what this approach has to say about our will to change our lives, the origin of our decisions, our propensity to fall in love, and even the way we handle friendships. As usual, we’ll add a bit of philosophical reflection to the mix to get beyond the empirical facts as we assess what they mean for our lives and how to improve them.