There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.
—William Thomson, Lord Kelvin, who determined the correct value of absolute zero, in an address in 1900 to the British Association for the Advancement of Science
“Arational” (or nonrational or quasi-rational) practices in science occur alongside—even in opposition to—the linear, rational textbook version of scientific progress. Sometimes scientists abandon generally accepted theories and devote themselves to other theories that are not well supported by available evidence. Their adoption of the new theory is initially as much a matter of faith as of logic or data.
Scientific theories are sometimes traceable to particular worldviews that differ across academic fields, between ideologies, or from one culture to another. The different theories sometimes literally conflict with one another.
The arational aspects of science may have contributed to the rejection of the concept of objective truth by some people who describe themselves as deconstructionists or postmodernists. What defense is possible against such nihilism? What can be said to people who assert that “reality” is mere socially constructed fiction?
Paradigm Shifts
Five years after Lord Kelvin’s pronouncement about the boring future of physics, Einstein published his paper on special relativity. Relativity theory literally replaced Isaac Newton’s mechanics—the laws describing motion and force that had stood unchallenged for two centuries. Einstein’s theory was not a mere new development in physics. It heralded a new physics.
Fifty years after Einstein’s paper was published, the philosopher and sociologist of science Thomas Kuhn shook the scientific community by announcing in his book The Structure of Scientific Revolutions that science doesn’t always consist of an earnest slog through theory followed by collection of data followed by adjustment of theory. Rather, revolutions are the customary way that science makes its greatest advances.
The old theory gets creaky, anomalies slowly pile up, and someone has a bright idea that, sooner or later, ends up overthrowing the old theory—or at least rendering it much less relevant and interesting. The new theory typically doesn’t account for all the phenomena that the old theory does, and its new contentions are at first supported by data that are underwhelming at best. Often the new theory isn’t concerned with explaining established facts at all, but only with predicting new ones.
Kuhn’s analysis was upsetting to scientists in part because it introduced an element of seeming irrationality into the concept of scientific progress. Scientists jump ship not so much because the old theory is inadequate or because new data have come in. Rather, a paradigm shift occurs because a new idea has come along that is more satisfying in some respects than the old idea, and the scientific program it suggests is more exciting. Scientists seek “low-hanging fruit”—startling findings suggested by the new theory that couldn’t be explained by the old theory—that are ripe for the picking.
Often the new theoretical approaches lead nowhere in particular, even though large numbers of scientists are pursuing them. But some new paradigms do break through and replace older views, seemingly overnight.
The field of psychology offers a particularly clear example of the rapid rise of a new paradigm and the near-simultaneous abandonment of an old one.
Psychology from early in the twentieth century until roughly the late 1960s was dominated by reinforcement learning theories. Ivan Pavlov showed that once an animal had learned that a particular arbitrary stimulus signaled a reinforcement of some kind, that stimulus would elicit the same reaction as the reinforcing agent itself. A bell that preceded the introduction of meat would come to produce the same salivary reaction as the meat itself. B. F. Skinner showed that if a given behavior was reinforced by some desirable stimulus, the behavior would be performed whenever the organism wanted the reinforcement. Rats learn to press a lever if that results in food being delivered. Psychologists conducted thousands of experiments testing hypotheses derived from one or another principle suggested by Pavlovian and Skinnerian theories.
During the heyday of learning theory, psychologists reached the conclusion that much of human behavior is the result of modeling. I see Jane do something for which she gets a “positive reinforcement.” So I learn to do the same thing to get that reinforcement. Or I see her do something that gets her punished, so I learn to avoid that behavior. “Vicarious reinforcement theory” was both obvious and hard to test in a rigorous way, except by hothouse experiments showing that children sometimes imitate other people in the short term. Watch an adult hit a doll, and a child may imitate the aggression. But that doesn’t show that chronically aggressive adults got that way by observing other people get rewarded for aggressive behavior.
Among scientifically minded psychologists it was de rigueur to have a reinforcement-learning theory interpretation of every psychological phenomenon, whether it involved the behavior of animals or humans. Scientists who offered different interpretations of the evidence were ignored or worse.
An Achilles’ heel of reinforcement theory stems from the fact that it’s incrementalist in nature. A light comes on and a shock follows a short time later. The animal slowly learns that the light predicts a shock. Or the animal presses a lever that produces food and the animal gradually learns that lever pressing is its meal ticket.
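To make the incrementalist assumption concrete, here is a minimal sketch in Python of a trial-by-trial update rule in the spirit of the Rescorla-Wagner model. The learning rate, asymptote, and trial count are illustrative assumptions of mine, not values from any experiment discussed here.

```python
# A toy model of incremental associative learning, in the spirit of the
# Rescorla-Wagner update rule. All numbers are illustrative assumptions.

def conditioning_trial(strength: float, rate: float = 0.2,
                       asymptote: float = 1.0) -> float:
    """One pairing of cue and outcome: associative strength moves a fixed
    fraction of the remaining distance toward its maximum."""
    return strength + rate * (asymptote - strength)

strength = 0.0
for trial in range(1, 11):
    strength = conditioning_trial(strength)
    print(f"trial {trial:2d}: associative strength = {strength:.2f}")
```

On this kind of account, strength creeps up over many pairings (0.20, 0.36, 0.49, and so on), which is exactly the slow accumulation the theory predicts.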
But phenomena began cropping up in which the animal learned almost instantaneously the connection between two stimuli. For example, an experimenter might periodically deliver an electric shock to a rat shortly after a buzzer sounded. The rat would begin to show fear (indicated by, for example, crouching or defecating) whenever the buzzer sounded. But if a light preceded the buzzer and there was no shock, the rat would show substantially less fear—on the very first trial when the light was introduced. On the next trial there might be virtually no fear expressed at all. This suggested to many people that some types of learning could best be understood as the result of some fairly sophisticated causal thinking on the part of the rat.1
Around the same time the temporal puzzles were discovered, Martin Seligman delivered an extremely serious blow to one of the most central tenets of traditional learning theory, namely that you could pair any arbitrary stimulus with any other arbitrary stimulus and an animal would learn that association.2 Seligman showed that the arbitrariness dictum was hopelessly wrong. Recall from Chapter 8 that associations the animal was not “prepared” to learn would not be learned. Dogs can readily learn to go to the right if a light appears on the right rather than the left, but not if it appears on top rather than on the bottom. Pigeons will starve to death while a learning theorist tries to teach them that not pecking at a light will produce a food pellet.
The failures of learning theory to account for the extremely rapid learning of some connections and the impossibility of learning other connections were not initially seen as the body blows that they were. The danger to learning theory came not from these anomalies but from seemingly unrelated work on cognitive processes, including memory, the influence of schemas on visual perception and interpretation of events, and causal reasoning.
Many psychologists began to see that the really exciting phenomena to be examined had to do with thinking rather than learning. Almost overnight hundreds of investigators began studying the operations of the mind, and study of learning processes came to a virtual halt.
Learning theory was not so much disproved as ignored. In retrospect, it can be seen that the program of research had become what the philosopher of science Imre Lakatos termed a “degenerating research program”—one that is no longer producing interesting findings. Just more and more about less and less.
The new opportunities were in the field of cognition (and later in the field of cognitive neuroscience). Within very few years virtually no one was studying learning, and few cognitive scientists deigned to pay attention to learning-theory interpretations of their findings.
As in science, great changes in technology, industry, and commerce are often due to revolution rather than evolution. The steam engine is invented, resulting in the replacement of wool by cotton as the main fabric used for clothing in many parts of the world. Trains are invented, resulting in the deregionalizing of manufacturing. Mass production of goods in factories arrives, ending time-immemorial manufacturing techniques. Within a brief period of time the invention of the Internet changed … everything.
One difference between paradigmatic changes in science and those in technology and business practices is that in science the old paradigm, as often as not, hangs around. Cognitive science didn’t replace all of learning theory’s findings, or even the explanations behind them. Rather, it established a body of work that couldn’t have been produced within the learning theory framework.
Science and Culture
Bertrand Russell once observed that scientists studying the problem-solving behavior of animals saw in their experimental subjects the national characteristics of the scientists themselves. The pragmatic Americans and the theoretically inclined Germans had very different understandings of what was happening.
Animals studied by Americans rush about frantically, with an incredible display of hustle and pep, and at last achieve the desired result by chance. Animals observed by Germans sit still and think, and at last evolve the solution out of their inner consciousness.
Ouch! Any psychologist knows there was more than a grain of truth in Russell’s lampoon. Indeed, the groundwork for the cognitive revolution was laid by Western Europeans, especially Germans, who worked primarily on perception and thinking rather than learning. American soil was pretty barren for cognitive theory, and work on thought would undoubtedly have come along much later if not for prodding by Europeans. It’s no accident that social psychology, which was founded by Europeans, was never “behaviorized” in the first place.
In addition to having to acknowledge the arational aspects of paradigm shifts, scientists have had to come to grips with the fact that cultural beliefs can profoundly influence scientific theories.
The Greeks believed in the stability of the universe, and scientists from Aristotle to Einstein were in thrall to this commitment. The Chinese, in contrast, were confident that the world was constantly changing. Chinese attention to context led to their correct understanding of acoustics, magnetism, and gravity.
Continental social scientists shake their heads in exasperation with what they call the rigid “methodological individualism” of American social scientists and their inability to see the relevance or even the existence of larger social structures and of the zeitgeist. The major advances in thinking about societies and organizations have primarily continental rather than Anglo-Saxon roots.
Western primatologists could see no social interaction among chimpanzees more complicated than the behavior a pair of chimps exhibited toward each other, until Japanese primatologists revealed the highly complex nature of chimpanzee politics.
Even the preferred forms of reasoning differ across cultures. Logic is foundational for Western thought, dialecticism for East Asian thought. The two types of thinking can produce literally contradictory results.
The rapid and incompletely justified nature of shifts in scientific theories, together with recognition of the role of culture in affecting scientific views, contradicted the picture of science as an enterprise of pure rationality operating in the light of unshakable facts. These deviations may have contributed to a thoroughly antiscientific approach to reality that began to gain steam in the late twentieth century.
Reality as a Text
After we came out of the church, we [Samuel Johnson and his biographer James Boswell] stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the nonexistence of matter, and that everything in the universe is merely ideal. I [Boswell] observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it—“I refute it thus.”
—James Boswell, The Life of Samuel Johnson
Not everyone today seems to be as readily convinced of the reality of reality as Johnson.
Recall the umpire from Chapter 1 who denied any reality to the concepts of strikes and balls other than his labeling of them as such. Many people who call themselves postmodernists or deconstructionists would endorse that umpire’s view.
In Jacques Derrida’s phrase: “Il n’y a pas de hors-texte.” (There is nothing outside of the text.) People with such orientations sometimes deny that there is any “there” there at all. “Reality” is merely a construction, and nothing exists other than our interpretation of it. The fact that interpretations of some aspect of the world can be widely or even universally shared is irrelevant. Such agreement only indicates that there are shared “social constructions.” One of my favorite phrases from this movement is that there are no facts—only “regimes of truth.”
This extreme subjectivist view drifted over to America from France in the 1970s. The general idea behind deconstructionism is that texts can be dismantled to show the ideological leanings, values, and arbitrary perspectives that underlie all inferences about the world, including assertions posing as facts about nature.
An anthropologist of my acquaintance was asked by a student at my university how anthropologists deal with the problem of reliability concerning characterizations of the beliefs and behavior of people in other cultures. In other words, what to do about the sometimes varying interpretations of different anthropologists? She replied, “The problem doesn’t arise because what we anthropologists do is interpret what we see. Different people are expected to have different interpretations because of their different assumptions and viewpoints.”
This answer scandalized my student—and me. If you’re doing science, agreement is everything. If observers can’t agree about whether a given phenomenon exists, then scientific interpretation can’t even get launched. What you have is a mess.
But my mistake was in thinking that cultural anthropologists necessarily regard themselves as scientists. Early on in my work on cultural psychology I tried to make contact with cultural anthropologists. I wanted to learn from them, and I expected they would be interested in my empirical work on cultural differences in thought and behavior. I was shocked to discover that most of the people defining themselves as cultural anthropologists had no desire to talk to me and no use for my data. They were not about to “privilege” (their term) my evidence over their interpretations.
To my astonishment, postmodernist nihilism made strong headway in academic fields ranging from literary studies to history to sociology. How strong? An acquaintance told me about asking a student whether she thought the laws of physics were mere arbitrary assertions about nature. “Yes,” she assured her questioner. “Well, when you’re up in an airplane you figure any old laws of physics could keep it in the air?” “Absolutely,” she replied. A survey of students at a major university by the philosopher and political scientist James Flynn found that most believed that modern science is merely one point of view.3 Those poor students came by their opinion honestly. It was encouraged by the sort of thing they had been told in many of their humanities and social science courses. One might think that professors in those fields were merely amusing themselves or perhaps trying to stimulate thought on the part of their students. But consider the tale of the physicist and the postmodernists.
In 1996, Alan Sokal, a physics professor at New York University, sent a manuscript to Social Text, a journal with a proudly postmodern stance and an editorial roster including some quite famous academics. Sokal’s article, titled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” tested just how much nonsense such a journal was willing to swallow. The article, saturated with postmodern jargon, announced that “an external world whose properties are independent of any individual human being” was “dogma imposed by the long post-Enlightenment hegemony over the Western intellectual outlook.” Because scientific research is “inherently theory laden and self-referential,” it “cannot assert a privileged epistemological status with respect to counterhegemonic narratives emanating from dissident or marginalized communities.” Quantum gravity was pronounced a mere social construction.
Sokal’s article was accepted without peer review. On the day of publication of his article in Social Text, Sokal revealed in the journal Lingua Franca that the article was a pseudoscientific hoax. The editors of Social Text responded that the article’s “status as parody does not alter, substantially, our interest in the piece, itself, as a symptomatic document.”
George Orwell said that some things are so stupid that only intellectuals believe them. But to be fair, no one actually believes that reality is merely a text, though many people undoubtedly think they believe it. Or did. Postmodernism is gradually fading from the North American academic scene. It dissipated long ago in France, where, as my French anthropologist friend Dan Sperber said, “it never even had the prestige of being French!”
Should you find yourself in a conversation with a postmodernist, and I can’t wholeheartedly recommend you do, try the following. Ask whether the balance on the person’s credit card statement is a mere social construction. Or ask whether he thinks power differentials in society are merely a matter of interpretation or whether they have some basis in reality.
I have to admit, incidentally, that postmodernist concerns have produced some research related to power, ethnicity, and gender that seems valid and important. The anthropologist Ann Stoler, for example, has done very interesting research on the shaky and sometimes hilarious criteria used by the Dutch in colonial Indonesia to determine who was and was not “white.” Nothing so straightforward as the American rule that anyone with a “single drop” of African blood was a Negro, which of course was a social construction without any remote basis in physical reality. Stoler’s work is of substantial interest to historians, to anthropologists, and to psychologists interested in how people categorize the world and how people’s motivations influence their understanding of the world.
What I find particularly ironic about postmodernists is that they asserted, without evidence, that interpretations of reality are always just that, while remaining entirely unaware of the findings by psychologists that support a contention only slightly less radical on its face than the postmodernists’ own. One of the greatest accomplishments of psychologists is the demonstration of the philosophers’ dictum that everything, from the perception of motion to the understanding of the workings of our own minds, is an inference. Nothing in the world is known as directly or infallibly as intuition tells us it is.
But the fact that everything is an inference doesn’t mean that any inference is as defensible as another. Should you find yourself at the zoo with a postmodernist, don’t let him get away with telling you that your belief that the large animal with the trunk and tusks is an elephant is a mere inference—because it could be a mouse with a glandular condition.
Summing Up
Science is based not only on evidence and well-justified theories; faith and hunches may lead scientists to set aside established scientific hypotheses and agreed-upon facts. Several years ago, the literary agent John Brockman asked scores of scientists and public figures to tell him about something they believed that they couldn’t prove—and he published their responses in a book.4 In many instances, an individual’s most important work was guided by hypotheses that could never be proved. As laypeople we have no choice but to do the same.
The paradigms that underlie a given body of scientific work, as well as those that form the basis for technologies, industries, and commercial enterprises, are subject to change without notice. These changes are often initially “underdetermined” by the evidence. Sometimes the new paradigm exists in uneasy partnership with the old, and sometimes it utterly replaces the old.
Different cultural practices and beliefs can produce different scientific theories, paradigms, and even forms of reasoning. The same is true for different business practices.
Quasi-rational practices by scientists, and cultural influences on belief systems and reasoning patterns, may have encouraged postmodernists and deconstructionists to press the view that there are no facts, only socially agreed-upon interpretations of reality. They clearly don’t live their lives as if they believed this, but they nevertheless expended a colossal amount of university teaching and “research” effort promulgating these nihilistic views. Did these teachings contribute to the rejection of scientific findings in favor of personal prejudices so common today?