Error of opinion may be tolerated where reason is left free to combat it.
—THOMAS JEFFERSON
AS WE HAVE SEEN WHEN TALKING ABOUT MORALITY, human beings have always desperately tried to differentiate themselves from the animal world. A probably apocryphal but nonetheless illustrative story has it that a Victorian lady, upon hearing for the first time of Darwin’s theory that we are related to chimps, commented that, even if it were true, one would hope the news didn’t get around, because it would be embarrassing. But the news did get around, and science has made it increasingly difficult to find clear-cut differences between us and other animals. We are not the only animals to use tools, for instance, and not the only ones to engage in cultural practices. Still, two things stand out as uniquely human as far as we can see: language (not just communication) and (deliberative) reason. Aristotle, in Book Z of his Metaphysics, defines man (today we would say humans) as the rational animal, a definition that acknowledges both a continuity with the rest of the biological world (we are animals, after all) and a sharp qualitative difference that separates us from it. As we shall see in this chapter, however, it may be more accurate to think of human beings as the rationalizing animals, with language—ironically—providing a key tool for confusing our own and other people’s thinking. If we wish to pursue the fulfilled life, then, we need to come to terms with how easily we fool ourselves into believing things we ought to know better than to believe.
It is remarkably easy for our brain to be manipulated into believing that we are making a rational decision when in fact we are doing anything but. I experienced this firsthand a few years ago when I participated in a live demonstration at the taping of the National Public Radio show Radiolab. We were first asked to think about the last two digits of our social security number, but not to tell anyone. Then we were presented with an item that we might have purchased in a toy or electronics store and asked how much we thought was reasonable to pay for it. I did not see the connection between the two exercises until the hosts of the show lined everyone up in order of what they were willing to pay for the item and thus showed us that there was a perfect correlation with our social security numbers: the higher the last two digits, the more we thought it was “reasonable” to pay for the proffered item. This is an example of what psychologists call “priming”: once you start thinking about something, even though it is logically unrelated to the task at hand, you take a certain attitude toward the task that is best explained by the priming effect, not by any objective characteristic of the task. This is also why, for instance, interviewers can be induced to rate a job candidate (and people on a date their companion) as “cold” or “warm” depending on whether they are holding an iced or a hot drink while conducting the interview! We should beware not only of job interviewers but also of advertisers, courtroom lawyers, and a host of other people and situations in which our brains can be manipulated without our even noticing.
A related phenomenon is known as “framing,” and it has been demonstrated in a variety of circumstances and experimental settings. For instance, research carried out by Benedetto De Martino, Dharshan Kumaran, Ben Seymour, and Raymond J. Dolan and published in the prestigious journal Science shows how easy it is to nudge people toward either risk-averse or risk-prone financial behavior, simply depending on how one poses the exact same problem to them. De Martino and his colleagues asked their subjects to think about what they would do with a sum of money given to them, only part of which they would be able to keep. People behaved very differently if the problem was framed in terms of “keeping” the money versus “losing” the money: apparently, to our brain it makes a difference whether we are told that we get to keep $60 of a $100 gift (thereby losing $40) or that we will lose $40 (thereby keeping $60), even though the two situations are obviously logically identical. We are not talking high math here, or complex probability theory, and yet intelligent persons make radically different decisions about equivalent problems simply depending on how the problems are presented to them. This is why the next time you hear about the results of an opinion poll, the first thing you may want to ask yourself is: how were the questions framed? The results of the poll might have been very different had the researchers framed the questions in another fashion. (We return to framing in the context of politics in Chapter 13.)
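For readers who like to see the equivalence spelled out, here is a minimal sketch in Python (the dollar figures come from the example above; everything else is purely illustrative and not part of De Martino’s study):

```python
# Two verbal frames for one and the same outcome: a $100 gift of which
# the subject ends up keeping $60.
gift = 100

keep_frame = 60          # "you get to keep $60"
lose_frame = gift - 40   # "you will lose $40"

# The two frames are logically identical...
assert keep_frame == lose_frame == 60

# ...yet subjects behave very differently depending on which one they hear.
print(f"Outcome under either frame: ${keep_frame}")
```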
But if we really wish to get a good feeling for just how weird our brain can be in terms of beliefs and how to rationalize them, we need to look into the vast literature on brain injuries, particularly the many studies of delusions. Delusion is not just a common derogatory word; it has a technical meaning as well. The fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (in the text revision published in 2000) defines it this way:
A false belief based on incorrect inference about external reality that is firmly sustained despite what almost everyone else believes and despite what constitutes incontrovertible and obvious proof or evidence to the contrary. The belief is not one ordinarily accepted by other members of the person’s culture or subculture (e.g., it is not an article of religious faith). When a false belief involves a value judgment, it is regarded as a delusion only when the judgment is so extreme as to defy credibility.
Besides noting the odd exception that the authors of the DSM-IV made for religious belief (does something cease being a delusion just because a large number of people share in it?), one can also argue that this definition is far too broad. For instance, a good number of Americans do not believe that the earth is billions of years old, “despite what constitutes incontrovertible and obvious proof or evidence to the contrary.” This isn’t just a result of religious fundamentalism, as is often simplistically assumed, but stems also from the fact that the “incontrovertible and obvious” proof appears as such only to people technically trained in biology, geology, or physics. Still, there are some peculiarly instructive cases that would qualify as true delusions by pretty much anyone’s standards—except, of course, those of the patients affected.
Take, for instance, the following heart-wrenching description of a patient affected by Cotard’s syndrome, which manifests itself in the delusion that one is dead:
She repeatedly stated that she was dead and was adamant that she had died two weeks prior to the assessment (i.e., around the time of her admission). She was extremely distressed and tearful as she related these beliefs, and was very anxious to learn whether or not the hospital she was in, was “heaven.” When asked how she thought she had died, she replied “I don’t know how. Now I know that I had a flu and came here on 19th November. Maybe I died of the flu.” Interestingly, she also reported that she felt “a bit strange towards my boyfriend. I cannot kiss him, it feels strange—although I know that he loves me.”
Cotard’s syndrome can manifest itself in a variety of stunning ways: sufferers may be convinced, for instance, that their flesh is rotting, or that they have lost their internal organs. Occasionally, the disease causes the delusion of being immortal. Cotard’s syndrome is caused by a disconnect between the area of the brain that recognizes faces and the amygdala, the site of emotional processing. In other words, sufferers see their own face in the mirror, but they do not respond to it emotionally as their own. The rationalizing brain then kicks into high gear and makes up an explanation to account for the disconcerting sense data: if the person in the mirror looks like me and yet doesn’t feel like me, it must be because I’m dead.
Something very similar goes on in a related delusion known as Capgras syndrome. Those suffering from this delusion think that a close person, like a spouse, a friend, or a parent, has been replaced by a look-alike impostor. Again, this outrageous conclusion is reached by a brain that has been confronted with an otherwise inexplicable set of facts about the world as it perceives them—in this case, the disconnect between other people’s appearance and what one feels for them. Those suffering from this delusion simply must make up a story to account for what has happened, because that narrative provides them with the illusion that they are in control, regardless of how flimsy the alleged “explanation” actually is.
Perhaps the best evidence that the brain often works more as a rationalizing than as a rational agent comes from classic experiments with “split-brain” patients. All human beings have two hemispheres in their brain, which are anatomically and functionally distinct. This is not unusual in vertebrates, and animals from fish to mammals use the left hemisphere to control everyday behavior while the right hemisphere is more apt to deal with unusual circumstances. (Did you need any further proof of evolution?) In our species, a special structure called the corpus callosum connects the two hemispheres in normal individuals, ensuring continuous communication and coordination between the two halves of our minding organ. In some people, however, the corpus callosum is severed, either because of an accident or, as in most cases, because of emergency surgery to alleviate the symptoms of extreme epileptic seizures. The behavior of these split-brain patients can be followed, both to help them cope with their unusual situation and to help researchers learn more about how each hemisphere works in isolation. The results provide a stunning insight into what must be the normal—and usually unseen—operation of the human brain.
One of the classic experiments of this type was conducted by Michael Gazzaniga’s group at Dartmouth College. They took advantage of the fact that one can show images to only one hemisphere at a time, since the right hemisphere receives input from the left half of the visual field, while the left hemisphere receives input from the right half. Moreover, the left hemisphere can communicate verbally, while the right cannot; the right, however, controls the left arm (just as the left hemisphere controls the right arm), so it can respond to questions nonetheless. The experiment began with the right hemisphere being shown the image of a house during a snowstorm while the left hemisphere was presented with a bird’s foot. Each hemisphere was then asked to pick, among several images, the one most logically associated with the image it had just been shown. Both hemispheres responded correctly, the right one picking a shovel (to go with the snowstorm), the left one choosing a chicken (to go with the bird’s foot). Things got interesting once the experimenters asked the left hemisphere—verbally—why the patient (whose actions had been prompted by both hemispheres, working independently) had picked a shovel and a chicken. Remember that the left hemisphere had no access to the decision-making process of its right counterpart, because of the severed corpus callosum. That didn’t stop it from providing an apparently rational, and yet entirely made up, explanation: the shovel had been picked in order to clean the chicken shed!
The irony here is that modern neurobiological research seems to indicate that the right hemisphere is more veridical, sticking to a direct interpretation of the information that reaches it, while the left hemisphere—the one in charge of language—is prone to weaving complex narratives somewhat detached from reality in order to make sense of contradictory information and relieve what psychologists call “cognitive dissonance.” Here is a simple example of how this tendency to look for explanations that are more complex than necessary gets us into trouble: it turns out that even rats can beat us at a simple cognitive task! The task consists of figuring out that the lights appearing on a screen are both random (there is no underlying organizing rule or principle) and statistically more likely to appear on the top portion of the screen. Rats and other animals figure out (obviously, not consciously) that the lights have a tendency to appear at the top of the screen and quickly develop an optimized strategy of pushing the appropriate button to obtain a reward. Human beings significantly underperform the rats because they insist on concocting complex theories about the real rule generating the pattern; since there is no such rule, their chance of reward is much lower. It is hard to imagine a more elegant demonstration that overthinking things is not a good idea.
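The arithmetic behind the rats’ victory is worth spelling out. In the standard account of results like this one, known as probability matching, humans distribute their guesses so as to mirror the observed frequencies rather than always betting on the more likely location. A minimal simulation sketch follows (the 80/20 split and the two strategies are illustrative assumptions, not figures from the study):

```python
import random

random.seed(0)
P_TOP = 0.8        # assumed frequency of the light appearing on top
TRIALS = 100_000

maximizing = matching = 0
for _ in range(TRIALS):
    light_on_top = random.random() < P_TOP
    # "Rat" strategy: always press the button for the more frequent location.
    if light_on_top:
        maximizing += 1
    # "Human" strategy (probability matching): guess "top" about as often as
    # it occurs, as if tracking a hidden rule that does not exist.
    if (random.random() < P_TOP) == light_on_top:
        matching += 1

print(f"always-bet-on-top reward rate:    {maximizing / TRIALS:.2f}")  # ~0.80
print(f"probability-matching reward rate: {matching / TRIALS:.2f}")    # ~0.68
```

With an 80/20 split, always betting on the more frequent location is rewarded about 80 percent of the time, while matching the frequencies is rewarded only about 68 percent of the time (0.8 × 0.8 + 0.2 × 0.2). Hunting for a rule that isn’t there carries a measurable cost.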
Sometimes our overthinking results in what cognitive scientists call “confabulation.” Spontaneous (that is, nonpathological) confabulation occurs when we are pushed to retrieve details of an event that we do not remember. Ever attentive to the stress that results from cognitive dissonance, the brain immediately “retrieves” memories that are not actually there, literally making up stories as we go in order to reduce the distance between what we are told we should know and what we actually remember. This is how at least some so-called repressed memories come out in psychotherapy. The result has occasionally been the unjust prosecution of parents accused of child sexual abuse by their own children, even though the “memory” was simply a confabulation that the patient’s brain concocted to reduce the cognitive dissonance induced by the therapist’s probing.
As usual, we learn a great deal about how the brain works when it fails to do so: confabulation can turn pathological, often as the result of damage in the orbitofrontal-anterior limbic system, although it can also be induced by environmental stresses, such as alcoholism. In these cases, not only are individuals absolutely convinced of the stories they produce, but they strenuously defend their version of reality even in the face of obvious evidence to the contrary. For instance, a patient admitted to a hospital in Berne (Switzerland) insisted that he was in Bordeaux (France). When confronted with the landscape outside the window, he admitted that it didn’t look like Bordeaux, but immediately added, “I am not crazy, I know that I am in Bordeaux!”
The phenomenon of confabulation is neurologically complex, and the spontaneous form that may affect healthy subjects is significantly different from the pathological form. Nonetheless, what seems to be happening is that something goes wrong with the brain’s ability to keep past memories separate from the monitoring of ongoing reality, with the result that information about what is going on gets mixed up in the brain and its “rationalizer” kicks in at full speed to make some sense—any sense—of the jumbled information.
All this evidence from neuroscience may not have shaken your faith in human beings’ reasoning abilities that much. After all, we have mostly been talking about pathologies—even though scientists like V. S. Ramachandran maintain that Cotard’s and Capgras syndromes, as well as the behavior of split-brain and confabulating patients, are just exaggerated and easier-to-study versions of what normally goes on inside our minds. The fact, however, is that Aristotle’s view of humans as the rational animal is also shaken by more subtle evidence from cognitive science that pertains to how people make everyday judgments about personal and social issues—as, for instance, when we are caught between the necessity for rational decision-making and our tendency to “go hedonic.”
A study conducted by Michel Cabanac and Marie-Claude Bonniot-Cabanac presented subjects with a series of questions about social issues ranging from abortion to homosexuality and from climate change to the situation in the Middle East. They were then asked to provide their judgment of a number of possible solutions to each issue. The researchers asked the subjects to rate the possible solutions in two different ways (at different times): first, they were asked to indicate which solutions for each issue felt pleasurable, neutral, or displeasurable; and second, they were asked which solution they would pick if they had the power to enact it.
The results were revealing. To begin with, when subjects were under time pressure they tended to go for the hedonic solution—that is, the answer that made them feel better. However, if given more time and explicitly told to weigh the options neutrally and rationally, subjects tended to change their preferences and were much less likely to pick whatever made them feel good. In their discussion of these results, Cabanac and Bonniot-Cabanac also cite previous studies establishing that when people are in a good mood they are more likely to assess situations rationally, as opposed to hedonically. As it turns out, the human brain also feels pleasure when it engages in logical and rational thinking—for instance, when we are able to figure out the answer to a puzzle—but it predictably feels significantly more pleasure (as measured by the release of brain endorphins) when the choice is hedonic. One can see the results of this type of research as either encouraging or dispiriting. The pessimist could reasonably point to the conclusion that people prefer judgments that make them feel better over more considered ones and that as simple a thing as mood affects our judgments on moral and social issues. The optimist, however, could just as reasonably respond that when people are made aware of these tendencies and given a setting suitable for reflection, they are capable of more sophisticated judgments.
All of this is important to our quest for a meaningful life because such a life depends not only on our ability to exercise the best judgment possible on a variety of issues but also on the overall functionality of our society as a thriving democracy. Which brings me to the troubling research conducted by political scientists Christopher Achen and Larry Bartels at Princeton University. Achen and Bartels were interested in empirically investigating how voters make up or change their minds on important issues and to what extent they use party affiliation as a “proxy” (a kind of shortcut) for their decisions.
The first case study concerned people’s perception of how the federal budget fared during President Bill Clinton’s first term. Let’s begin with the actual data: the deficit was significantly reduced by the end of the term, by about 90 percent. What did people think had happened? Overall, only about one-third of the American public appreciated the fact that the deficit had been reduced, while 40 percent (and 50 percent of Republicans) thought the deficit had actually increased. This finding is particularly discouraging because the size of the federal deficit isn’t a matter of political opinion (unlike, say, the best way to deal with the deficit), and the pertinent information is both easy to comprehend and widely reported by the media. To be fair, Achen and Bartels found the situation to be slightly better when they focused on the 20 percent of voters who were most informed: among these voters, they found, reality had a “pulling effect.” Still, they concluded that “reality seems to have had virtually no effect on the responses of people in the bottom two-thirds of the information scale.”
The picture got even more complicated when the researchers turned to a second case study that used data from the same period: people were asked whether the economy had improved over the previous year (1996). The fact was that the economy had indeed improved, and a full three-quarters of people knew that. (A comforting percentage, of course, but we are still left with one out of four Americans being willfully ignorant of a basic and very practical fact affecting their lives.) Partisanship, however, clearly played a large role: twice as many Republicans as Democrats stated that the economy had gotten worse. Curiously, however, in this case the effect of partisanship was particularly visible among the better-informed voters. Why the contrast with the case concerning the deficit? The authors of the study speculate that the big difference was that President Clinton was explicitly mentioned when people were asked about the budget deficit, while the question about the economy had been phrased in neutral terms—a perfect example of the framing effect we saw earlier.
When Achen and Bartels then looked at whether and how people change their minds on important issues, they found yet more reason to be skeptical of human reason. They had access to longitudinal (across years) data concerning people’s positions on abortion, which is unusual as a political issue in that opinions tend to be stable over long periods of time. They found that in the period from 1982 to 1997—when the Republican Party gradually made abortion a major issue in its platform, under the rising influence of the Christian right—people were leaving the party while retaining their previous position on abortion, women more frequently than men (which makes sense, since the issue is obviously more pressing for women). Interestingly, men who were well informed on the issue acted as the women did, while less-informed men were more likely to stay with the Republican Party through the period during which its political platform changed. This latter group was also more likely to rationalize what amounted to a change in their position on abortion in order to harmonize it with their decision to stick with the Republican Party. A similar phenomenon happened more recently when President George W. Bush pushed for privatizing Social Security: as it turns out, people who supported privatization did not see Bush more favorably after he announced his policy; however, people who already supported Bush did become more likely to support privatization of Social Security than they had been before.
Given all the disconcerting things we have learned about how the brain largely works to rationalize our views of the world—in both its pathological and standard modes—was Aristotle wrong in thinking that the distinctive characteristic of humanity is rationality? Not exactly. The Greek philosopher was also one of the early students of human psychology, and he was very much aware of the constant failings of the human mind. What he meant was that human beings, as far as we know, are the only animals capable of rational thinking, despite the fact that it doesn’t come easily. That is why it is crucial to be aware of the many pitfalls of human reasoning, which we have begun to look at in this chapter and which, as we will see, affect even the quintessential application of reason to our understanding of the world: science itself (Chapter 8). Only through this awareness and constant vigilance can we hope to improve our ability to make reasonable decisions, both large and small, about everything that affects our lives. Think of it as training your brain the same way you train your muscles at the gym: both efforts achieve better results the more we take advantage of the best knowledge available about how they work.