5. Thinking straight is hard to do

Pitfalls and challenges for the new Enlightenment

One of the downsides of being a philosopher, I’ve discovered, is that you get a fair number of unsolicited communications from people in cults, who are hoping to discuss their “philosophy” with you. These days my inbox is cluttered with messages from Falun Dafa (or Falun Gong)—which is a fairly harmless organization, except that they made the mistake of spooking the Chinese government (just by being organized), and now suffer genuine repression. This has probably had the effect of making them more popular than they otherwise would have been. Their core ideology is an amalgamation of traditional Buddhist meditative practices with an elaborate alien mind-control theory. According to founder Li Hongzhi, the development of science and technology is an alien plot, designed to foment war and destruction among humans so that aliens can seize our bodies, replace people with aliens, and so on.1 The details are, quite literally, crazy. But as with all cults, the question is how so many thousands (or in this case, millions) of non-crazy people could come to believe it.

One of the astonishing things about Falun Dafa is just how standard it is, in every aspect. It’s an absolutely generic cult, instantly recognizable as what it is to anyone from anywhere in the world. It’s not crazy with Chinese characteristics, it’s just plain old crazy, like the common cold of mental infections. Reading through their literature, you get the sense that someone has been following a universal instruction manual on how to build a cult. How do you attract followers? Option number one: tales of miraculous healing! Sure enough, the other day I received a Falun Dafa email that included this testimonial:

I have witnessed and heard of many stories about the wonders of reading “Falun Dafa is good.” Ms. Liu, a villager in Tangshan City, is sixty-nine years old. Last year she was diagnosed with colon cancer. The doctor told her, “You are elderly and your blood pressure is too low. Quit seeking treatment. Go home and eat whatever you like.” Her third youngest sister practices Falun Gong. She visited her and taught her to practice Dafa by reciting “Falun Dafa is good.” Ten days later, Ms. Liu completely recovered and the late stage colon cancer disappeared. What is the most miraculous thing was that she looked like a young girl and became so beautiful. That’s Falun Dafa’s Divine Power. Master Li is absolutely superior to Jesus Christ! Falun Dafa saves people. Falun Dafa solves everything.

If this sort of thing sounds familiar, that’s because it is. It turns out that there is something like a universal instruction manual for getting people to acquire false beliefs. As Pascal Boyer has shown, some fanciful stories are easier to get people to believe than others.2 Human reasoning is subject to a number of very specific biases, and to the extent that a belief system is able to exploit these biases, it may be able to successfully reproduce itself. This is why the belief systems of cults and the “arguments” used to support them have a certain depressing familiarity. They all occupy the same ecological niche. Like a virus that is able to avoid detection by the immune system, some irrational beliefs are able to withstand whatever scrutiny most people are able to bring to bear upon them, because they exploit certain characteristic flaws in our reasoning. (Of course, the mechanism that generates the belief system is evolutionary, not intentional. Mental illness alone generates thousands of crazy preachers and belief systems, and of those, only a small fraction are able to attract followers. The belief systems that become widespread are ones that happen to exploit cognitive biases in a way that shelters them from rational scrutiny.)

Consider the story of Ms. Liu from Tangshan City. There’s a lot going on in this email, but the most prominent feature of the story is that it exploits what psychologists refer to as confirmation bias. When considering a hypothesis, people have a tendency to look only for positive evidence that is consistent with the hypothesis, while failing to look for evidence that would be inconsistent with it. They fail to “think the negative.” To get from Ms. Liu’s recovery to the divine power of Falun Dafa, two pieces of information are missing. First, what would have happened if she hadn’t recited “Falun Dafa is good”? For example, what are the chances that the cancer diagnosis was simply mistaken? Or if she had cancer, what are the chances of spontaneous remission, or even just not dying for a long time? Second, how many people suffering from terminal cancer who recite “Falun Dafa is good” fail to recover? Many people don’t ask these questions, simply because it doesn’t occur to them to do so, or because they don’t realize that, without this missing information, the story alone provides absolutely no support for the hypothesis. Failure to appreciate confirmation bias is actually one of the hallmarks of “superstitious” thinking. (For comparison, consider this: When she was younger, my wife’s relatives used to make her drink a “growth potion” that they had obtained from a herbalist, full of bark, leaves, and nasty chicken parts. They all agreed, however, that at age twenty she was a bit old to be drinking it, since the potion seemed to work best on teenagers.3)
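To see just how little the testimonial establishes, it helps to lay out the comparison that the story invites us to skip. What follows is a minimal sketch in Python, using purely invented numbers (the counts, and the recovery rates they produce, are hypothetical and for illustration only): even when reciting the phrase makes no difference whatsoever, the practice still generates plenty of glowing single cases.

```python
# A sketch with invented numbers: the testimonial reports one case from one cell
# of a two-by-two table, but judging whether recitation helps requires all four.

def recovery_rate(recovered, total):
    """Fraction of patients who recovered."""
    return recovered / total

# Hypothetical counts, for illustration only.
recited = {"recovered": 30, "total": 1000}    # recited "Falun Dafa is good"
did_not = {"recovered": 30, "total": 1000}    # did not recite it

rate_with = recovery_rate(recited["recovered"], recited["total"])
rate_without = recovery_rate(did_not["recovered"], did_not["total"])

print(f"Recovery rate with recitation:    {rate_with:.1%}")
print(f"Recovery rate without recitation: {rate_without:.1%}")

# Under these numbers the phrase makes no difference at all, yet the 30 people
# in the "recited and recovered" cell can each supply a story like Ms. Liu's.
```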

There is a temptation when reading about these cognitive biases to think of them as the sort of thing that explains why everyone else is stupid. Diagnosing them in oneself is a lot more difficult. Daniel Kahneman describes this in terms of the “futility of teaching psychology.” Students read about all sorts of experiments describing all sorts of pernicious errors and biases, then simply “exempt themselves” from the obvious conclusion that they themselves are just as biased.4 In part this is because of the mistaken belief that bias can be detected through introspection (and so just knowing about a bias would be enough to provide immunity from its effects). In part it is a result of another well-known cognitive bias, which is that most people vastly overestimate their own abilities in every area of life.5 Freedom from bias is just another one of those areas. Psychologists refer to it as the bias blind spot.6 It’s easy to lament the superstitiousness of Chinese peasants. With ourselves, on the other hand, it’s a different story.

Take me, for example. I am a university professor, teaching in the Department of Philosophy. In order to get there, I both studied and taught logic, argumentation analysis, and probability theory. I’m also by temperament a bit of a rationalist. Kids started calling me “Mr. Spock” in grade three. From there I went on to chess club, computer programming, and, finally, logic and philosophy. I’ve also read extensively in the psychological literature on cognitive biases—enough, for example, to know what confirmation bias is and how it typically manifests itself. And yet one day, not that many years ago, I found myself falling into the most elementary trap, in a test used to detect confirmation bias.

The test was designed by Peter Wason, who had a peculiar genius for inventing problems that would cause people’s reasoning abilities to go haywire. It goes something like this:

Experimenter: I am going to present you with a series of three numbers, and you have to try to guess the rule that generated them. Before deciding, you may ask me three questions. Your questions must have the following form. You give me a set of three numbers, and I will tell you whether or not they are an instance of the rule.

Me: Okay, fun! Sounds a bit like Mastermind. I used to be really good at that.

Experimenter: Your first set of numbers is: 2, 4, 6.

Me: Wow, that’s easy. This must be the warm-up phase.

Okay, how about: 6, 8, 10?

Experimenter: Yes, that is an instance of the rule.

Me: Um, okay. How about 22, 24, 26?

Experimenter: Yes, that is an instance of the rule.

Me: Okay, this is stupid. I don’t even need the last guess.

Let’s try 100, 102, 104?

Experimenter: Yes, that is an instance of the rule.

Me: All right, the rule is “Three even numbers in order.”

Experimenter: No, that is not the rule.

Me: Come again?

Experimenter: That is not the rule.

Me: Then what is the rule?

Experimenter: The rule is “Any three numbers in ascending order.”

Me: You’re telling me that 3, 5, 7 satisfies the rule?

Experimenter: Yes.

Me: 2, 67, 428 satisfies the rule?

Experimenter: Yes.

Me: Oh my God, I’m an idiot.

This is how I learned to take confirmation bias seriously.7 When asked to formulate an hypothesis and evaluate it, I quickly jumped on the first pattern that I saw, and then proceeded to look exclusively for results that confirmed my theory, without trying to falsify it. Once I got it into my head that they were even numbers in order, it never occurred to me to suggest a sequence of odd numbers, with the aim of eliciting a “no” response. I completely failed to “think the negative.”8 In fact, I can clearly remember pausing just before asking the third question, not being sure what to ask, thinking that I had exhausted all useful questions. I was even a bit puzzled by the fact that I had been given three questions to ask when obviously I only needed one or two.

This is a big enough deal that it’s worth dwelling on for a moment. Figure 5.1 provides a little graphical representation of the set of sequences that satisfy the actual rule (“any three numbers in ascending order”), along with the subset that satisfies the more restrictive rule that I guessed (“three even numbers in sequence”). These are both contained within an even more gigantic set—not shown—that contains all possible sequences of three numbers, in any order whatsoever. When presented this way, we can begin to see how extraordinary it is that I confined all my guesses to the contents of the small circle. Even if I was convinced that the rule was “three even numbers in order,” a seemingly obvious way of checking this would be to guess a sequence of odd numbers, with the intention of generating a “no” response. (After all, a “no” can be just as informative as a “yes.”) Had I tried this—guessing wrong on purpose—I would of course have been surprised to discover that I hadn’t in fact guessed wrong, and that there was this enormous, unexplored territory just outside the realm of hypotheses that I had been considering. And of course, had I continued my attempts to guess wrong, I would have discovered an even larger territory just outside that … Why not guess a set of numbers out of sequence, in order to see whether that is essential to the rule?

Figure 5.1. The 2, 4, 6 problem
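The structure of that failure can be made concrete with a small sketch (the rules and the test sequences below are simply the ones from the dialogue, written out in Python; the code is illustrative, not anything administered in the original experiment). Every sequence I actually proposed satisfies both the true rule and my hypothesis, so the answers could never tell the two apart; a single attempt to elicit a “no” separates them immediately.

```python
# The 2, 4, 6 problem: confirming tests drawn from inside my hypothesis cannot
# distinguish it from the experimenter's broader rule.

def true_rule(seq):
    """The experimenter's rule: any three numbers in ascending order."""
    return seq[0] < seq[1] < seq[2]

def my_hypothesis(seq):
    """My guess: three even numbers in ascending order."""
    return all(n % 2 == 0 for n in seq) and seq[0] < seq[1] < seq[2]

confirming_tests = [(6, 8, 10), (22, 24, 26), (100, 102, 104)]   # what I asked
disconfirming_test = (3, 5, 7)                                    # what I should have asked

for seq in confirming_tests:
    # Both rules say "yes", so a "yes" answer carries no information about which is right.
    print(seq, "-> true rule:", true_rule(seq), "| my hypothesis:", my_hypothesis(seq))

# One deliberate attempt to guess wrong immediately pulls the two rules apart.
print(disconfirming_test, "-> true rule:", true_rule(disconfirming_test),
      "| my hypothesis:", my_hypothesis(disconfirming_test))
```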

What is particularly striking about this test of confirmation bias is that it reveals more than just wishful thinking. It’s one thing to know that people look only for supporting evidence for their pet theories or for things that they want to believe. But in this case, I had nothing whatsoever invested in the initial hypothesis. There is simply a huge blind spot in our reasoning. The “yes” in response to “6, 8, 10” does, in fact, support the hypothesis that the rule is “three even numbers in sequence.” The problem is that it supports a very large number of other hypotheses too, and that before one can establish that “three even numbers in sequence” is correct, one must rule out these other hypotheses. And yet I failed—failed egregiously, outrageously—to do it. This is exactly the same problem that leads people to think that the miraculous recovery of Ms. Liu generates support for belief in the divine power of Falun Dafa.

It is easy to see why our intuitive judgments might be out of whack in this regard. One of the things that rapid cognition does extremely well is pattern recognition. What sets the grandmaster chess player or the experienced doctor or the seasoned firefighter apart is the ability to instantly detect patterns that no one else can see.9 The human brain is, unfortunately, quite hyperactive in this regard, and tends also to see patterns where in fact none exist. This is called apophenia when it reaches pathological proportions, but the underlying tendency is ubiquitous.10 Most people, for instance, when shown a sequence of random coin tosses or dice rolls, will insist that the sequence is not random—since it contains too many “streaks.” Apple experienced a deluge of complaints about the shuffle feature on its iPod from customers who insisted that the random sequence was biased against certain artists.11 People in general lack an intuitive feel for what randomness looks like, and so are constantly amazed by coincidences that are, as a matter of fact, not particularly improbable. Some psychologists have suggested that this is because, in a natural environment, genuine randomness is somewhat uncommon, and so there is no particular benefit to being able to detect it.12 A more plausible account is simply that the evolutionary cost of false positives (thinking that there is a pattern when in fact there is none) is much lower than the evolutionary cost of false negatives (thinking that there is no pattern when in fact there is one), and so natural selection has favored a pattern-recognition system that has very high sensitivity and low specificity.13 In an environment where there is some chance of being eaten by a predator, it pays to treat every rustle in the grass as a potential threat. The costs of getting “spooked” and running away from nothing are much lower than the costs of ignoring some telltale signs and getting torn limb from limb. Thus we tend to get all excited when we see a pattern. When we do stop to think “What if I’m wrong?” it is not because the thought occurs to us naturally. It is a cognitive override imposed on us by reason.
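The point about streaks is easy to check for oneself. Here is a minimal simulation in Python (the streak length and the number of trials are arbitrary choices, made only for illustration): in sequences of one hundred fair coin flips, runs of six or more identical outcomes, which strike most people as obviously non-random, turn up in most sequences.

```python
# A sketch: genuinely random coin flips routinely contain the "streaks" that our
# pattern-hungry intuition takes as proof of non-randomness.
import random

def longest_streak(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

random.seed(0)  # fixed seed so the sketch is reproducible
trials = 10_000
streaks = [longest_streak([random.choice("HT") for _ in range(100)])
           for _ in range(trials)]

share = sum(s >= 6 for s in streaks) / trials
print(f"Sequences of 100 fair flips containing a run of 6 or more: {share:.0%}")
```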

My biggest mistake in the 2, 4, 6 challenge was in thinking that because I was smart, I was somehow immune to cognitive bias.14 This is, as it turns out, not true. (In fact, most smart people who read the Falun Dafa email and reject the conclusion do so not because they can spot the error in reasoning, but because of a different bias, belief bias, whereby we reject as illogical any argument that leads to a conclusion that we take to be false. In other words, most people reading the email start from the belief that there is no such thing as the divine power of Falun Dafa, and so they reject the argument. Yet many of these same people happen to believe in the divine power of Jesus Christ, and so accept arguments that have exactly the same structure when presented in support of the divinity of Christ. Indeed, the “argument from miracles” figured prominently in Christian theology for thousands of years and, if you can believe what you see on TV, continues to impress many people in the southern United States.)

Keith Stanovich and his collaborators have shown, through an extensive series of studies, that higher intelligence does not make people less susceptible to cognitive biases. The types of cognitive abilities that are measured by intelligence tests are typically not those that we associate with rational thought. On the contrary, these tests typically measure intuitive thinking skills (such as the ability to manipulate geometric objects, decipher anagrams, classify words, or categorize items), and they also reward those who are able to respond quickly. These skills are weakly correlated with reasoning ability, and not at all correlated with the ability to avoid cognitive biases. It is interesting to note, however, that once you tell people that their answers are wrong and that they have fallen victim to a cognitive bias, more intelligent people are better at figuring out where they went wrong and at correcting their answers. In other words, intelligence is useful when it comes to thinking rationally, but cognitive miserliness—the desire to solve problems with the least effort possible—is independent of intelligence. Intelligence alone does not trigger the override that leads us to question our intuitive judgments. And since we all have the same bugs in our lower-level systems, high-IQ individuals are just as easily tricked as low-IQ individuals.

Stanovich has introduced the term dysrationalia to describe “the inability to think and behave rationally, despite adequate intelligence.”15 Everyone knows someone who fits the profile: the retired engineer who becomes a 9/11 truther and spends weeks on end studying the physics of controlled demolition; the farmer who spends the off-season surfing the internet, developing elaborate theories about monetary policy and the perfidy of banks; the secretary who develops an obsessive interest in astrology and insists on doing sophisticated “readings” for everyone she knows; the young mother who starts reading up on pesticides and is soon buying organic everything (and yet continues to nip out for the occasional smoke). Stanovich has argued that our failure to take this sort of irrationality seriously, combined with the fetishization of IQ in our society, has had a number of untoward consequences. The most troublesome is that it has led to the promotion to positions of authority and influence of people who are actually quite irrational and, as a result, make terrible decisions both for themselves and for others.16

The best example of this is former U.S. president George W. Bush, who is widely thought to be stupid, despite considerable evidence to the contrary. During the 2004 campaign, when he and John Kerry, his Democratic opponent, were pressed to release their Yale undergraduate transcripts, people were surprised to discover that Bush had graduated with a higher grade point average than Kerry.17 And yet there can be no doubt that America’s first “MBA president” made a series of terrible decisions. In Stanovich’s analysis, most of this is due to Bush having adopted a set of flawed decision-making heuristics, all of which led him to put far too much emphasis on his “gut feelings.” (For example, he claimed that he was able to get a sense of Russian president Vladimir Putin’s “soul” simply from having “looked him in the eye.”) Even some of those on his team noted his rather alarming absence of curiosity.18 He had a particular lack of interest in exploring ideas or opinions that might be inconsistent with his own. He was also a believer in firm decision making, with no second-guessing or doubts. (He prided himself on being “the decider.” Asked toward the end of his second term whether he could think of any mistakes he had made as president, he was genuinely stumped.) And finally, there was his habit of praying for guidance (which, on the most probable interpretation, is just a way of bolstering support for one’s gut feelings). This is all a recipe for irrational decision making, because it disables the most important mechanisms that we use to counteract confirmation bias.

Like Bush, most cranks and conspiracy theorists are also not dumb, as witnessed by the fact that they are able to develop elaborate theories, along with complex explanations for why their views are not more widely accepted.19 The problem is that they suffer from an extreme case of confirmation bias. They fail to “think the negative” when it comes to their own views and, for various psychosocial reasons, resist the “check” that the disbelief of others would normally serve in correcting them. One can see this tendency on dramatic display in the movie Room 237, which is organized around five different individuals each presenting their personal analysis of the “hidden meaning” of Stanley Kubrick’s movie The Shining. It’s astonishing to listen to people from different walks of life making exactly the same cognitive error, over and over again—but equally astonishing to see how quickly one can be lured into doing the same oneself.

These examples are all fairly harmless. But consider the case of global warming skeptics, who spend untold hours trying to unravel the enormous scientific conspiracy that has tricked us all into believing that human activity is causing changes in the earth’s climate. They never stop to ask themselves the simple question, “How could we be increasing the carbon dioxide content of the atmosphere and not have it change the earth’s climate?” Or consider religion. Even in its most refined variants, any attempt to develop a “rational theology” is inevitably an exercise in confirmation bias. From Thomas Aquinas’s celebrated five proofs for the existence of God to contemporary arguments for intelligent design, absolutely none of these arguments provide any support at all for what might broadly be described as a “theistic,” much less a Christian, worldview. People are so busy trying to make the case for belief in God against the threat of atheism that it never even occurs to them that they need to exclude other hypotheses. Indeed, the Venn diagram for intelligent design arguments looks exactly like the one for the Wason problem. Jumping from “the miracle of life” to “belief in God” is exactly the same as leaping from “2, 4, 6” to “three even numbers in order.” There’s a gaping hole in the argument, which, unfortunately, most people just glide right past.

Figure 5.2. Intelligent design hypotheses

David Hume pointed this out long ago. Even for those who believe in intelligent design, there remains a wide range of possibilities. “A man who follows your hypothesis,” he argued, “is able perhaps to assert, or conjecture, that the universe, sometime, arose from something like design: but beyond that position he cannot ascertain one single circumstance; and is left afterwards to fix every point of his theology, by the utmost license of fancy and hypothesis.”20 Given the obvious imperfections in the world, Hume said, the more likely explanation would appeal to some sort of inattention or defect in the creator. Perhaps the world is just a preliminary sketch, carried out by some “infant deity” who “later abandoned it, ashamed of his lame performance.” Or it could have been made by a lesser god, “the object of derision to his superiors.” It might have been made by an aging, senile deity who died shortly thereafter, leaving the universe to run amok. The possibilities are manifold. More importantly, proponents of intelligent design have not one shred of evidence that could exclude any of these hypotheses. Yet when they demand that we “teach the controversy” over creationism in schools, this is typically not what they have in mind.

This failure to “think the negative” is somewhat exacerbated in our culture by a tendency to view “genius” as the ability to see connections that others don’t. That’s actually a better working definition of “crazy” than of “genius.” Seeing connections is easy—the human mind is wired to see connections and patterns everywhere, even where they don’t exist. The trick lies not in seeing connections, but in throwing out the ones that are no good. Albert Einstein may have seen connections that others did not see, but he was also able to say with absolute precision what the testable consequences of his theory were and, most importantly, how that theory could be proven false. It was his ability to “think the negative” that made him a great scientist, not the ability to “think different.” Indeed, the central virtue of scientific education—as opposed to merely technical education—is the relentless focus on disproof. The scientific method forces us to do something that is, in fact, highly unnatural, which is to think about how we could be wrong: to seek not just confirmation, but disconfirmation.21

There is a long-standing debate about the uniqueness of Western civilization, and in particular why it gave rise to the scientific revolution, the Enlightenment, and later the Industrial Revolution. There are many theories, but a particularly compelling suggestion is that Aristotle deserves much of the credit—not so much because of the theories he proposed, but because of the theories he rejected. At the time that Aristotle began teaching, Plato (and, through Plato’s works, Socrates) had already achieved the status of intellectual giants. Their fame and influence extended throughout the Hellenic world. And yet Aristotle began one of his most influential works—now referred to as the Nicomachean Ethics—by rejecting the most important idea in Plato’s philosophy, namely, the “theory of the Forms.” Given a choice between honoring his master and correcting what he saw as his master’s mistake, he chose to correct the mistake, on the grounds that philosophy, or the love of wisdom, generates “a sacred duty to prefer the truth to one’s friends.”22

Nowadays we tend to take this sort of behavior for granted, but at the time people found it quite scandalous. Indeed, for hundreds of years philosophy was regarded as an unreliable guide to the truth, not to mention an unsuitable basis for political order, simply because the two greatest philosophers of all time had disagreed with one another. (One of the major projects among philosophers in the Roman Empire, such as Plotinus, was to paper over these differences, in order to cleanse philosophy of this stain on its reputation.)

What Aristotle’s rejection of Plato created, however, was a culture of contestation in the Western intellectual tradition. Although Plato had paid lip service to the power of dialogue, there is not much real disagreement going on in his work (Socrates’s interlocutors get pummeled into submission far too quickly). With Aristotle, one finds a dramatic illustration of the idea that intellectual progress could occur by correcting the mistakes of others, so that rather than having to rely on oracles, wise men, or prophets to reveal the truth to us, we could work it out over time, by arguing with one another. The result was a brilliant kluge. As individuals, we have enormous difficulty seeing our own mistakes, much less the biases that lead us into them. People have a powerful inclination to think that “biased thinking on their part would be detectable by conscious introspection,”23 and yet it isn’t. Introspection is far, far less powerful than any of us would have imagined. And unfortunately, the limits of introspection cannot be ascertained through introspection. Thus the most powerful check that we have on our own tendency toward biased thinking is the willingness—perhaps even the eagerness—of other people to correct us.

This is reflected in the fact that, under the right circumstances, groups do much better than individuals on tests of biased reasoning.24 What underlies this, ultimately, is the internal connection between talking and reasoning. If one person happens to get it right, he or she will often be able to explain to the others why it is the right answer, and thereby raise the performance of all. People who get the wrong answer, on the other hand, are less often able to talk others into seeing things their way. This is because the wrong answer is typically a consequence of not having thought the problem through using the resources of explicit, linguistically structured thought, so as soon as you try to explain what you were thinking, the problem with it becomes apparent.

Many of our institutions intentionally sharpen the antagonism between individuals in order to enhance the effectiveness of this kluge. Rather than relying on people’s love of wisdom to motivate them to correct others, we associate tangible status rewards with the ability to show others to be wrong. We do so through stylized debates and competitions, such as criminal trials or “peer commentary” on scientific articles. For example, as skepticism about miracles gathered strength in the sixteenth century, the Catholic church began appointing a “devil’s advocate,” whose job during canonization hearings was to present arguments against the elevation of the proposed individual to sainthood. Since Catholic sainthood to this day requires confirmed reports of at least two miracles after the death of the person, one can see how such an institution would have value. Indeed, it even appears to have had some effect, since the abolition of the devil’s advocate in 1983 under Pope John Paul II led to a sharp uptick in the number of canonizations—typically based on accounts of “miracle cures” similar to the one enjoyed by Ms. Liu in Tangshan City.

It is worth noting that these institutional practices of criticism and doubt are not external to human reason. Like the pencil and paper that we use to do mathematical computations, discussion and argumentation with others are part of the machinery of rational thought. Just as thinking is a form of silent talking, reasoning is a form of silent arguing. Our ability to run simulations of these external processes is imperfect and fraught with error. This is why the results of the real thing—actual argumentation—are often superior to the results of its internalized or simulated variant.

It is fortunate that we have other people, because the list of cognitive biases that we have to overcome is formidable. The literature on this has become truly enormous, but here is a partial list, detailing only some of the most important biases:

1. Optimism bias. We tend to think that everything will work out for the best, that we have more control over the world than we actually have, and that we are more likable, intelligent, witty, competent, and rational than we actually are. Knowing that half of marriages end in divorce or that 60 percent of restaurants fail within three years does not deter people from getting married or opening restaurants in large numbers.25 Knowing that all projects go over budget and past deadline does not stop people from assuming that their project will come in below budget and on time. And so on.

2. Myside bias. We have—to put it bluntly—self-serving beliefs. An excuse sounds more plausible when it is used to explain our own behavior, not that of others. A distribution seems more fair when it generates a benefit for us while others have to pay the bill. Indeed, a lot of what looks like self-interested behavior is actually—in the mind of the person doing it—completely disinterested behavior but based upon a description of the situation that is completely self-serving.

3. Framing and anchoring effects. People will often make completely different decisions depending on how a choice is presented to them (or “framed”). We also tend to be influenced by factors that should have no impact on our decision (“anchors”). For example, it has been shown that the list price of a house has a significant impact on what people take that house to be worth—and that the effect is just as large with professional real estate agents as it is with members of the public at large.26 Even presenting people with what they know to be random numbers can skew their answers to numerical questions.27

4. Loss aversion. People care more about losses than (equivalent) foregone gains. For example, if you offer people a 3 percent rebate for using cash instead of a credit card, most will use their credit cards. But if you advertise a lower price along with a 3 percent surcharge for using a credit card, many of the same people will pay cash. (This is a type of framing effect, but it is of such economic significance that it deserves its own category. In particular, it has a huge impact on tax compliance, since people tend to treat tax deducted at the source as a foregone gain and tax owing at the end of the year as a loss.)

5. Belief bias. We tend to judge the quality of arguments by whether or not they generate conclusions that we already believe. This makes us incredibly unresponsive to new evidence, different ways of looking at things, and so on. Consider the following syllogism: “All living things need water. Roses need water. Therefore, roses are living things.” Seventy percent of university students consider this an instance of valid reasoning.28

6. Probabilities. There is an astonishing range of studies showing that people are terrible at dealing with probabilities. We routinely violate simple principles, such as the conjunction rule, which says that two events taken together cannot be more likely than either event taken singly. We treat unlikely events as being more probable if they haven’t happened in a while. We also vastly overestimate the significance of small probabilities.29 More arcane biases involve base rate neglect, whereby we ignore the background probability of an event when assessing new evidence (a worked example follows this list).
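Base rate neglect is the easiest of these to work through with numbers. The following sketch in Python uses standard, purely hypothetical figures (a condition affecting one person in a hundred, a test with a 95 percent detection rate and a 5 percent false alarm rate): intuition latches onto the 95 percent and ignores the base rate, but Bayes’ theorem gives a much less alarming answer.

```python
# A sketch of base rate neglect, with hypothetical numbers: how likely is the
# condition, given a positive result from a "95 percent accurate" test?

prevalence = 0.01        # base rate: 1 person in 100 has the condition
sensitivity = 0.95       # P(positive test | condition)
false_positive = 0.05    # P(positive test | no condition)

# Bayes' theorem: P(condition | positive test)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"Probability of the condition after a positive test: {p_condition_given_positive:.0%}")
# Roughly 16 percent, not 95 percent: with a rare condition, most positive
# results come from the large pool of healthy people who test positive by mistake.
```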

What is important to recognize about these biases is that they are all examples of how “thinking with our gut” can lead us astray, or even into making disastrous choices. The only way to correct these biases is through the exercise of reason. Intuition is not only impervious to correction; it is oblivious to its own flaws and limitations. Reason may not be able to overpower these tendencies—indeed, the very idea of a “bias” shows that these flawed decision-making heuristics often intrude into consciousness and corrupt our rational decision-making processes—but it is the only faculty even able to try.

As has been mentioned, we tend to worry about civilization collapsing into barbarism, not the other way around. But what exactly is barbarism? And why does it seem to be the default social arrangement? Thomas Hobbes has been criticized for claiming that in the natural state of man—the sort of thing that we get when there are no laws and no government—life is “solitary, poor, nasty, brutish and short.”30 If you look at the list, though, and compare it to “failed states” like Somalia, or to the human condition prior to the emergence of the modern state, you can see that he was wrong about only one item on the list. People are certainly poor, they are nasty to one another, there is an incredible amount of violence, and life tends to be short. The only thing Hobbes got wrong was “solitary.” People are not solitary in the state of nature; what they are, in fact, is tribal. This means that they form small communities or groups bound together by ties of family and friendship (“blood” and “honor”), governed by a powerful in-group code, reinforced through antagonistic and typically violent relations with outsiders. This pattern of social organization describes everything from modern American street gangs to Somali warlords, Albanian clans, and nomadic hunter-gatherer societies throughout human history.31

Tribalism is what you get when people act naturally, when they go with their instincts, when they do what feels right, both morally and intellectually. This is because we are adapted for living in groups like this, and all of our social instincts support and reinforce this type of social structure. The problem with the arrangement is that it dramatically limits the scope of cooperation. This is why Hobbes claimed that in the state of nature, “there is no place for industry, because the fruit thereof is uncertain: and consequently no culture of the earth; no navigation, nor use of the commodities that may be imported by sea; no commodious building; no instruments of moving and removing such things as require much force; no knowledge of the face of the earth; no account of time; no arts; no letters; no society.”32

What makes the collective action problems generated by tribalism particularly intractable is that they are not generated by self-interest, narrowly construed. People often engage in antisocial behavior out of a sense of loyalty to their fellow tribe members (or fealty to their god, or commitment to their shared values, or whatever). This is what makes street gangs so hard to combat. People are not just in it for the money. They are genuinely committed to the gang, willing to make huge personal sacrifices in order to support their brothers. They often break the law out of a sense of higher loyalty. The problem is that we have a set of pro-social instincts—to help others, at whatever cost—that are limited in scope. We don’t want to help any old person; we want to help our brethren, our friends, the people on our side. This makes it easy to organize small-scale cooperation within a small group but much more difficult to organize large-scale cooperation across groups. Indeed, in many ways it would be easier to get people to cooperate if they were merely self-interested.

One particularly clear example of this is our taste for vengeance. Human beings appear to be the only primates who exhibit what evolutionary theorists refer to as “altruistic punishment.”33 We are willing to punish people who break the rules or act noncooperatively, even when it is personally costly to do so. Chimpanzees enter into many “I’ll scratch your back, you scratch mine” cooperative relationships, but if the other individual fails to reciprocate, they simply end the relationship and refuse to cooperate with that individual again. Humans, on the other hand, will often go out of their way to retaliate, usually at some risk to themselves. Furthermore, uninvolved third parties will sometimes intervene—again, at some risk to themselves—in order to see that the defector makes good or is punished for defecting. There is good reason to think that this propensity is innate, not cultural, simply because it is part of the structure that allows us to acquire and transmit culture in the form of shared rules and institutions. Retributiveness is also quite a visceral, intuitive reaction, which people have difficulty rationalizing.

Now when it comes to organizing cooperation in small groups, retributiveness is clearly advantageous. If you’re trying to decide whether to take advantage of someone in a community of merely self-interested individuals, you have to weigh the short-term benefits of free riding against the benefits that you will forego if you get shut out of future cooperative relations. But if you also have to factor in the possibility of retaliation—that some people might track you down and make you suffer, just to get you back for what you did—then it becomes a lot harder to make the case for free riding. This is why, for instance, organized criminal groups can maintain stable systems of cooperation—such as a code of silence—despite their lack of legal authority over members. They are able to operate an entirely decentralized system of punishment.

The big weakness with this system is that it doesn’t handle accidents very well. When someone does something wrong and then suffers some violence by way of retaliation, we may accept this as a legitimate response. If, however, it is some misunderstanding or accident that leads to an act of retaliation, the person who suffers it is likely to regard it as an act of unwarranted aggression and so respond in kind. This can set off a sequence of tit-for-tat retaliation, in which each new act of “punishment” compels the other to respond in kind, creating what we refer to as a “feud.” Indeed, if one examines the more famous blood feuds in history, they almost always begin with some alcohol-induced act of violence (or even just insult) that one side took very seriously but that the other side felt should have been excused. As a result, the initial retaliation was not accepted as fair but was instead treated as a provocation that had to be answered. This second retaliation was, in turn, treated as provocation, and so was responded to in kind. Thus the two sides became locked into a pattern of reciprocal violence, of the sort that can last for generations. Indeed, what makes the pattern so difficult to break is that everyone involved feels obliged to continue the feud—that it would be a betrayal of the victims on their side to forgive and forget.

This is a fairly large bug in the human motivational system, because as the number of individuals involved in a cooperative interaction increases, the chances of an accidental defection increase as well.34 Anyone who has ever worked on a group project or team will know the dynamic. Social pressure can be very effective in small groups, but as the number of people and the anonymity of interaction increase, the chances that everything will fall apart, that the group will collapse into mutual recrimination and paralysis, increase as well. We all know what this sounds like: “If he’s not going to help, why should I?” “Let’s see how you like it when someone does that to you.” “I’ll get mine finished just as soon as she finishes hers.” These sorts of feelings are often perfectly legitimate, and yet at the same time collectively disastrous. Unfortunately, we have no intuitive feel for when our retributive sentiments are helping out and when they are making things worse.

The only way out of the state of nature involves a rational insight into the structure of the collective action problems we face. Almost every intractable collective action problem involves a conflict between parties, each of which sees itself as good (or as having acted correctly) and its opponents as evil (or as having acted wrongly). This is usually the product of a powerful blend of myside bias, in-group solidarity, and retributivism. And yet this characterization of the problem (“We’re good, they’re evil”) is itself a major source of the problem. It’s because each side sees things this way that both refuse to back down. And it’s not just that they don’t want to back down, it’s that they both feel morally obliged to not back down.

It’s easy to see how collective action problems, by their very structure, encourage this way of thinking. These problems arise when it is possible to create a slight benefit for yourself by creating a larger cost for someone else. Consider a typical “tragedy of the commons,” like overfishing. Catching a lot of fish is obviously beneficial for the individual fisherman. But it also creates a cost. By reducing the breeding stock, it reduces the overall fish population, making it difficult to catch as many fish the following year. In extreme cases, it can lead to the complete collapse of the fish stock (as happened with the cod fishery on the Grand Banks off Newfoundland). So why would anyone do this? It’s not short-sightedness. It’s not that fishermen fail to anticipate that their actions will eventually undermine their own livelihood. It’s that each individual is able to capture for himself some of the benefits associated with overfishing, whereas almost all of the costs are transferred onto others. So even though the costs are much greater than the benefits, everyone still has an incentive to keep overfishing.
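The arithmetic of the trap can be laid out in a few lines. The sketch below uses invented numbers (a fleet of one hundred boats, a private gain of ten from each extra ton caught, a future cost of thirty spread across everyone; all hypothetical) to show how each fisherman comes out ahead by overfishing even though the fleet as a whole loses.

```python
# A sketch of the overfishing incentive, with purely hypothetical numbers: the
# gain from one extra ton is private, while the damage is shared by the fleet.

fleet_size = 100
benefit_to_me = 10.0       # value I capture from catching one extra ton this year
total_future_cost = 30.0   # damage to next year's stock caused by that extra ton

my_share_of_cost = total_future_cost / fleet_size   # almost all of it falls on others

print(f"My gain from one more ton:          {benefit_to_me:+.1f}")
print(f"My share of the resulting damage:   {-my_share_of_cost:+.1f}")
print(f"Net effect on me:                   {benefit_to_me - my_share_of_cost:+.1f}")
print(f"Net effect on the fleet as a whole: {benefit_to_me - total_future_cost:+.1f}")
# Individually the extra ton pays (+9.7); collectively it destroys value (-20.0).
# That is why everyone keeps fishing even while everyone can see the stock collapsing.
```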

But when you examine the outcome, it’s easy to see how everyone could consider the whole thing to be someone else’s fault. After all, if you look at the damage that you suffer, it actually is caused by someone else. So the obvious solution seems to be for the other person to stop what he is doing. If someone were to come along and say “No, you need to stop doing it,” the natural reaction would be to say “Let him stop first,” or maybe “What, are you crazy? If I stop doing it then he gets all the benefit, and I get nothing,” or even worse, “No way—not until I pay that bastard back for what he did to me.” The problem is that both people say exactly the same thing, and so no one ever stops.

I have heard Newfoundlanders react with outrage—outrage!—at the suggestion that the collapse of the cod fishery was in any way their fault. Why? According to them, it was the Spanish and Portuguese, fishing in international waters, who were to blame. They were the ones fishing without quotas, using illegal nets, and so on. When you think about it, Newfoundlanders really had no choice but to take as much as they could. If they had cut back on their catch, it would have just left more for the international fleet. The cod stocks would have collapsed either way; all Newfoundlanders did was try to get their fair share.

That’s the story in Newfoundland. But of course, one could undoubtedly go to Spain and Portugal and hear exactly the same story in reverse. They were blameless, it was all the fault of the Canadians, and so on. In a sense, both stories are correct. In a collective action problem, everything bad that happens to you is actually someone else’s fault. None of this changes the fact, however, that the only way to solve the problem is for everyone, including you, to stop what they’re doing, and that means providing a benefit to the very same people who have, in the past, harmed you. This is so contrary to both instinct and intuition that it takes Herculean self-restraint for most people to get out of these problems. First of all, it takes a radical break with our own perspective, in order to see how our actions look to the other person. This typically involves the realization that other people are not evil, but are in fact acting on the same set of motives that we have. The entire language of “good” and “evil” needs to be overcome. Needless to say, this is the sort of insight that only reason can attain. Then, once we have achieved this rational insight into the structure of the problem, we must also impose an override on our intuitions, in order to resist the retaliatory urges that assail us at every level. Most of us are unable to do so, which is why we rely so heavily on kluges in order to achieve large-scale cooperation.

The most important institutional work-around that we have for this is the state.35 The state promotes cooperation not just by making and enforcing laws, but by exercising a monopoly on the legitimate use of force. It eliminates the disorder and chaos created by private vendettas and feuds by essentially denying individuals the right to private retaliation. Political theorists have long argued that this marks the central difference between the “civil condition” and the state of nature—or, to put it less delicately, between civilization and barbarism. Individuals in the civil condition must renounce the private use of force. This was a major idea in social contract theory, from Hobbes through Locke, and is a principle that is reflected in the basic structure of criminal law in our society.36 Civil suits involve a conflict between two parties—the plaintiff and the defendant—but criminal cases are always “the state” versus the defendant. Many people find this surprising, because they think that the victim of the crime should be the aggrieved party, not the state. They also find it surprising that, say, the families of Ronald Goldman and Nicole Brown Simpson should have been able to successfully sue O. J. Simpson for the death of their children even though Simpson had been acquitted of criminal charges in the matter. The distinction, however, makes perfect sense. As a result of the civil suit, Simpson was required to pay compensation to the families, but he was not punished. (Similarly, if you steal something, a civil court can make you give it back, but only a criminal court can punish you, above and beyond that, for having taken it.37) Compensation is a private matter between individuals, appropriately dealt with by the civil courts. Punishment is not a private matter, but is rather a public prerogative, and is therefore an issue between the individual and the state. This is a reflection of the fact that individuals in our society are taken to have renounced their right to use force against one another.

If anything deserves to be called the bedrock principle of civilization, this is it. Indeed, one of the major developmental hurdles that societies must overcome is the transition from tribal to hierarchical societies.38 Even though very advanced systems of cooperation have been organized in tribal societies, there is a clear limit on the level of complexity they can achieve. All major world civilizations have developed only after the subordination of tribal loyalties into a broader, hierarchically organized state that claims a monopoly on the use of force.39

The most common mistake that people make when thinking about human sociability is to assume that large-scale civilization is just a scaled-up version of tribal society, able to rely on the same sort of dispositions and instincts. This is not correct. There is a huge developmental hump in the transition from tribal society to the state—one that many societies simply never make it over. The configuration that results at the far end is not natural at all, but involves selective uptake, manipulation, and suppression of our natural impulses. For example, people still have extremely powerful retributivist urges, which never go away and which are never fully satisfied in the civil condition. This is why acts of vigilantism and, more broadly, fantasies about punitive violence and retribution (think of movies like Death Wish, Dirty Harry, Falling Down, and The Brave One) are ubiquitous in our society. No matter how punitive the criminal justice system, and no matter how widely the net is cast, a large segment of society will never be satisfied. Why? Simply because a civilized society is structurally incapable of satisfying the thirst for vengeance that many people find viscerally compelling.

The most striking feature of collective action problems is the way that people—indeed, whole societies—can get stuck in them. Everyone can see that there is a problem, and in many cases everyone can even agree about what the solution would be. Unfortunately, as long as everyone else is acting in a way that contributes to the problem, no one has any individual incentive to stop. There are many traps like this that societies can fall into. Official corruption, tax evasion, or even just widespread rudeness are all good examples. Once such a behavior becomes widespread, it is extremely difficult to dislodge. Furthermore, it is in many cases impossible for the society to “evolve” its way out of the problem. Evolution, whether biological or cultural, works through a series of small steps, each of which must constitute something of an improvement over the previous one. In a collective action problem, however, each small step typically makes things worse for the person who takes it. The situation can be improved only if everyone takes a big step. These are the cases in which reason must play a central role. Because it’s not possible to try out multiple options in order to see which one is best, we have no choice but to develop the best models of the problem that we can, in order to determine, in the abstract, which one we should choose. We cannot wait and see: we have to try to pick a winner.

One can see here the limitations of the Burkean defense of tradition. Although the mechanism of cultural evolution is extremely powerful in some cases, it is also capable of getting stuck in blind alleys and dead ends. There are times when piecemeal reform will not do, and a complete overhaul is required. Consider the case of police corruption. Once the police start taking bribes, it’s incredibly difficult to get them to stop. First of all, whatever rules you may impose, who is going to enforce them? Second, the existence of this secondary income source results over time in lower wages, which means that police officers become dependent upon bribes in order to make ends meet. This makes it difficult for one officer, or even a committed group of officers, to stop taking bribes. Thus the “culture of bribery” becomes self-enforcing, in many cases resulting in government paralysis. There is no obvious way of “evolving” out of it. The only way to fix the problem is to do something dramatic (as in the Republic of Georgia, where in 2005 President Mikheil Saakashvili decided to eliminate corruption among traffic police by firing the entire force—30,000 officers).

Thus there is no one-size-fits-all solution to the question of how social change should occur. There are times when the Burkean faith in tradition, along with the powers of piecemeal social reform, is entirely appropriate. But there are also times when Enlightenment rationalism and radical change are called for. The American political system, for instance, is designed to break up any large concentrations of power. One consequence is that anything other than piecemeal reform is extremely difficult to achieve. Yet the inefficiencies that result from incrementalism tend to accumulate over time.40 The effects of this can be seen all over. Consider the American tax code, which is extraordinarily byzantine and seems as though it were designed to antagonize the population. While many countries do a complete overhaul of the tax system every decade or so, the United States has gotten along with mere tinkering. The same can be said for health care. It is extremely difficult to see how any incremental process could improve things. Anything less than a complete overhaul will simply lead to increased spending. Yet the American political system is simply not capable of delivering “overhauls” of anything.

Although many Americans manage to persuade themselves that this is one of the virtues of their system,41 the fact that no other nation in the world has chosen to imitate it speaks volumes. Sometimes big problems require dramatic solutions. This is something that business managers have come to understand. Some organizations can be reformed, but others simply have to be broken apart and put back together along entirely different principles. We need to respect the power of “bottom-up” solutions, but we also need to leave room for radical, sweeping reform in cases where we are mired in muck or locked into a self-reinforcing system of mutual antagonism. So while it’s important not to overstate the powers of reason, when it comes to understanding complex systems it’s also important to realize that sometimes the only way to move things forward is through big, ambitious plans for reform. Big plans, however, can only be formulated and implemented through the exercise of our rational faculties.

To say that we need to strike a balance between reason and intuition would be to say something true but uninformative. The question is how to decide when we should place our trust in nonrational problem-solving mechanisms. Here it is helpful to have a sense of what these different systems are good at and where they struggle. Consider the following as a scorecard for the nonrational part of our brain.

Things that it’s very good at:

Pattern recognition

Tracking movement

Mind reading

Remembering things

Building associations

Seeing the big picture

Things that it can’t do:

Count past three

Follow an argument with more than two steps

Consider hypothetical states of affairs

Think strategically

Manage uncertainty

Carry out a long-term plan

Solve large-scale collective action problems

For all its weaknesses, the list of things that only reason can do is quite impressive. Furthermore, the political implications of this analysis are clear. Civilization is not something that arose spontaneously, as an expression of our natural sociability (the way that a hive is for bees). On the contrary, civilization represents the triumph of reason over intuition. It is a precarious achievement, one that is reproduced only because we are able to systematically cultivate and reproduce the controlled processing style of cognition that allows us to preempt and override our more intuitive responses when we judge that these are liable to produce collectively self-defeating outcomes. In this respect, Freud was right in saying that civilization is built upon a partial repression of our instincts. The “natural” form of political authority is just personal loyalty to the ruler. Establishing a system whereby individuals show loyalty to an institution—to the office of the ruler, rather than to the person—is highly unnatural.

Looking around, it is easy to see that most of the institutional features that make our society decent and livable are unnatural in the same way. The rule of law is achieved only when individuals renounce the private use of force and sublimate their retributivist impulses. A market economy is possible only when individuals abstract from the fairness of particular transactions and evaluate the merits of the system in terms of its aggregate effects. Bureaucratic administration is possible only when individuals resist the temptation to use their power to advance the interests of friends and family. The modern welfare state arises when individuals accept that not all collective action problems can be solved through informal systems of reciprocity, that some require the organized enforcement of rules. Multiculturalism and ethnic tolerance are possible only when individuals recognize the cultural specificity of many of their own norms of conduct and so refrain from trying to impose them on others. Each one of these steps is extremely demanding, in the sense that it requires both self-control, in order to override the intuitive response, and very abstract, hypothetical reasoning, in order to see the rationale for the institutional arrangement.

The take-home lesson is that in the centuries-old struggle between civilization and barbarism, reason is not a neutral bystander. Certain institutional arrangements are actually “more rational” than others, in the sense that they depend upon people being rational for their reproduction and their merits can be properly assessed only from a rational point of view. This has important implications in the political domain, because it means that certain institutional arrangements can be defended only through appeal to rational arguments and considerations. And thus, to the extent that rationality is squeezed out of democratic political discourse, we may find that a gradual slide into barbarism is the inevitable outcome.