2. The art of the kluge

On the haphazard construction of the mind

In 1793, as the French Revolution entered its most radical phase, the Catholic church was outlawed by the central government and replaced by a new, secular cult, dedicated to the celebration of “Reason and Philosophy.” Notre Dame Cathedral, in the heart of Paris, was stripped of its Christian furnishings and transformed into a Temple of Reason (complete with an “altar of Liberty” and an inaugural procession that ended with the appearance of the “Goddess of Reason”).

The example was soon imitated across France. In 1794, at a ceremony held to inaugurate the new Temple of Reason in Châlons-sur-Marne, one observer provided a written account of the procession.1 The most interesting part is his record of banners that hung from the carts and wagons that made up the parade. The picture that emerges provides an uncomfortable sense of the darker forces that were at work in French society at the time.

It started out innocently enough: “Reason guides us and enlightens us,” read one pennant. “Prejudices pass away, reason is eternal,” read another.

Interspersed with these noble sentiments, however, were some rather more sinister suggestions. One cart contained a group of French soldiers wounded in combat, with a banner reading “Our blood will never cease to flow for the safety of the fatherland.” A bit farther on, a call to arms: “Destroy the tyrants, or die.”

Another cart displayed a group of wounded enemy prisoners. “They were very mistaken in fighting for tyrants,” read the banner.

The most disturbing, however, belonged to the local “surveillance committee,” the group responsible for rounding up and executing “counterrevolutionaries” in what would eventually come to be known as the Terror. Their banner read simply, “Our institution purges society of a multitude of suspect people.”

In the end, the Goddess of Reason didn’t quite merit the faith that was being shown in her. First of all, despite being “eternal,” reason manifestly failed to deliver order and stability to French society. Since the revolution, the French state has gone through somewhere between ten and seventeen constitutions, depending on how you count (compared to, say, the United States, which is still on its first). France is currently on its Fifth Republic, but it has also been, since the revolution, a monarchy, an empire, a military dictatorship, and a Nazi client state—a procession of political arrangements that made the ancien régime look rather decent by comparison.

Apart from all that, the French Revolution also marked the appearance on the political scene of a certain sort of ruthlessness—indeed, murderousness—in the use of state power that was to be reproduced time and again over the course of the twentieth century. Many innocent victims were sacrificed on the altar of Reason. This is a legacy that defenders of the Enlightenment impulse must contend with. Robespierre had unwisely suggested that revolutionary leaders should “guide the people by reason and repress the enemies of the people by terror.”2 The question is whether the second half of this was just an unnecessary flourish or whether it bore some internal connection to the rationalist temperament. At the very least, before calling for a second Enlightenment, we need to understand how the first one could have gone so horribly wrong. To say that “they weren’t rational enough” is not an adequate response. What grounds do we have for thinking that we are any more rational now? We need to face up squarely to the weaknesses and limitations of reason and to recognize that the desire to rebuild all of society along “rational principles” is misguided. Only then can we redefine the Enlightenment project in such a way as to make it not just workable, but also more likely to improve the human condition.

There are many things that we are naturally well equipped to do; unfortunately, reasoning isn’t one of them. The ability of our brains to sustain rational thought is an example of what evolutionary theorists refer to as exaptation—where something that evolved in order to serve one purpose gets co-opted to perform some other.3 The classic example is feathers, which seem to have evolved in order to provide insulation and warmth, and only later became used for flight. Human design and engineering is full of this sort of repurposing as well. Consider the Nigerian student who acquired brief internet celebrity by building a helicopter from discarded car parts.4 It worked, but not very well, simply because none of the components had actually been designed for the purpose to which they were being put.

Human reason has a lot in common with a helicopter built from car parts. As Daniel Dennett observes, “many of its most curious features, and especially its limitations, can be explained as the byproducts of the kludges that make possible this curious but effective reuse of an existing organ for novel purposes.”5 Dennett accentuates the word kludge (nowadays more often spelled kluge) here because it is the crucial concept. The term is commonly used by engineers, mechanics, and computer programmers to describe a quick workaround that gets something working without really fixing the underlying problem.

Programmers often have to resort to kluges when trying to debug software. Suppose, for instance, you write some subroutine that takes a number as input, performs a complicated calculation on it, and then spits out another number as output. Everything is working fine, except that for some reason, whenever it receives the number 37 as input, the subroutine does something strange and produces the wrong answer. After spending hours staring at the code, trying to figure out why it’s not working properly, you give up trying to fix it. Instead, you just figure out manually what the right answer should be when the input is 37. Suppose it’s 234. You then add a line of code at the beginning that says something like the following: “Take the input number and feed it to the subroutine, unless that input happens to be 37, in which case just send back 234 as the answer.”
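For anyone who wants to see the trick written down, a minimal sketch in Python might look something like this. The function names and the placeholder calculation are invented for illustration; the only thing that matters is the special case for 37.

    def buggy_subroutine(n):
        # Stand-in for the complicated calculation described above. Assume it
        # returns the right answer for every input except, mysteriously, 37.
        return n * 6 + 3

    def patched_subroutine(n):
        # The kluge: intercept the one troublesome input and hand back the
        # answer that was worked out by hand (234), leaving the bug in place.
        if n == 37:
            return 234
        return buggy_subroutine(n)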

This is a kluge.

There are several components here that are essential. Most importantly, it works, in the sense that it causes the larger program to perform correctly even though the subroutine is still broken. It is also massively inelegant. If anyone else saw your code and figured out how you “fixed” the problem, you would be embarrassed. And finally, the underlying problem is still there, and so is likely to resurface on other occasions. (Suppose your code gets integrated into a larger program, and other people start using your subroutine without realizing that they need to make this special exception for the number 37. Then the bug shows up again.)

Rational thought is made possible by an enormous collection of kluges. We tend not to realize this, simply because we have become so used to the way our minds work that we never stop to think about it. The psychologist Gary Marcus, however, has produced a sizable catalog of them (in a book appropriately titled Kluge: The Haphazard Construction of the Human Mind).6 Consider, for example, the way human memory works. When people design information storage and retrieval systems, they usually use what’s called a location-addressing system. The most familiar example of this is a library. Each book is given a number before it is put on a shelf in the stacks. This number is the “address,” and it tells you the location of the book. Each address is entered into a central registry (what some of us still call, anachronistically, the “card catalog”), along with an indication of what can be found there. This makes retrieving things easy: to find a book, you just get the address from the catalog, go to the location it points to, and pull the book from the shelf.
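In computing terms, a location-addressed store is about as simple as data structures get. The toy sketch below (the shelf number and title are made up) captures the library’s logic: a catalog maps each title to an address, and retrieval is a mechanical two-step lookup.

    stacks = {}     # address -> book: the shelves themselves
    catalog = {}    # title -> address: the "card catalog"

    def shelve(address, title):
        stacks[address] = title
        catalog[title] = address

    def retrieve(title):
        # Look up the address in the catalog, then go straight to that location.
        return stacks[catalog[title]]

    shelve(152, "Kluge: The Haphazard Construction of the Human Mind")
    print(retrieve("Kluge: The Haphazard Construction of the Human Mind"))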

This system is the most efficient way of storing information for subsequent retrieval, as witnessed by the fact that computer memory is organized in exactly the same way (with a set of locations, each identified by a unique address). Unfortunately, this is not the sort of memory system that was given to us by evolution. Marcus refers to the system we have as “contextual memory,” and it has a number of quirky features.7 Most importantly, it is not systematically searchable, but is instead triggered by various sorts of stimuli, usually coming from the immediate environment, and is linked together by chains of association.8 For instance, many people will have had the following, familiar experience: you go to another room—say, the kitchen—to get something, but when you get there you can’t remember what it was you came to get. Try as you might, you simply can’t remember. If you go back to the spot where you were standing when you decided to go to the kitchen, all of a sudden the memory comes back again. Why? Because the memory is cued by the environment. When you see your desk you think, “Oh yeah, I wanted a cup of coffee.” But the kitchen doesn’t evoke that memory.
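A crude way to see the difference is to contrast the library catalog just described with a store that can only be queried by cues. The associations and “memories” below are invented, and real contextual memory is vastly more complicated, but the basic behavior is the same: what comes back depends on where you happen to be standing.

    # Each memory is tagged with associations; retrieval is driven by whatever
    # cues the current environment happens to supply.
    memories = [
        ({"desk", "tired", "morning"}, "I wanted a cup of coffee"),
        ({"kitchen", "smoke"}, "Last time I left the stove on"),
    ]

    def recall(cues):
        # Return whichever memory shares the most associations with the cues,
        # or nothing at all if none of them are triggered.
        best = max(memories, key=lambda m: len(m[0] & cues))
        return best[1] if best[0] & cues else None

    print(recall({"desk"}))      # at your desk, the coffee memory comes back
    print(recall({"kitchen"}))   # in the kitchen, something else entirely does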

In many ways, our memory is like an old-fashioned “closed stacks” library, where you’re not allowed to go in and look for the book yourself but instead have to give a little slip of paper to a librarian, who goes and gets things for you. Even worse, though, you often can’t persuade the librarian to get the book that you want. Suppose instead that the librarian watches over your shoulder as you go about your business, then runs off and fetches some book that he thinks you might find useful. Unfortunately, he has an unhealthy preoccupation with sex, violence, and food, and so often comes back with books that are not only unhelpful, but actually make it difficult for you to concentrate on what you’re doing. (Also, the stacks are in complete disarray, being the result of a partial merger between three or four older collections. Many of the books have also been destroyed, with the library having only a low-resolution scan left. So the librarian has to print up a new copy for you whenever one of these books is needed. Each time he does this he “cleans things up” a bit—unbeknownst to you—rewriting text that is too fuzzy, filling in the blanks, or adding in some details that he thinks might have been missed in the scan.9)

This is what we live with. There are, of course, good evolutionary reasons why the mammalian memory system would develop in this way (particularly if you think of our bodies, including our brains, as survival machines built to preserve our genes). It puts the highest-priority information at our fingertips, right when we are most likely to need it. Instead of having to search through our memory ourselves, we have what computer types call an “intelligent agent,” who sorts through it for us, quickly and without taxing central resources. So we get rapid updates with useful information, like “That guy has a mean temper,” or “Your brother was killed by a creature that looks a lot like that,” or “Last time you ate one of those you were sick for days.”

Unfortunately, the intelligent agent in question was programmed by our genes, not by us, and so its concept of important is not quite the same as our concept of important. When studying for an exam, the most important thing to you might be memorizing the periodic table, while your memory system is interested in anything but. And so your mind wanders, typically to things more closely related to survival and reproduction. Deleting things was also (apparently) not an important priority, given that the whole system was designed to work for only thirty years or so. As Pascal Boyer has observed, rather poignantly, the experience of grief is closely related to the fact that we lack the ability to delete the file on people after they have died, and so the librarian continues to prompt us with memories that are no longer relevant to the actual world.10 We have no way of making him stop, except the passage of time, which weakens many of the associations.

The entire system would be completely unworkable were it not for the fact that we have discovered all sorts of clever ways of tricking the librarian into getting us things that we want and ignoring things that are irrelevant. In particular, we’ve figured out how to make it look like we’re doing something, without actually doing it—enough to trick the librarian into going and getting us what we need. We’ve also noticed that the librarian tends to show up with more than just one book; he brings along extra material that he thinks might be related, based on patterns of association. So we have discovered that through repetition, or by putting things together into a narrative, or by otherwise building associations, we are able to enhance our recall.

And then, of course, there is language. Luckily for us, the librarian doesn’t distinguish between the sound of a bird, a crackling fire, or a falling tree and the sound of a spoken word—any of these can cue a host of memories. Similarly, he does not distinguish between the visual pattern of a leaf, an animal, or a cloud and that of a written word. Thus we can use language to manipulate the librarian, creating a set of triggers that are decoupled from the immediate environment. You can retrieve a picture of an elephant just by seeing the written word elephant.

We also learn how to translate information into different modalities, to get it into a format in which it can be more easily remembered. An unfamiliar name may be impossible to remember as a sound pattern, but easy once you learn how to spell it. A Chinese character may be impossible to remember as an image, but easy once you’ve learned the stroke order with your writing hand. A list of objects may be impossible to remember as a list, but easy once you learn to imagine a room with all of those things in it. These are all different ways of coaxing better performance out of our memory system.

The important point is that none of these little tricks fixes the underlying problem, which is that our memory is not well designed to support rational cognition. But it is what it is, and we can’t fix it. So we develop work-arounds, or kluges. Everyone has his or her own little bag of tricks, some of them better than others. There are some people who have great memories, such that one suspects that their brains just work better than everyone else’s. But if you look more carefully, you can see that a lot of people with great memories actually just have a better bag of tricks.

Suppose that one day you are abducted by aliens. You wake up on their ship strapped to a gurney. You notice that they are about to perform a series of horrific experiments on you, ending with a dissection that will most certainly bring about your death.

“Wait,” you cry out. “Stop!”

The alien scientist comes by, looks at you, and says, “Why should we stop?”

Pausing only briefly to acknowledge his surprising mastery of spoken English, you say, “Because it’s wrong to torture other intelligent species.”

“What intelligent species?” says the scientist. “Surely, you’re not referring to yourself.”

“Yes, I’m referring to myself,” you say. “I’m really smart.”

“No, you’re not,” says the alien. “You can’t even do mathematics.”

“Yes, I can,” you say.

“Okay, then tell me, what’s 78 times 43?”

“No problem. Do you have a pencil and paper? And could you unstrap my arms?”

The alien looks puzzled. “Why do you need a pencil and paper? Can you answer the question or can’t you?”

“Yes, I can answer it, I just need a pencil and paper.”

“Where I come from, we use our brains to do mathematics. You do it in some other, alien way?” says the alien.

“Sure. Untie me and I’ll show you.”

Moments later, equipped with a pencil and paper, you quickly work out the answer (3,354). And just to make sure you don’t wind up back on the gurney, you solve a couple of quadratic equations, compute some second-order derivatives, and sketch out Cantor’s uncountability proof.

“Wow,” he says. “You guys really can do math. What a strange species. How were we supposed to know that your brains require pencils in order to function correctly?”

At this point you realize that the alien has fallen victim to a very fundamental misunderstanding of how the human mind works. He thinks that your mind is housed entirely in your brain, and that your capacity to reason is based entirely upon the biological substratum of your cognitive system. So when you said that “you” were good at mathematics, he thought that you meant “you” in the sense of “your biological brain,” whereas what you really meant was “my biological brain plus something to write with and something to write on”—an easy mistake to make. And if you think about the mind strictly in terms of the brain, then he was right: humans are not a particularly intelligent species. When you abduct people—separating them entirely from their environment and from the artifacts they have developed that both augment and transform their computational abilities—then they’re not that smart. The peculiar genius of the human brain, however, lies not in its onboard computational power, but rather in its ability to colonize elements of its environment, transforming them into working parts of its cognitive (and motivational) system.

This misunderstanding of the human mind is actually quite common. Consider again Dennett’s characterization of the rational mind as a serial virtual machine “implemented on the parallel hardware of the brain.”11 The serial processing system is actually implemented not just on the hardware of the brain, but on portions of the environment as well. (In philosophy this is known as the extended mind thesis; it has been articulated and championed most forcefully by Andy Clark.) Dennett acknowledges that the mind often has to “offload” some of its memories onto the environment.12 The picture here is of the brain being like a CPU in a computer, while the world serves as a hard drive. Yet the role that the pencil and paper play when we are doing long multiplication would clearly be classified by any computer scientist as part of CPU function, not storage. This is even more obvious in the case of an abacus, where you are actually doing no math at all in your head, just moving your fingers in certain patterned ways, like playing a guitar. The computations are not just being stored outside your brain, they are being done outside your brain. Yet is there any important difference between the neophyte working an abacus and the older merchant who has “internalized” the device so that he need only twitch his fingers to work out the sums?

There is a close analogy to this in the way that we think about our body. Where does our body stop and the world begin? Our inclination is to think that our body is a collection of cells, distinguished by the fact that each one contains the same collection of human DNA. There is a sense in which this is true. And yet one’s body, in this sense, is not a functional unit. The living breathing body that you walk around with is actually an extremely large colony of organisms, in which nonhuman cells outnumber human cells by at least 10 to 1. (The National Institutes of Health in the United States have recently launched the Human Microbiome Project, as a successor to the now complete Human Genome Project, in order to catalog and sequence the nine hundred or so species of microorganism that we normally carry around with us.) Some of these organisms are parasitic, but many are symbiotic species. Digestion, in particular, would be impossible without all the helpful bacteria that populate our gut. A purist might want to say that, technically, all these bugs are not part of “you.” The point, however, is that “you” are not functional without them, since your body has coevolved with them. We have no trouble saying that “you” digested your last meal, without getting picky about which parts of the process were done by which members of the colony. Furthermore, even though your gut has basically “offloaded” certain tasks onto the bacteria, the entire process is still one that we are happy to call “digestion.”

For the same reason, we should be perfectly comfortable calling a certain process “thought” even when portions of it are offloaded onto environmental systems. In the same way that your digestive system includes a population of bacteria, your “cognitive system” includes a lot of what Clark calls environmental “scaffolding”: pencils, letters and numbers, Post-It notes, sketches, stacks of paper, internet searches, and, most importantly, other people.13 These are not merely passive storage systems; they are moving parts in our processes of rational thought.

The most obvious examples of environmental scaffolding involve our memory system, which is, as we have seen, especially ill suited to the type of tasks that we would like it to perform. Working memory, in particular, is not only the central bottleneck in any serial processor, it is an area in which our biological brains perform particularly badly.14 Most people cannot remember a seven-digit phone number for long enough to hang up their voice mail and dial it. (And, of course, the reason that most of us can’t multiply 78 by 43 without a pencil and paper is that we can’t keep the four intermediate products in memory for long enough to be able to add them up. This is, when you think about it, rather pathetic.) The central virtue of a writing system or an abacus is that it overcomes this limitation. Once the marks are made on the page or the beads are arranged in a certain position, they stay put, so that we can shift our attention to other things and come back to them later. The result is a massive increase in computational power.
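To make the point concrete, here are the four intermediate products in 78 times 43, with a Python list standing in for the pencil and paper. All the external marks are doing is holding the partial results still until it is time to add them up.

    partials = []
    for a in (70, 8):          # the digits of 78, by place value
        for b in (40, 3):      # the digits of 43, by place value
            partials.append(a * b)

    print(partials)            # [2800, 210, 320, 24]: four products to keep track of
    print(sum(partials))       # 3354, the answer worked out on the alien ship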

This sort of “solution” to the limitations of our working memory system is the perfect example of a kluge. After all, writing things down doesn’t fix the underlying problems with our working memory system, it simply allows us to work around them. Rational thinking, as we have seen, is made possible through an extensive system of kluges. Although many of them rely upon aspects of the environment, there is no useful inside/outside distinction to be drawn. Some kluges are environmental; some are psychological. Many more start out being environmental and later become psychological, through internalization. The important thing to recognize is that human reason is not merely “enhanced” by these environmental kluges, any more than human digestion is “enhanced” by the presence of intestinal flora. It depends upon them. In fact, psychologists have shown that a huge amount of human irrationality can be provoked by taking people out of their usual environment and putting them into a situation where they lack all of the scaffolding that they normally use to make decisions. Indeed, the typical psychology study, in which students are escorted into an empty room, seated in front of a computer, and then left alone in silence to answer a series of questions, is not all that different from an alien abduction. It’s no surprise that people perform poorly under such circumstances, since our biological brains are not very good at reasoning all by themselves.

Imagine that, in order to abduct people, the aliens used a teleportation device that screened out all nonhuman DNA, so that when you arrived on the spaceship you were perfectly cleansed of all other organisms. You might enjoy being a couple pounds lighter, but unfortunately, your body would immediately start to malfunction in all sorts of ways (intestinal disorders, vitamin deficiency, skin infections, etc.). After several days, the aliens might look at you and wonder how humans manage to survive at all—we seem totally dysfunctional. But the mistake is on their side. No wonder you don’t work properly: they abducted only a part of your digestive system when they left all the bacteria behind, just like they abducted only a part of your rational mind when they left your pencil and paper behind.

As we have seen, the central advantage of a serial processing system is that it is able to chain together a sequence of operations, in which the content of what comes later depends upon what was determined earlier. This is what allows it to reason. This advantage, however, comes with a disadvantage. A serial processor, by its very nature, does one thing at a time. The world, unfortunately, often demands that we do more than one thing at a time. The only way that a serial processor can handle this is by taking turns, doing a bit of one task, then a bit of another, then returning to the first and doing a bit more. There is a huge bottleneck here, which in turn creates an extremely complex optimization problem. How much time should be allocated to each task? In what order should the tasks be performed? Unfortunately, the way that our biological brains try to solve this problem is very far from ideal, mainly because they are not adapted for serial processing.

To see what an optimal solution to the problem looks like, consider how multitasking is achieved in a computer. Multitasking—the ability to run several applications or processes simultaneously—is actually an illusion. The computer is doing only one thing at a time; it is just alternating very quickly between them. In order to manage all this, the operating system has something called a scheduler, whose job is to ration CPU time between all of these different processes. Apart from having an algorithm that it applies in order to determine task sequence, the scheduler has two special powers. The first is the power of preemption—if a process is taking too long, the scheduler has the power to interrupt it and move on to other things. Of course, since not all tasks are equally important, each task is assigned a priority level, and preference is given to high-priority tasks. The priority system, however, gives rise to potential abuse. What is to stop unscrupulous software designers from making their programs run faster, by assigning them urgent priority levels when the tasks they seek to perform are of only moderate importance? To avoid this problem, the scheduler exercises a second important power, which is the ability to control the priority level. The highest priority levels are completely controlled by the system, so that programs must ask for permission in order to be assigned a high level. The scheduler also reserves the right to reassign priority levels if it finds that certain processes are hogging CPU time or taking too long to finish. It can also terminate them at will if it feels that they are misbehaving, are no longer useful, or have crashed.
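For the curious, here is a heavily simplified sketch of the idea in Python. It is not how any real operating system’s scheduler is written, and the task names and numbers are invented, but it shows the two powers in action: work is preempted after a fixed time slice, and the scheduler, not the task, decides what priority a long-running task re-enters the queue with.

    import heapq

    def run(tasks, time_slice=2):
        # tasks: list of (priority, name, work_remaining); lower number = higher priority
        queue = list(tasks)
        heapq.heapify(queue)
        while queue:
            priority, name, work = heapq.heappop(queue)
            done = min(time_slice, work)     # preemption: run one slice, then stop
            print(f"{name} runs {done} unit(s) at priority {priority}")
            if work - done > 0:
                # The scheduler controls priority: a task that keeps needing CPU
                # time is demoted so that it cannot hog the processor.
                heapq.heappush(queue, (priority + 1, name, work - done))

    run([(0, "keyboard input", 1), (2, "video encode", 6), (1, "mail sync", 3)])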

The reason this is worth knowing is that the problems posed for a computer by the need to multitask are exactly the same as the problems posed for a human. Our intuitive judgments are generated by a parallel processing system, which is capable of genuine multitasking. With explicit, rational thought, on the other hand, we are limited to one task at a time. The human equivalent of CPU time is attention, and as we all know, you can pay attention to only one thing at a time. (People who consider themselves good at multitasking in the realm of explicit cognition sometimes think that they are performing multiple tasks simultaneously, but in reality they aren’t. They are simply shifting their attention from one thing to another. Furthermore, there is some evidence to suggest that people who think of themselves as good at multitasking are actually just bad at concentrating.15 They multitask only because they are easily distracted, as a result of which they tend to perform worse on all tasks when compared to those who have a more “plodding” style.16) Because of the bottleneck it creates, attention must be carefully rationed, in the same way that CPU time in a computer is rationed. Our scheduler, however, lacks some of the characteristics that a good system should possess. First, it has only limited powers of preemption and, second, it doesn’t seem able to reassign priority to tasks. As a result, we wind up using all sorts of kluges in order to keep things running with at least a semblance of order.

There is general agreement among psychologists that attention is metered out through a system of competition between stimuli. At any given time, your brain is inundated with potential information. (Timothy Wilson estimates that our brains are receiving about 11 million discrete bits of information per second, of which no more than 40 can be consciously processed.)17 Thus an enormous winnowing must occur. If you think of just your own body, there are usually several dozen spots on your skin that would like to be itched, some muscles that are slightly uncomfortable and would like you to shift your weight, a rumble in your stomach that is asking to be fed, a cut or bruise that is generating a slight pain, a roughness in your throat that would like you to cough, a fuzziness in your head that could be remedied with a short nap … The list goes on and on. When you’re doing something engaging, such as watching a good movie or playing tennis, all of these impulses get ignored. They don’t actually go away—they are still running processes—they simply lose out in the competition for attention. They get no CPU time. You notice them only when you start to do something slightly less engaging, like sitting quietly and reading a book. Then all of a sudden you find yourself scratching and shifting around, falling asleep, or even remembering things that you had been forgetting to do. This is because low-priority tasks are finally managing to break through into consciousness and attracting a bit of attention to themselves.

Surprisingly, there don’t seem to be any stimuli with an automatic override in this system. Pain, for example, doesn’t have any sort of dedicated channel, but rather has to compete on all fours with other stimuli in order to get noticed. This explains the well-documented phenomenon of soldiers in combat (or, less dramatically, athletes in competition) suffering serious injuries but not noticing the pain until things start to calm down.18 It is also amazing what people are able to overlook when they are paying attention to something else. In one particularly famous experiment, subjects were shown a video of students in different-colored shirts passing around some basketballs, and were asked to count the number of times that someone wearing a white shirt passed the ball. Halfway through the video, a woman in a gorilla suit walked through the shot, stopped and thumped her chest, and walked off. Afterward, subjects were asked how many passes they saw, and then whether they had seen anything unusual in the video. Almost half of all subjects said no, and in fact had no recollection of the woman in the gorilla suit at all. When shown the video again, some insisted that it was a trick, and that the investigators must have been showing them a different video.19

It’s not clear which is more bizarre: not noticing that your leg has been blown off while you’re under enemy fire or not noticing a person in a gorilla suit right in front of your eyes, simply because you’re trying to count. Either way, we have an amazing ability to ignore things. Unfortunately, we also have very little control over it. Part of the reason we find it difficult to believe that people could fail to notice traumatic injuries is that we have all had the experience of trying to ignore pain and failing. This is because we are powerless to reassign priority levels to stimuli the way that a multitask scheduler does. So we often find ourselves unable to ignore a nagging pain or an unwanted thought or the sound of a car alarm nearby, despite the fact that concentrating on these things serves no useful purpose. We therefore resort to kluges—often we try to distract ourselves, generating an alternate stimulus powerful enough to outcompete the annoying stimulus, thereby leading us to disregard it.

Our powers of preemption are also limited—although not entirely nonexistent. Many people who have suffered a traumatic brain injury, especially to the frontal lobe, exhibit perseveration, which is basically an inability to terminate thought processes in an appropriate and timely fashion. They find themselves unable to stop thinking about something, unable to move on to a new topic of conversation, and unable to abandon unsuccessful problem-solving strategies. While the symptoms in these cases are pathological, they are in many ways just an extreme version of a difficulty that we all face. If you keep track of your thoughts over the course of a day, you’ll find that you are subject to an enormous amount of what psychiatrists call unwanted ideation—things you have difficulty stopping yourself from thinking about. It could be an irritating comment made by a co-worker, a video game you were playing yesterday, an anxiety related to your children, or a sexual image or fantasy. Straightforward inhibition of such thought processes is very difficult, and so most of us rely upon kluges—we try to think of something else, something more distracting, in order to drown out the thought we are trying to get rid of.

For most purposes, the system that we have works well enough. The mere fact that our brains don’t straight-up crash is a significant accomplishment (software engineers have yet to invent an uncrashable computer system). Yet for certain tasks, particularly those that require a great deal of concentration, we start to bump up against design constraints. While certain complex reasoning tasks are highly rewarding (say, plotting out an elaborate revenge against a hated rival), others are far less so. Consider the task of reading a textbook. Imagine that the information is presented in a dry, factual format, without the benefit of amusing anecdotes and alien abduction scenarios. Imagine that the benefits of learning the material are nonobvious and far removed in time (suppose you’re reading it just because you feel it is something you should know). Many people find it absolutely impossible to concentrate under these conditions. They can read for no more than five minutes before other thoughts begin to intrude and eventually take over.

The problem is that the task you’re trying to perform generates an incredibly low level of stimulus and therefore easily gets trumped by almost anything else that comes along. Because we are unable to directly control our thoughts, we need kluges. Some of this may involve making what we are doing seem more exciting, but the standard basket of strategies involves manipulating the environment in such a way as to make everything else less exciting. The first thing you need is to be alone. Then you need a place that is quiet: no music, no irritating noises. It’s also good to have an environment that is familiar: nothing new or interesting to attract your attention, just the same old chair, with the same old lamp. And obviously, you need to get rid of anything that is even vaguely reminiscent of sex (since at least half the population finds sex second only to pain in its ability to command attention). It also helps to be healthy, fed, and well rested.

Think of these as the seven kluges of highly effective people. There are some environments in which it is literally impossible to think. The most basic trick when trying to concentrate is simply to close your eyes, an expedient that we all resort to at one time or another. It is important to recognize that this is, in its own way, a kluge. We cannot control our attention directly, so we resort to a second-best solution, which is to block out one of the channels through which our environment impinges on consciousness.20 Unfortunately, there are many tasks that cannot be completed with our eyes closed. The next-best way to enhance our ability to think is to create an environment that is conducive to thinking. As Clark put it, one of the central features of human cognition is that “we build ‘designer environments’ in which human reason is able to far outstrip the computational ambit of the unaugmented biological brain.”21 When you walk into someone’s office or study or even car—wherever that person does his or her thinking—you are typically entering one of these designer environments.

It is important to understand that these environments are not just ones in which we happen to feel most comfortable, like in a house set at room temperature. Our body temperature stays pretty constant, regardless of what the temperature in the room is—the only impact of the ambient temperature is on how much energy it takes to maintain a constant body temperature. Thinking, however, is not like this. Unlike body temperature, which you can maintain all on your own, concentration is not something that you can maintain on your own. Your biological brain simply lacks the tools needed to accomplish this. And yet concentration is absolutely fundamental to the task of reasoning. And so we try to achieve it through manipulation of the environment. To understand the relationship between your brain and the surrounding world, it is better to think of yourself as being like a cold-blooded creature, whose metabolism actually slows down and eventually stops altogether as the outside temperature drops. When lizards bask in the sun or retreat to the shade, they are using the external environment as a way of achieving an optimal body temperature. This is how our brains use the environment: we fiddle with it in order to get ourselves thinking right. We are like cognitive ectotherms.22

These examples are all intended to illustrate Dennett’s claim that our brains are extremely inefficient when it comes to reasoning. The philosophers of the first Enlightenment inherited from both the ancient Greek and the medieval Christian traditions a view of reason as a tiny island of perfection in a sea of corruption and decay. Reason was regarded as a direct imprint of the divine intelligence, and therefore as possessing the same perfections—unity, order, simplicity, and goodness. The central mistake made by early Enlightenment thinkers lay in their failure to break with this tradition. They adopted a theory of reason that was not only untrue, but in many ways the opposite of what is true. Far from reflecting a divine intelligence, the structure of human reasoning systems is below even the (already low) standards of evolutionary “design,” because it is not adapted for the job it is currently being asked to perform. When you hear the word reason, you should think not of angels beating their wings, but rather of homemade Nigerian helicopters.

Seeing things in this light allows us to better understand the failings of the first Enlightenment. The partisans of reason assumed something like an “anything you can do I can do better” stance toward all the products of the human spirit. Their goal was to replace tradition, authority, and intuition with the exercise of pure reason. This is one of the factors that made rationalist politics, from the very beginning, incline toward revolutionary politics. One can see this most clearly in the rise of social contract theory, an approach to thinking about political questions that was shared almost universally by first-generation Enlightenment thinkers. This theory invites us to imagine a “state of nature” in which there are absolutely no institutional constraints or rules: no state, no laws, no economy, no educational system, and in many cases not even the family. It then says, “Suppose you could rebuild all of these institutions, from the ground up, from scratch—how would you do it?” From this intellectual sweeping away of existing institutions, it was not such a great leap to a sweeping away in practice. Thus from the French Revolution through to the communist revolutions of the twentieth century, there was a widespread desire to rebuild both state and society from scratch, in accordance with rational or scientific principles.

To be fair, it should be acknowledged that by the middle of the eighteenth century, “reason” had accomplished a number of astonishing revolutions in the realm of scientific belief. Theories that had been held unchallenged for millennia had been completely overturned. Thus the general authority of tradition had been greatly eroded. Intuition tells us that the earth must be standing still; otherwise, we would fly off it. Yet Copernicus had shown that it moves. Aristotle said that there can be “no motion without a mover,” and for thousands of years everyone had deferred to him. Yet Isaac Newton showed that it is only change in motion that requires explanation; objects in motion will continue that way until stopped.

For Europeans, who had spent centuries believing that their own civilization was inferior to that of the ancient Romans and Greeks, this sudden discovery of massive error in the ancient worldview created an enormous crisis of confidence, not just in ancient belief systems, but also in ancient institutions. Worship of ancient wisdom—Aristotle in particular—came to be seen as a major impediment to the progress of knowledge. It was not so great a leap to imagine that deference to ancient institutions—the church, the monarchy, Roman law—might be an impediment to progress as well. And of course, there were all sorts of problems with these institutions. Abandonment of the principle that the king and his subjects must share a religion, for instance, represented a major advance.

Yet there was considerable overreach in the Enlightenment project. Reason wound up being assigned all sorts of tasks that, in the end, it simply was not powerful enough to perform. At the same time, because partisans of the first Enlightenment conceived of reason in purely individualistic terms, as something that works away inside the brains of discrete persons, they wound up inadvertently dismantling much of the scaffolding that reason requires in order to function correctly. As a result, they kneecapped reason just as they were sending it onto the field to face a much larger and more brutish opponent. It is no surprise, then, that rather than improving various social institutions, in many cases they wound up making things a lot worse.

Psychologist Gerd Gigerenzer has an amusing story that illustrates precisely the trap that early Enlightenment thinkers fell into. It concerns a baseball coach who, frustrated that his outfielders were missing too many catches, became convinced that it was because they were running too slowly, taking too much time to get to the ball. And it’s true—if you look at baseball outfielders, they often run at something much less than top speed when they are moving toward the spot where a pop fly is about to land. The coach decided that if the players simply ran to the spot more quickly, then they would have an easier time making small adjustments to improve their chances of catching the ball. So he gave them a set of new instructions on how to catch fly balls: look and see where it’s going to land, run as fast as you can to get there, then look up and make whatever adjustments are required in order to catch it. Unfortunately, when the players tried to follow these instructions, they found that their ability to catch the ball had been completely undermined. They wound up standing nowhere near where the ball was going to land.

Why is that? Intuitively, anyone can sense what the problem would be. Imagine that you’re standing around the outfield, slightly bored, enjoying the nice summer day. Suddenly there’s a pop fly. You look up into the sky, and you see the white baseball moving against the clouds and the sun. You know it’s your job to catch it. But how do you know where it is going to land? The answer is, you just know. Even people who are terrible at actually making the catch know—they usually manage to get to the general vicinity of where it is going to land. If you close your eyes and imagine the baseball, you can even feel what needs to be done in order to catch it. And what you know, in your body, is that adjusting your running speed is part of how you do it. That’s why you see baseball players slowing down and speeding up as they move toward the ball.

What they’re doing, in fact, is following a very simple heuristic. The mathematical calculations involved in figuring out where a flying baseball is going to land are much too difficult for us to carry out in real time. What we use, instead, is a simple little shortcut, which Gigerenzer calls the gaze heuristic. The rule is something like this: “Adjust your running speed so that your angle of gaze to the baseball remains constant.”23 If you follow this rule with a descending ball, you will initially start out running slowly, then gradually speed up until, as if by magic, you arrive at the ball just as it comes level with your head. (The rule for positioning yourself with respect to an ascending ball is slightly different, but just as simple.) If you override this heuristic by fixing your running speed—the way the coach wanted his players to do—the trick no longer works, and so you’re likely to wind up nowhere near the ball at all.
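It is easy to convince yourself that the rule works without ever computing a landing point. The little simulation below is an idealized sketch (one dimension, made-up launch numbers, no cap on running speed): it locks in the angle of gaze once the ball starts to descend and then simply keeps the fielder wherever that angle stays constant, and he arrives at the ball as it comes down.

    import math

    def fly_ball(vx=6.0, vy=20.0, fielder_x=35.0, dt=0.01, g=9.8):
        ball_x, ball_y = 0.0, 1.0        # hit from roughly bat height
        gaze = None                      # angle of gaze, locked once the ball drops
        while ball_y > 0:
            ball_x += vx * dt
            vy -= g * dt
            ball_y += vy * dt
            if vy < 0 and gaze is None:
                gaze = math.atan2(ball_y, fielder_x - ball_x)   # lock the angle
            if gaze is not None:
                # Move to whatever spot keeps the angle of gaze constant right
                # now; no landing point is ever calculated.
                fielder_x = ball_x + ball_y / math.tan(gaze)
        return ball_x, fielder_x

    landing, fielder = fly_ball()
    print(f"ball comes down at {landing:.1f} m; fielder is standing at {fielder:.1f} m")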

The coach here exemplified—on a very small scale—the hubris of modern rationalism. He took a form of behavior that he didn’t really understand, examined it superficially, noticed a few details that didn’t make sense to him, and said, “Okay, stop what you’re doing, it makes no sense, I have a new system that will be much better.” He ignored the fact that baseball players have been doing things this way forever; he simply assumed that he knew better, and that there was nothing to be learned from intuition or from the accumulated wisdom of ages. He then presented his own, “more rational” solution. Yet the mere fact that something can be done does not mean that it can be done rationally. The coach put into place a system that completely failed, that was far worse than what it replaced. In his quest to improve things, he wound up breaking them.

This is a script that has been replayed countless times, often with far more serious consequences. The literature on development aid, for instance, contains literally thousands of stories of Westerners showing up and messing things up: replacing inefficient local irrigation schemes with large-scale projects that don’t work at all; pressuring farmers to switch their seed, only to find that the new crops won’t grow; bringing in complex equipment that breaks down and can’t be repaired; clearing vast areas of forest, only to provoke large-scale soil erosion. Here is an example, taken almost at random from the literature:

In Malawi’s Shire Valley from 1940 to 1960 British officials tried to teach the peasants how to farm. They offered the standard solution of ridging to combat soil erosion, and were at a loss to understand how Malawian farmers resisted the tried-and-true technique of British farmers. Unfortunately, ridging in the sandy soils of the Shire Valley led to more erosion during the rainy season, while exposing the roots of the plants to attacks by white ants during the dry season.24

There is an interesting parallel between these two examples. The first—catching the baseball—involves overestimating the power of reason by underestimating the effectiveness of nonrational cognitive systems. The second—choice of farming techniques—involves overestimating the power of reason by underestimating the power of evolutionary processes in society. If farmers in Malawi are not able to offer a sophisticated explanation for the soil management practices that they use, there is a temptation to regard these practices as irrational, unjustified, “merely traditional.” And yet people have been farming in the Shire Valley for thousands of years. Chances are their soil management practices are reasonably well adapted to the local environment. Furthermore, the chance that a total stranger is going to be able to walk into this complex ecology and figure out from first principles how things should best be organized is quite remote. And yet time and again, this is precisely what rationalists have done.

Modern conservatism was born as a reaction against this sort of Enlightenment hubris. It is well summarized in G. W. F. Hegel’s powerful yet opaque pronouncement that “the real is rational.” What Hegel meant was that if you look hard enough, you will find that there is usually a reason for the way that things are, even when the way that things are seems to make no sense. People may not be able to say what this reason is, and in the end it may not be the best reason, but you need to understand what it is before you start fiddling with things, much less breaking them down and trying to rebuild them. Thus the conservative temperament was born, as a defense of tradition against the tendency of Enlightenment rationalism to take things apart without knowing how to put them back together again, much less improve them.

In this respect, the core of the conservative critique was absolutely correct. The question is, once we acknowledge this, is the only alternative to fall back into an uncritical acceptance of tradition? Or is it possible to use this insight as the basis for a more successful form of progressive politics?