3. The nuts and bolts of civilization

Where conservatism gets it right

Many things that appear to be the product of intelligent design are actually the product of evolution. This is true not just in the realm of biology, but in human culture as well. For instance, no one just sits down and designs an airplane—or at least, not one that flies. Every airplane in the sky today is a descendant of the first successful airplane, fashioned by the Wright brothers in 1903. The process through which these primitive airplanes evolved into modern ones bears a remarkable resemblance to biological evolution.1 First, there is variation: throughout the nineteenth and early twentieth centuries, literally hundreds of different “flying machines” were created: gliders, kites, dirigibles, ornithopters, and fixed-wing craft. Eventually, someone hit upon a solution to the key problem—which was not actually getting the machine into the air, but rather controlling it once it was up there.

Second, there is inheritance and differential reproductive success (in this case through imitation). After the first successful airplane design was established, a process of piecemeal refinement began. Other inventors copied the basic three-axis control system and wing plan used by the Wright brothers but then made small changes of their own (substituting, for instance, ailerons for wing warping as a way of controlling roll). From that day forward, anyone planning to design an airplane, rather than starting from scratch, began with an existing design that worked, then made changes. A clear pattern of “descent with modification” was established.

Of course, the key difference between cultural and biological evolution is that change in the biological sphere is blind; it comes about by accident. This is sometimes the case in the domain of culture, but more often modifications that arise in culture occur because people are developing intelligent solutions to outstanding problems. Thus cultural evolution is guided in a way that biological evolution is not.2 What makes the process similar to biological evolution is that despite each specific modification being the product of individual intelligence, these modifications eventually accumulate to the point where the artifact as a whole could never have been a product of individual intelligence. It doesn’t matter how smart your engineers are—no one can sit down and design an Airbus A380 from scratch. And yet people do design Airbus A380s. They are able to do so precisely because they do not have to design them from scratch but are able to build upon the work done by previous generations.

Now if you can’t design an aircraft from scratch, why would anyone think you could design a society from scratch? People are considerably more complicated and less predictable than mechanical parts. And yet these were precisely the pretensions of social contract theory. Idealists and visionaries were encouraged to imagine themselves in a state of nature, with no settled institutions, and then ask themselves what they would rationally agree to accept by way of constraints on their freedom. What sort of economy would they choose? What form of government? What sort of judicial system? Why not reorganize the family as well? Enlightened critics of society spent their time sketching rational utopias without, however, taking seriously the question of whether people could ever be persuaded to live in this way.

In his justly famous critique of the French Revolution, Edmund Burke argued that this sort of social engineering is doomed to fail. Successful institutions are built piecemeal, over time. “The work itself,” he argued, “requires the aid of more minds than one age can furnish.”3 The model that Burke had in mind was, of course, the English system of government, which, rather than having been created from scratch through the adoption of a written constitution (in the Enlightenment style), was the result of parliamentary conventions, royal prerogatives, and judicial rulings, as well as laws and treaties, adopted and modified over the course of centuries. While offering exemplary stability, the British system is not obviously inferior to any of the constitutions produced through intentional design. Indeed, it remains the most widely copied system of democratic governance in the world (far more so than the American, which no one has ever seen fit to copy, and which Americans themselves do not even try to reproduce after having brought about “regime change” in other countries).

When one looks at a democratic political system, it is easy to see that it has an incredible number of moving parts. Creating a democracy involves much more than just letting people vote for their leaders. Successful democracies are also characterized by the rule of law, the separation of powers, judicial protection of individual rights and liberties, competitive political parties, freedom of assembly and debate, an independent and free press, and a variety of deliberative and consultative processes (hearings, royal commissions, committee inquiries, etc.), not to mention a set of practices that are widespread but not universal, such as bicameral legislative bodies and constitutional review of legislation. Furthermore, the way that these parts are put together is different in every single country, largely in response to differences in national culture, institutions, and informal social norms.

This is why it is so difficult, especially for outsiders, to create democracies in previously nondemocratic countries. Not only are there a lot of moving parts, and not only do the parts function differently in different cultural contexts, but we don’t even have a satisfactory theory of what makes the parts work together properly in our own society. Indeed, if one looks up “democratic theory” to see what political scientists have to say about democracy, one will find that the field is sharply divided into three separate camps, each wedded to a different theory of what makes a democratic society democratic.4 So “reason” is not much use when it comes to designing such systems from scratch. If our own political institutions were somehow to disappear overnight, along with our precise memory of how they were organized, we wouldn’t know how to rebuild them from first principles. Is it any wonder then that we have such difficulty exporting the model to other countries? (Indeed, it takes a special sort of nerve for Westerners to lecture other nations about the virtues of democracy when our own experts can’t even agree about what institutions and practices are essential to democratic governance.)

What Burke was reacting against, in the French Revolution specifically and in Enlightenment thinking generally, was the massive downgrading of tradition. The mere fact that people can’t “give reasons” for a particular arrangement doesn’t mean there are no reasons. In many cases, the way that things are is the product of many small adjustments that have been made over the years. Thus there may be an enormous amount of accumulated wisdom embedded in traditions and institutions, even if no one is able to articulate exactly what that wisdom is. If we insist on rebuilding everything from scratch with each new generation, then it is impossible to engage in any sort of cumulative learning process, whereby each new generation makes a slight improvement to what it has received and then hands it down to the next.

Consider Thomas Jefferson’s proposal, made in 1789, that all laws—up to and including the United States Constitution—should automatically expire after thirty-four years. His reasoning was a classic piece of Enlightenment rationalism. “No society can make a perpetual constitution, or even a perpetual law,” he wrote. “Every constitution, then, and every law, naturally expires at the end of thirty-four years. If it be enforced longer, it is an act of force, and not of right … This principle, that the earth belongs to the living and not to the dead, is of very extensive application and consequences in every country, and most especially in France.”5

The consequences of implementing such a proposal would be catastrophic, precisely because it seeks to eliminate the mechanism that allows for evolutionary change within our institutions. Imagine every generation having to refight every battle that was ever fought over legislation, not just over abortion, the death penalty, social insurance, and civil rights, but even over habeas corpus, private property, and freedom of conscience. There are many, many issues that we rightly regard as settled but that could easily fail to pass a free vote if they were put forward today as proposals. The recent normalization of torture in American political culture, along with the development of a significant pro-torture faction within the Republican Party, provides an instructive lesson in this regard.

Although many conservatives defend their local traditions merely out of a fondness for the content of those traditions, Burke’s argument has an entirely different structure. In Burke’s view, a general presumption in favor of the status quo is important because, when combined with a willingness to tinker and adjust, it makes processes of cumulative improvement possible. If everyone insists on reinventing everything, we’ll never get anywhere, simply because no one is smart enough to understand all the variables and grasp all of the reasons that things are done exactly the way they are. Instead of trying to change everything at once, it’s better to take things more or less as given, change one thing, then wait and see what happens. This is, of course, not really a full-blooded defense of tradition. It is more a species of neotraditionalism.6 It is a rational argument in favor of deferring to tradition. As such, it does not claim that tradition is always right or that we should honor and obey our parents and elders in all matters. It says that in specific instances, reason is likely to do a bad job at figuring things out, so we may be better off relying on evolutionary processes.

When is reason likely to be bad? One area in which it is certain to be weak is in dealing with very complex systems, where it is difficult to trace causal connections, or where there is a long delay between intervention and outcome.7 Consider, for example, the task of raising children. Anyone who starts out reading parenting books, in order to see what the “experts” have to say, will quickly discover that the experts completely disagree with one another about pretty much everything. Even something relatively simple, like how to deal with picky eaters, is subject to completely contradictory advice. Some say you should never force children to eat anything, or they will be traumatized and suffer for the rest of their lives. Others say that many foods are acquired tastes, so you should force your children to eat. But how many times should you try before giving up? Some say three, others say five, others say dozens. And should you bribe them with dessert? Some say yes, others say no.

We’re not talking about a complicated question here, like how to get them into Yale. We’re just talking about how to get them to eat vegetables. The reason the advice is so contradictory, however, is that questions involving nutrition are complicated—often too complicated to admit of controlled experimentation. Furthermore, the feedback loop is extended in time, making it difficult to track the consequences of different policies. And finally, kids are different, so some are likely to respond in different ways than others. Basically, given the constraints of resources and time, it’s impossible to figure out. How then do most people handle the problem? By doing whatever their parents did.

This is why most people are conservatives when it comes to raising their children (and perhaps why raising children turns many people into conservatives). Most of the big experiments of the twentieth century in “alternative” childrearing and “alternative” education were either ridiculous failures or else made little difference to the way children grew up. As a result, most people today practice descent with modification. They take the way that they were raised by their own parents, modify it slightly to eliminate the parts they most disliked or that they think were most counterproductive, and then go with that. There is considerable wisdom in this approach, simply because “reason” is unlikely to do better than tradition in this area.

Children talk to themselves a lot. If you listen carefully to what they’re saying, it’s actually quite interesting. Not so much in terms of the content, but in terms of what they are trying to accomplish. This is because a significant fraction of the talk—in some circumstances more than half—is aimed at controlling their own actions. Furthermore, the talk is not just about how best to get things done (for example, a child doing a puzzle, saying, “Does this piece fit? Maybe turn it …”). A lot of it is aimed at achieving self-control (for example, a child repeating to himself “Don’t cry, don’t cry” when he is upset). Children talk to themselves when, as psychologist Laura Berk put it, they need “to take charge of their own behavior.”8

We have already seen how the internalization of this capacity leads to the development of rational self-control. Unfortunately, like everything else about human reason, the implementation leaves much to be desired. As we all know, just making up your mind to do something doesn’t automatically lead to getting it done. Our ability to control our own behavior is far from perfect. Of course, the fact that we have any control at all is something to be thankful for. One of Jane Goodall’s fabulous observations was of a chimpanzee who, having discovered a cache of uneaten bananas, clamped his hand over his own mouth in an attempt to suppress his own cries of excitement.9 He wanted to avoid attracting the attention of others to his discovery, but unlike humans, chimpanzees don’t have the cortical brain structures required to suppress these vocalizations internally.10 So he opted for the kluge of holding his own mouth shut.

If you look at how human self-control works, a lot of what we do is not much more sophisticated. The ideal, of course, is of the individual able to exercise complete control of his actions through internal willpower—the monk or the buddha, the individual who has overcome all of his more primitive desires and impulses and is able to carry through any plan that he adopts without resistance. For this paragon of self-control, “thinking makes it so” in the practical realm. Most of us, however, rely on an enormous amount of environmental scaffolding in order to get things done. In the same way as we build designer environments that enhance the computational abilities of our biological brains, we also build environments that allow us to exercise rational control over the behavioral impulses of our biological brains.11

The “old mind,” as we have seen, lives in the present. It acts on the basis of what are called occurrent psychological states—things that you are feeling right now. One of the central functions of rational thought is that it allows us to formulate and pursue long-term goals. In order to do so, however, we must constantly inhibit and override automatic behavioral impulses arising from these occurrent psychological states.12 In other words, we must resist temptation. Yet our ability to resist temptation through brute force—simply saying “no” to ourselves—is very limited.13 Most of us rely instead upon an extensive bundle of tricks, which we use to get ourselves to do things that we ourselves know to be best.

For instance, one way of defeating temptation is to recognize that it is extremely time-sensitive. A lot of tempting things are tempting only when they are right at hand. A snack is very tempting when you only need to reach into your desk drawer to get it, not so much if you have to run out to the store. Checking your email is very tempting, but not if you have to turn on your computer and wait a couple minutes while it boots up. Because of this, we can often arrange things in advance—at a time when we are not having any self-control issues—so that certain things will be less tempting. For instance, if you exercise control over what you buy when you are in the grocery store or the liquor store, you don’t need to exercise as much self-control when you are back home. This is true even with highly addictive behaviors, such as smoking. Many pack-a-day smokers will smoke a pack of cigarettes a day regardless of whether the pack contains twenty or twenty-five cigarettes.14 (One proposed strategy for reducing the damage done by smoking is to reduce the number of cigarettes in a pack.) Part of this is no doubt psychological, but part of it must also have to do with the fact that many smokers buy their cigarettes one pack at a time, and so starting up a new pack requires a trip to the store.

If you take a look at any of the designer environments people create for themselves, you can see that they contain a huge amount of both cognitive and motivational scaffolding. With respect to motivation, the objective is to make certain kinds of activities either easier or harder at certain times.15 Many writers, for example, prefer to work on an old computer, one that is extremely slow. Because word processing is so undemanding, almost any computer will be able to carry out work-related tasks at a reasonable speed. Try surfing the internet, however, and the whole system will slow to a crawl. This delay is just long enough to eliminate the lure of instant gratification that the internet usually presents.

Naturally, if we were creatures of pure reason, none of this would be necessary. Because we are not, we rely heavily upon systems, both environmental and social, in order to keep ourselves on task.

Perhaps the most fundamental insight that reason has to offer—the one that really allows us to rise up out of the muck and slime of our evolutionary prehistory—is that we have a powerful interest in cooperating with one another. There are many circumstances in which everyone can be made better off by having everyone exercise some restraint in the pursuit of their individual interests. Cooperation, therefore, involves setting aside one’s self-interest in order to do one’s part in promoting the collective interest.

Consider, for example, the practice of lining up to escape from a burning building.16 Initially, this may seem odd. After all, if the building is actually burning down, what is the point of the queue? It seems like an excessive preoccupation with orderliness to expect people to line up under such circumstances. The reason for it, however, becomes clear when one looks at what happens when people do not line up, as often happens with fires in nightclubs. When everyone rushes the exit, the doorway can become blocked, people are trampled, and typically fewer people wind up escaping from the building alive than would be the case if an orderly queue had been maintained. And so it makes sense to accept a compromise. Rather than taking a gamble between the best outcome (pushing your way to the front) and the worst outcome (getting trampled by someone else), everyone accepts a second-best outcome (waiting patiently in line), on the grounds that it offers a better expected outcome. This is what we call cooperation. By not pushing other people, you diminish your own personal chances of getting out alive, but you also increase everyone else’s chances of getting out. When everyone does the same, the overall outcome is better for everyone, including you.
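
The expected-value logic here can be made concrete with a toy calculation. In the sketch below (written in Python purely for illustration), every probability is an invented assumption rather than a figure from any actual study of crowd behavior; the point is only to show why a strategy that is nobody’s first choice can still be everybody’s best bet.

```python
# A toy expected-value sketch of the burning-building example. Every
# probability here is an invented assumption, chosen only to illustrate
# the structure of the argument.

def expected_escape(p_out_first, p_trampled, p_out_queue):
    # Everyone rushes: you either fight your way out first, get trampled,
    # or escape late through a partially blocked doorway (a coin flip here).
    p_late = 1.0 - p_out_first - p_trampled
    rush = p_out_first * 1.0 + p_trampled * 0.0 + p_late * 0.5
    # Everyone queues: no one gets the very best outcome, but the exit
    # stays clear and most people get out.
    queue = p_out_queue
    return rush, queue

rush, queue = expected_escape(p_out_first=0.15, p_trampled=0.25, p_out_queue=0.85)
print(f"chance of escaping if everyone rushes: {rush:.2f}")   # 0.45
print(f"chance of escaping if everyone queues: {queue:.2f}")  # 0.85
```

On these assumed numbers, the gamble of rushing is worth less than the second-best certainty of queuing, which is the whole rationale for cooperating.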

People figured out a long time ago that most of “justice” and “morality” has exactly the same structure.17 There are many circumstances in which it is in our interest (narrowly defined) to assault, murder, slander, lie, fornicate, shirk, cheat, and steal. Nevertheless, we are all more or less better off if everyone more or less refrains from doing these things. While each of us gives up the advantages that can come from acting in this way, what we get in return is the reasonable assurance of not being at the receiving end of such behavior. Of all the pragmatic arguments in defense of justice, this is easily the oldest on the books, having been articulated with exemplary clarity by Plato in The Republic.18

Despite this insight, however, motivating ourselves to act cooperatively can be an extraordinarily difficult task. Even though we know that we’re all better off if everyone follows the rules, the free-rider incentive is always there, dangling in front of our noses, creating a constant temptation to defect. Enlightenment rationalism suggests that as soon as people see the superior benefits that come from cooperation, they will just naturally fall into line. A lot of anarchist schemes, in particular, are based on the assumption that if everyone can see that something is in everyone’s interest, then people will be naturally inclined to do what it takes to make it happen. In reality, getting people to do their part in a cooperative scheme can be very difficult.

And yet, given the incentives that exist to act uncooperatively, it is interesting to observe that instances of straight-up antisocial behavior are less common than one might expect. How often do you see someone just ignore a queue and push their way to the front (and not because they’re from a culture where “single pile” is the norm, but because they genuinely don’t care what other people think)? Even incarcerated criminals who have engaged in flagrant violations of the rules seldom admit to having done so out of naked self-interest. Indeed, the way self-interest undermines cooperation is typically not by overpowering it directly, but rather by biasing people’s beliefs so that they adopt self-serving justifications for their crimes.19 One famous study showed that convicted embezzlers often defended their actions on the grounds that they had merely “borrowed” the money, with every intention of paying it back; or that they had acted out of higher loyalties, to their family for instance; or that they were punishing the firm for its corruption, stealing money that was itself stolen; or that their actions produced no harm, since the amount was too small for anyone even to notice; and so on.20 These are not just rationalizations: it is by adopting such views prior to committing the act that the criminal convinces himself that he is still a decent, perhaps even good, person, and therefore that it is permissible for him (it’s usually a him) to break the rules.

The second tendency that works to undermine cooperation is the fact that people retaliate against one another. Whenever you get a reasonably sized group together in any cooperative project, there will always be a few who refuse to play along (often because of the availability of the rationalization that because the group is so large, it doesn’t matter if they defect). But then, seeing a few people defect, others get upset and say, “I’m not going to play along if she doesn’t.” If the interaction is repeated, this will tend to erode cooperation over time (contrary to the expectation of standard economic theory, which says that repeated interaction should make it easier for people to cooperate, often it has the opposite effect, by setting off self-reinforcing cascades of mutual recrimination and punishment).21
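
This unraveling dynamic is easy to see in a minimal simulation. The sketch below is not a model of any particular experiment: it simply assumes a few unconditional free riders plus a population of conditional cooperators, each with a personal tolerance for how much defection they will put up with before they too withdraw. All of the numbers are invented for illustration.

```python
# A minimal sketch of how retaliation can unravel cooperation in a repeated
# public goods setting. Every number here is an assumption chosen purely for
# illustration, not an estimate from any actual experiment.

N_PLAYERS = 30
FREE_RIDERS = 3          # a few players who never contribute, no matter what
N_ROUNDS = 8

# Each conditional cooperator keeps contributing only while the share of
# contributors in the previous round stays at or above a personal threshold
# ("I'm not going to play along if she doesn't"). Thresholds are spread out
# so that some players are far touchier than others.
n_cond = N_PLAYERS - FREE_RIDERS
thresholds = [0.60 + 0.38 * i / (n_cond - 1) for i in range(n_cond)]

rate = 1.0  # notional starting round: everyone contributes
for rnd in range(1, N_ROUNDS + 1):
    contributors = sum(1 for t in thresholds if rate >= t)
    rate = contributors / N_PLAYERS
    print(f"round {rnd}: cooperation rate = {rate:.2f}")
```

The handful of free riders pull the cooperation rate just below the touchiest cooperators’ thresholds; their withdrawal pushes it below the next group’s thresholds; and within a few rounds contributions collapse entirely, even though every player would have preferred sustained cooperation.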

The problem is that our intuitive responses are all geared toward supporting limited cooperation in the context of small-scale societies—such as were found in the environment of evolutionary adaptation. We have a set of “tribal social instincts” that make us naturally inclined to cooperate with family and friends but, beyond that, tend to serve as more of a hindrance than an aid.22 For example, the retaliatory impulse, which is particularly well developed in humans, is highly effective at discouraging free-riders in small-scale communities, where everyone knows everyone else’s name and can keep track of who is doing what. The love of gossip is often thought to serve a similar function. And yet in large-scale, anonymous interactions, these two proclivities start to have the opposite effect. As the number of cooperators increases, the chances that someone will defect, even if only by accident, increase proportionately. Under these circumstances, an overweening retaliatory impulse is simply incompatible with the requirements of ongoing cooperation. It is more likely to generate unending cycles of tit-for-tat retaliation, like a blood feud, than it is to promote cooperation. Furthermore, gossip can greatly amplify the impact of bad behavior, making it seem as though defection is more common or more problematic than it actually is. Thus forms of behavior that are quite effective at promoting cooperation in small groups can make it all but impossible to achieve the same effect in large groups.23

So what do we do? We use kluges. We trick ourselves into cooperating with one another by exploiting other cognitive biases, turning them against one another. For example, we all suffer from a powerful in-group bias when it comes to cooperation. We don’t mind sticking our necks out a bit for other people, as long as we feel that the beneficiary is “one of us.” And if helping one of us requires harming “one of them,” then so much the better.

Luckily for us, the way that we identify the other person as one of us can easily be manipulated. The in-group bias remains very powerful, even when we know that the distinction between “us” and “them” has been drawn arbitrarily. In one particularly famous experiment, Henri Tajfel and his colleagues started by showing subjects a series of paintings by Paul Klee and Wassily Kandinsky, then divided them up into two groups, depending on which painter they preferred. Subjects were then instructed to play a game, in which they could choose to act cooperatively or selfishly with any of the other players. The game was designed so that everyone would be better off if everyone acted cooperatively but each individual had a free-rider incentive to act selfishly while still receiving the benefits of the cooperative acts of others. Thus everyone had an incentive to act selfishly, but if everyone did that, everyone would wind up worse off. What Tajfel found was a strong tendency for subjects to act more cooperatively than usual toward members of their own group (e.g., the Klee-lovers) but less cooperatively than usual toward members of the other group (the Kandinsky-lovers), even though the game itself had absolutely nothing to do with art and art preferences. People took the arbitrary division of the population into groups and used it as a basis for heightened solidarity within the group, combined with heightened antagonism toward those who were not members.

This result should come as no surprise, since we have been using the same trick for thousands of years to increase the level of social solidarity within institutions. Having a common enemy makes people more likely to cooperate with one another. This disposition is sufficiently well established and manipulable that we often try to invent an enemy, or create an artificial one, in order to get people to act more cooperatively within large “impersonal” organizations. A simple strategy, used in bureaucratic institutions everywhere, is to divide people up into teams or work groups and then pit them against one another. One can see this strategy in the popular “house” system used in secondary schools, as well as “colleges” in universities. (These institutional arrangements are direct descendants of the system of curia and collegium established in ancient Rome, used to bring greater cohesion to the republic and, later, the empire.) It responds to a fairly simple organizational challenge. Primary schools in North America and Europe are typically rather small, so that students within a cohort all know each other by name. This means that basic “tribal” instincts are enough to maintain a reasonable level of social cohesion. Secondary schools, on the other hand, are often several times larger, easily exceeding the size of the largest ancestral village. As a result, informal mechanisms of social control begin to fail. Interactions become a lot more anonymous—for example, teachers will often not know the names of most students in the hall. This creates the potential for both alienation (students may have difficulty “fitting in”) and antisocial behavior (vandalism, theft, etc.).

The most popular solution to all this has been permanently etched into popular consciousness by the Harry Potter books (“Ten points for Gryffindor!”). You take the students and arbitrarily divide them up into groups and give them a complete set of tribal markers, including a catchy name, a special color, a homeroom, and maybe even a crest. You then stage a set of completely artificial competitions between the groups (for example, instituting a weird “point” system that doesn’t seem to generate any tangible reward). This promotes a huge amount of antisocial behavior between the groups but greatly enhances social solidarity within each group. The trick then is to set things up so that the uncooperative behavior is confined to largely symbolic activities that have no real-world significance (for example, quidditch matches) while the cooperative, pro-social behavior that occurs within the groups helps individuals to achieve genuinely significant goals (for example, studying and understanding course material, or defeating the dark lord).

This is a classic kluge. We can’t fix the underlying problem, which is that individuals in large groups become alienated and so start to act less cooperatively (they become less likely to volunteer for jobs that need to be done, they are more likely to steal or vandalize public resources, they form fewer close ties with fellow group members, and so on). If people were perfectly rational, their willingness to cooperate would be determined entirely by the benefits of cooperation; the number of other people in the cooperative scheme would not matter. Motivationally, however, this is difficult, and so reason must resort to subterfuge in order to get its way. We therefore promote cooperation by creating artificial groups and then using the “red versus blue” mentality this creates to promote greater solidarity. In many cases we don’t even do this intentionally—the practices have simply arisen through a process of cultural evolution. As Peter Richerson and Robert Boyd put it, “social innovations that make larger-scale society possible, but at the same time effectively simulate life in a tribal-scale society, will tend to spread.”24 The division of the population into small “parishes,” for example, was essential to the stability of European societies before the rise of the modern state.25

This phenomenon is also well known to those who study the military—which faces a particularly acute motivational challenge, since it must create a social environment in which soldiers are willing to follow orders that will foreseeably result in their own death. This is why the platoon, or squad, is the focus of the most intense social bonding. As the sociologist Edward Shils observed, soldiers care surprisingly little about “the total symbols of the military organization as a whole, of the state, or of the political cause in the name of which the war is fought.” What they care about is their immediate comrades. “The soldier’s motivation to fight is not derived from his perceiving and striving toward any strategic or political goals; it is a function of his need to protect his primary group and to conform to its expectations. The military machine thus obtains its inner cohesion … through a system of overlapping primary groups.”26

Thus the military strives to create a real community—a “primary group”—that can serve as an object of intense loyalty and identification. On top of that, it adds several layers of what Benedict Anderson called “imagined communities.”27 So the organization of soldiers into small groups is accompanied by a hierarchy of larger units, such as divisions, up to the different services—army, navy, air force—that constitute the military as a whole. The objective is to promote interservice rivalry in largely symbolic areas, as a way of generating intraservice cohesion in areas where cooperation is most important, such as on the battlefield.

An even more dramatic example can be found in the division of the entire world’s population into separate nations (again, with tribal markers: a flag, a national anthem, military parades, etc.). This creates the illusion that we are all members of a club, or members of the same tribe, even when we’re not. No one has ever succeeded in constructing a modern nation-state without cultivating precisely this sort of collective illusion among its people. And it has not been for want of trying. Rationalist political movements have always been contemptuous of nationalism, precisely because it motivates people through an appeal to irrational biases. Communists, in particular, believed that the international solidarity of working people would overcome the forces of “bourgeois nationalism”—hence the long-standing persistence of the “socialist international” (a sentiment immortalized in the lyrics of “The Internationale”: “Reason thunders in its volcano / This is the eruption of the end / Of the past let us make a clean slate”). And yet, when push came to shove, not one communist nation was able to forgo the collective benefits of nationalism. This became particularly evident at times when the need for collective action was greatest. (This is why the Second World War is still known, in the former Soviet Union, as the “Great Patriotic War,” and why, in 1944, “The Internationale” was replaced as the Soviet national anthem by an explicitly patriotic one.) Liberal democratic societies are no different; they are all highly nationalistic. This is not surprising, since democracy also makes significant demands on citizens, in terms of the level of cooperativeness it requires (voting, accepting defeat when your party doesn’t win, paying taxes, refraining from political violence, etc.). So there are, unsurprisingly, a lot of tricks underlying the practice of democracy, tricks designed to get people to behave themselves.

The Nazi philosopher Carl Schmitt exposed the dirty secret of nationalism in 1927, when he argued that the central function of the modern state was to divide the world into “friend” and “enemy.”28 Warfare was central to the mission of the state, he argued, because it constituted the mechanism through which this distinction was preserved. The risk, of course, with this sort of thinking is that things may get out of hand, and that the negative consequences of intergroup rivalry will start to undermine the positive benefits of intragroup solidarity. The First World War provided a set of instructive lessons about how nationalism can generate war even when there is nothing in particular to fight about. The Second World War provided a more dramatic lesson, showing how destructive war could become when engaged in by modern nation-states able to mobilize their entire populations. Most people now agree that it is better to see these energies channeled into competition in the Olympic Games or the World Cup of soccer.

Unfortunately, because of the success of these kluges, we sometimes tend to overestimate our own abilities. When it comes to large-scale cooperation, we humans have clearly exceeded our programming. We have become what biologists call an ultrasocial species, despite having a set of social instincts that are essentially tailored for managing life in a small-scale tribal society. It’s crucial to recognize, however, that we have not accomplished this by reprogramming ourselves or overcoming our innate design limitations. We have accomplished this in large measure by tricking ourselves into feeling as though we are still living in small-scale tribal societies, even when we are not. Unfortunately, the trick works so well that we sometimes forget that we’re using it, and so imagine that we can create large-scale systems of cooperation based on nothing more than our rational insight into the need for such institutions. This invariably leads to disappointment.

Consider, for instance, the problem of global warming. This is a very straightforward collective action problem. If everyone continues to burn fossil fuel, then the increase in global temperature will produce outcomes that are much worse for everyone. And so we all have an incentive to limit emissions. Yet the incentive to cheat on any such agreement is enormous. Hence the need for cooperation on a global scale. Unfortunately, there are almost no instances in recorded history of humanity as a whole agreeing to cooperate to solve a problem and then carrying through on that intention. The only mechanism that we have to solve big problems like this is the nation-state, but one of the major devices that states use to motivate their citizens to cooperate is rivalry with other nations. This makes genuine global cooperation very difficult to achieve (particularly when the issue is one where the public at large will be noticeably affected, and so the cooperative scheme cannot be implemented through elite consensus alone). There are, simply put, no assurances that we are capable of cooperating with one another to resolve problems of this scale. Instead, the free-rider incentive will bias cognition, leading large segments of the population simply to deny that there is a problem. And individuals will get locked into retaliation, refusing to do anything until others have made amends or done their fair share. All of these forces conspire with one another in such a way as to guarantee that nothing will be done to correct the problem.

It is a standard trope of science fiction that human history will continue on through a series of bloody and destructive wars until first contact with an alien species is made. It is then, and only then, that planetary unity will be achieved, a world government will be formed, and humanity will take its place in the stars among the “advanced” races of an interplanetary civilization. There is an insight here that is so commonplace that its significance is in danger of being overlooked. Human beings are able to work together best when they have an enemy to fight against. Until we have a common enemy, or at least an “other,” we cannot all be friends. The major advances in human civilization over the past fifty years have probably come from our ability to domesticate this impulse. We have learned to create symbolic rivalries, so that we can get the benefits of enhanced solidarity while stopping short—for the most part—of actually killing one another. But we should not kid ourselves about how much we have achieved. Our civilization is built on a kluge, one that works well enough for the moment but that might easily fail us someday.29

Perhaps the most disconcerting finding of twentieth-century social science was that most of what we like to think of as “morality” is actually not in our heads, but depends upon environmental scaffolding as well. This comes as a surprise to many people, particularly those who are inclined to think of “conscience” or some other type of “inner voice” as the wellspring of morality. In fact, when it comes to acting morally, we rely to an inordinate extent upon social cues—in particular, the behavior, expectations, and sanctions of others—in order to decide what to do. This is easy to prove; all you have to do is put people in a situation that generates the wrong cues, then wait and see what they do. What was discovered, in a series of now classic social psychology experiments, is that the average person is capable of perpetrating great evil under such circumstances.

Historically, there was a tendency to think that criminals and sinners were somehow “degenerate,” that there was some sort of physiological or psychological difference between them and the average person. With the development of the social sciences in the late nineteenth century, in particular with the rise of psychology, some doubts about this hypothesis began to arise. Many criminals are moderately more impulsive than the average person, but beyond that there is very little to distinguish the psychological profile of a typical prisoner from a member of the general population (of comparable age, gender, and social status).30 Thus early criminologists found themselves having great difficulty pinning down any one trait or combination of traits that could plausibly be thought to be responsible for criminal conduct.

The real crisis of confidence, though, arose as the scale of Nazi crimes during the Second World War became widely known. What many researchers found so extraordinary was the level of complicity of large segments of the population—including, but not limited to, members of the military—in policies that one would think anyone in their right mind could immediately see to be evil. If one looks at soldiers assigned to work in death camps, for instance, it is surprising to learn that very few suffered any disciplinary action for refusing to perform their duties.31 It is even more surprising to discover how few such refusals needed to be dealt with. The Nazis encountered a very large number of organizational challenges, particularly in the later years of the war, but apparently convincing large numbers of soldiers to spend their entire day systematically murdering defenseless civilians was not one of them.

The circumstances in this case were admittedly extreme. But as the full implications of what had happened during the war began to sink in, psychologists started to wonder whether a similar phenomenon could not be reproduced on a smaller scale. This is what motivated Stanley Milgram’s famous experiments on “obedience to authority.”32 Milgram tricked experimental subjects into thinking that they were being asked to deliver an increasingly powerful series of electric shocks to another subject. No threats were involved. When subjects expressed reservations about the experiment—which increasingly they did, as the “victim” screamed louder and louder and began to plead for mercy—they were simply told that “the experiment requires that you continue.” Although initially skeptical about people’s willingness to comply under these conditions, Milgram discovered, to his surprise, that roughly two-thirds of subjects were willing to administer the shocks up into what they believed was the lethal range.

Milgram’s experiments were focused on authority relations and the willingness of people to obey orders. But an almost equally famous experiment, carried out by Philip Zimbardo in 1971, the “Stanford Prison Experiment,” examined the way that roles and role expectations determine people’s behavior.33 Zimbardo took a group of students and divided them arbitrarily into a group of “prisoners” and “guards,” then set them up with appropriate props and cells to create a mock prison. The students, however, took to their roles so completely that within two days a riot had broken out, guards were imposing sadistic punishments, and several “prisoners” had to be removed from the experiment as a result of emotional trauma. By day six, both sanitary and moral conditions had degenerated to the point where the entire experiment had to be called off. (Lest it be thought that experimental subjects were merely play-acting, it should be noted that neither Milgram’s nor Zimbardo’s experiment can be reproduced in a modern setting, because—ironically—modern research ethics protocols would not permit them. This is because many of the participants became extremely distraught after the experiment had ended, thinking back about the way that they had behaved. They were, in effect, traumatized by the discovery that they could behave so immorally, with so little prompting.)

Less dramatic experiments have consistently confirmed the finding that people rely, to an inordinate degree, on their social surroundings as a way of patterning their behavior, so it takes only modest encouragement to get them to behave immorally. Student cheating is one area that has been particularly well studied, simply because psychologists have such easy access to large populations of undergraduate students.34 What researchers have found is that large majorities of students can be induced to cheat, or to refrain from cheating, through very small environmental adjustments. If the professor seems unconcerned about cheating, students will be more likely to cheat; if students think that other students are cheating, then they will be more likely to cheat; and so on. What the students are responding to in these cases is not just opportunity, but also perceived social signaling. While personality differences have been studied extensively and have been shown to have some impact on the decision to cheat, situational factors—most importantly, perceptions of “peer behavior”—have been shown to be far more important.35

Although the great thinkers of the Enlightenment disagreed profoundly about the nature of morality—some thought that it was a product of reason, others that it was based on emotion—they all agreed that it was to be found somewhere inside the head of the individual. Thus they thought that the violence in human history was a product of ignorance, prejudice, and tradition. Get rid of tradition, they thought, let people decide for themselves how to act, and all will be well. What they discovered, to their dismay, is that tearing down social institutions, or even changing them too quickly, can create a sort of moral disorientation, wherein people really do lose their sense of what is right and wrong. This usually ends badly. Sometimes society dissolves into a state of alienation, selfishness, and criminality. More often, people gravitate toward charismatic authority—someone who promises a new set of rules, better than the ones that came before. Whether these rules are in fact better is a crapshoot. When they are not better (think Robespierre, Stalin, Mao), the consequences can be catastrophic, because the mechanism that adjusts individual behavior to social expectations can create systems of highly organized immorality (such as the bureaucratized mass killing that became such a characteristic feature of the twentieth century).

This is why, despite what Christopher Hitchens and others have claimed,36 if you had to choose between reason and faith based purely on body count, it’s not obvious that reason would come out ahead. There is no question that the most murderous regimes in the twentieth century were either explicitly atheistic (the Soviet Union under Stalin, China under Mao) or difficult to classify (Germany under Hitler). This is a bit of an unfair comparison, though, simply because people with completely unscientific beliefs tend not to be very good at building weapons—precisely for that reason—and so aren’t able to kill each other quite as effectively. Christians and Muslims spent several centuries doing all they could to annihilate one another; the only thing that kept them from succeeding was the fact that they had to do it manually, one person at a time.

What we have learned, however, is that when you release people from the yoke of tradition, they don’t automatically gravitate toward greater freedom and equality. It’s perhaps not so surprising to discover that people use a variety of environmental kluges in order to motivate themselves to act morally. The shocking discovery is that we all rely quite heavily upon our environment in order to judge moral questions as well. Even when we have the rational insight that we should act more cooperatively, our willingness to do so depends very heavily upon our expectations about what others will do. If everyone else is taking bribes, then we assume that it is “no big deal” to take a bribe and, furthermore, that it is pointless as an individual to refrain. And if everyone else is torturing prisoners and killing civilians, then we tend also to assume that it is “no big deal” to torture prisoners and kill civilians.

Thus morality is best thought of not as something that lies within our hearts or our heads, but as a complex cultural artifact, one that gets reproduced and modified over time, and that “lives” primarily in the interactions between individuals. Strip that away and people really can become quite unhinged.

Samuel Johnson once observed of a dog walking on its hind legs that even if it is not done well, “you are surprised to find it done at all.” Similarly, the fact that a creature such as ourselves, capable of rational thought, should have evolved through natural selection is a remarkable thing. (After all, if the adaptive advantages were obvious, or if the pathway were direct, then we would expect evolution to have produced dozens of different species with our type of intelligence. Being able to see has obvious advantages, which is why the eye is thought to have evolved independently at least ten different times. The advantages of being able to reason must be considerably less obvious in order for it to have evolved only once.) The fact that we can reason at all makes us rather extraordinary; that we are able to do it well would be expecting far too much.

Thus it is unsurprising to find that our capacity for rational reflection and for rational control of our behavior is underpowered. The “new mind” is cobbled together out of bits of the old.37 This is why reason has no hope of ever being able to “go it alone”; it simply does not have the computational power or efficiency. The central weaknesses of reason are easily enumerated: it is slow, requires a lot of effort, and suffers from limited attention, a working memory bottleneck, and unreliable long-term memory.38 Unfortunately, we are easily lulled into thinking that reason is much more powerful than it is because we ignore all the environmental scaffolding and kluges that we have constructed over time in order to assist it in its operations. Indeed, if pressed to identify the one most powerful feature of human interaction with the environment, people often mention our ability to use tools, such as hammers and levers. The more powerful phenomenon, which typically goes unnoticed, is the way that we employ elements of our environment to augment our own cognitive powers. Just as the blind man begins to feel the end of his cane as an extension of his fingertip, so we lose track of where our own minds stop and the environment begins. Trivially, we think that we can do mathematics, while forgetting that we are unable to do so without pencil and paper. More significantly, we forget the contribution that thousands of years of human culture and civilization have made to our ability to accomplish almost anything. In particular, we assume that we are able to engage in productive, peaceful cooperation with one another while we ignore how terrible the human track record is in this regard and how much we currently depend upon the institutional arrangements that have been painstakingly built up over time. Even our ability to avoid patently self-destructive behavior is heavily dependent upon the environment that we find ourselves in.

There have always been people of conservative temperament, but conservatism as a political philosophy was born as a reaction against Enlightenment rationalism. In the beginning, it was clearly a defense of tradition against reason. This was the correct insight at the heart of Burke’s critique of the French Revolution. We have a lot of social arrangements that seem quite arbitrary. Some of these are meaningless relics of the past. Some of them, however, are essential kluges, without which we would be unable to sustain our achieved level of civilization. (“Never tear down a fence,” conservatives like to say, “until you know why it was built.”) Utopians and rationalists going all the way back to Plato have found institutions like the family, for example, to be a constant source of irritation. The family seems so arbitrary, so inefficient, and often serves as a more powerful object of loyalty than the state. And yet all attempts to “abolish” the family and create some system of collective childrearing have been a disaster. More recently, attempts to abolish nationalism have been equally unsuccessful, even when existing national borders are known to have been drawn in a completely arbitrary way (consider the fate of “pan-Arabism” in the Middle East).

Despite this kernel of truth, however, it is easy to get carried away by the conservative critique. In some respects, tradition may be the accumulation of generations of wisdom, but in other respects it may simply be the accumulation of generations of prejudice.39 One need only consider traditional attitudes toward women. Furthermore, there is a big difference between drawing attention to the limitations of human reason and glorifying its opposite. Yet what has happened to conservatism in recent years, particularly in its American variants, is that it has become a defense not of tradition against reason, but rather of intuition against reason. The origins of this transformation are somewhat complicated, but its consequences are clear. While there is much that is sound in our intuitions, there is also much that is faulty. Reason may not be as powerful or as capacious as the first generation of Enlightenment thinkers supposed, yet there remain many things that only reason can do. Understanding clearly what these are is the only way of advancing the progressive agenda that found its first, flawed expression in the French Revolution.