The most common of all follies is to believe passionately in the palpably not true. It is the chief occupation of humankind.
—H. L. MENCKEN
HERE IS AN INTERESTING EXPERIMENT TO CONSIDER: suppose you starve a pigeon down to about 75 percent of its initial weight. Then you put it into a cage and release some food at regular intervals, making sure there is no connection between the pigeon's behavior and the delivery of food. A rather strange thing happens: the pigeon becomes superstitious. B. F. Skinner famously carried out just such experiments, and he found that
one bird was conditioned to turn counter-clockwise about the cage, making two or three turns between reinforcements. Another repeatedly thrust its head into one of the upper corners of the cage. A third developed a “tossing” response, as if placing its head beneath an invisible bar and lifting it repeatedly. Two birds developed a pendulum motion of the head and body, in which the head was extended forward and swung from right to left with a sharp movement followed by a somewhat slower return. The body generally followed the movement and a few steps might be taken when it was extensive. Another bird was conditioned to make incomplete pecking or brushing movements directed toward but not touching the floor.
What was going on with Skinner’s pigeons? They were trying to repeat whatever behavior they were engaged in right before they got the food, presumably in the (unconscious) hope that repeating that behavior would again make the food appear.
Philosophically speaking, the pigeons were committing a classic logical fallacy (again, unconsciously) known as post hoc ergo propter hoc, fancy Latin for “after this, therefore because of this.” If you think it’s funny when pigeons do it, just consider that this is arguably the most common reasoning mistake made by human beings. Anyone who wears his “lucky” shirt—or socks, or cap, or whatever—before an important event (an exam, a job interview, a sports event), on the grounds that at one time he wore it and the event had a favorable outcome, is behaving like Skinner’s pigeons. If anything, the main difference between human beings and other animals when it comes to superstition is that other species abandon the silly behavior almost immediately once they figure out that it doesn’t actually work. Humans, on the other hand, find endlessly fascinating ways to rationalize away the repeated failures of their behaviors, while constantly emphasizing the one or few times when they actually “worked.”
Superstition—as we discussed in the last chapter—is ingrained in the human brain and is at the root of religious belief. With 4,200 or so cataloged religions, this is a phenomenon that concerns all human beings, religious or not, and one that we need to understand in order to intelligently navigate our way through the maze of human existence. To put it as our old acquaintance David Hume famously did (in The Natural History of Religion): “As every enquiry which regards religion is of the utmost importance, there are two questions in particular which challenge our attention, to wit, that concerning its foundation in reason, and that concerning its origin in human nature.”
At the cost of disappointing some readers, I will ignore the first of Hume’s questions, that of the rational foundations of religious belief. In my opinion, there are none, and that case has been made very eloquently in plenty of other books. The interested reader can easily find superb accounts of the standard arguments for and against the existence of gods, and several authors have restated and updated the case against gods more recently; thus, I feel under no compulsion to waste space and your time repeating it here. We are left with the second question raised by Hume: what are the origins of religious belief? This is a fascinating, though actually quite complex, question, and we need to be a bit more precise before proceeding.
First of all, let us make a distinction between superstitious beliefs in unseen causal entities (gods, ghosts, spirits, and so on) and religion as an organized set of beliefs accompanied and propagated by certain social structures. Clearly, the latter wouldn't exist without the former, but that does not mean that understanding the origin of superstition is the same as understanding the origin of religion. Just think of the obvious fact that, as we have seen, other animals display superstitious behavior, but as far as we know human beings are the only animals with religion. To make things yet more complicated, even understanding the origin of a phenomenon may tell us little about its maintenance, that is, about why it is widespread. For instance, the popular social networking site Facebook started out as a way for Harvard students to keep in touch and network, but it is now maintained because, rather unexpectedly perhaps, hundreds of millions of people derive pleasure from connecting at one level or another, not just with friends and family but with a large number of perfect strangers.
Second, a phenomenon may have more than one cause, and these causes may work at several different levels, affording us multiple layers of understanding. This was famously pointed out by Aristotle, who distinguished four types of causes that we can illustrate with the following example. If you walk around Union Square in Manhattan, at the corner of Sixteenth Street you will see a statue of a distinguished gentleman in eighteenth-century clothes. If you were to ask how the statue happens to be there—what is the cause of the presence of that statue in the middle of Manhattan—according to Aristotle you should look for four different, and complementary, answers. First off, the statue is made of bronze, which answers the question of what sort of material the thing is made of (what Aristotle called the material cause). Second, you may ask what the statue represents (Aristotle’s formal cause): it turns out to be a rendition of the French Marquis de Lafayette. Third, you may also naturally want to know who made the statue (Aristotle’s efficient cause), the answer being the sculptor Frédéric-Auguste Bartholdi, the same guy who designed the Statue of Liberty. Lastly, you would probably ask why the statue is there to begin with (Aristotle’s final cause), and the reason is that the French government wished to thank the city of New York for its help to Paris during the Franco-Prussian War. (The French government picked Lafayette to be represented because he was a hero of the American Revolution.)
As you can see, even an apparently simple question like “How did that statue happen to be there?” admits of several levels of causal analysis. The important thing to notice is that these levels are not mutually exclusive: the four causes identified by Aristotle all contribute to the explanation for the existence of the statue, and do so in different ways. It’s not that, say, the final cause (Why is the statue there?) somehow trumps or is more fundamental than the material cause (What is the statue made of?). Keep this in mind for the rest of this chapter, since confusion about different types of explanations abounds in discussions of superstition and religion.
When it comes to explaining the conjunction of superstition and religion, we are presented with three possibilities: cognitive science explanations, which, as we saw in the preceding chapter, tell us about how the human brain engages in superstitious thinking; and biological and cultural explanations, which aim to interpret religion as the result, respectively, of evolutionary and societal forces. Armed with our knowledge of Aristotle’s awareness of multiple levels of causality, we can immediately recognize that these three explanations are not mutually exclusive accounts of religion, but rather different and reciprocally reinforcing levels of explanation. More explicitly, religion is made possible by the neurobiological characteristics of the human brain that make us prone to superstitious thinking; this is the answer to the question of how religion happens (and the subject of Aristotle’s first three causes). But we equally want to know why religion happens (Aristotle’s final cause). What sort of biological and cultural forces made it so that human beings not only share other animals’ propensity for superstition but have pushed the notion so far as to make it into a fundamental and near-universal aspect of their lives?
Perhaps all this talk about causes and explanations sounds a bit weird when applied to religion. Indeed, it is the result of a relatively recent trend in science. For a long time, scientists stayed as far away from religion as possible, perhaps because of the rather turbulent history of the relationship between the two (think Galileo, or even worse, Giordano Bruno), or perhaps because the topic was considered to be, well, a bit too sacred for an intrinsically secular enterprise like science. Yet a simple analogy will suffice to show that there is nothing strange or awkward about attempting a science-based understanding of religion. Consider language, another universal attribute of the human species, which is also, as far as we know, unique to Homo sapiens. If we wish to understand language, we can use the same three types of explanation we are about to discuss in the case of religion: cognitive science, biology, and culture.
There is no question that language is made possible by certain structures of the human brain. Plenty of fascinating literature in neurobiology, for instance, documents which areas of the brain are devoted to the workings of language and what happens if they are somehow injured by disease or accident. Broca's area is the region of the brain that appears to control the proper use of syntax and how we combine words into sentences, while Wernicke's area helps us analyze and understand other people's sentences. Accordingly, damage to Broca's area results in ungrammatical but sensible speech, whereas damage to Wernicke's area produces sentences that are grammatically correct but meaningless. There is even a third area involved, the Sylvian fissure, which physically separates the typically human regions involved in language from the neurological structures that are present in other animals.
Notice, however, that while neurobiology gives us a satisfying explanation of how it is possible for us to have language, it provides no clue to two other important aspects of language: why we acquired the ability to begin with, and why there have been so many different languages in human societies. (About 7,000 recognized languages are spoken today in the world, and just as with religion, enormously varying numbers of people use them—from Mandarin, which is spoken by 845 million, to Ter Sami of the Kola Peninsula, allegedly spoken by just two individuals. English, in case you were wondering, comes in third, at 328 million speakers.)
That is where biological and cultural factors come in. Let us start with the cultural factors, since they are easier to understand as far as language is concerned. Reasonably abundant historical records enable us to trace the evolution of human languages over the past several thousand years. For instance, Danish, Icelandic, Norwegian, and Swedish all originated from Old Norse, and we can match the evolution of these languages with the migration patterns of the corresponding populations. We can also trace the evolution of individual words, as documented for English in every edition of the Oxford English Dictionary. The word dictionary, for instance, traces back to the early sixteenth century, and its root is the medieval Latin term dictionarium, which means a “manual of words”; that term, in turn, traces back to the Latin dictio, which means “word.” Indeed, languages are still evolving: for instance, the word meme (which we will encounter again later) refers to a unit of cultural inheritance analogous to a gene (the unit of biological inheritance) and was coined in 1976 by Richard Dawkins, who used it in his bestselling book The Selfish Gene.
So now we have a good picture of both how language is possible and how it changes over time. But what explains why we have language to begin with? The answer cannot simply be "to communicate," because plenty of other species communicate without using language (which is defined not just as the ability to transmit meaningful sounds or signs but as having a grammar). Nor can it be something along the lines of "how else could we carry out complex tasks, like sending humans to the moon?" (or building bridges, or using Facebook, or any other culturally complex activity). For most of human history we were not engaged in space travel (or bridge building, or virtual social networking), and so biological processes such as natural selection could not possibly have led to language for those reasons. Why did it happen then?
We don’t really know. Several hypotheses have been proposed, of course, such as the idea that the ability for language evolved in order to better coordinate the hunting of large animals (though the paleontological evidence seems to show that humans rarely did that) or to facilitate social communication (but again, it is not clear why early humans needed such a complex tool to handle the social necessities of groups that were probably smaller than 150 people). It certainly wasn’t because we would eventually get Shakespeare and the blogosphere. What we do know is that language evolved relatively late in the human lineage, surely after we evolved an erect posture. This is pretty clear from the fact that early hominids (such as the famous Lucy, a member of Australopithecus afarensis) were already bipedal and yet retained a small brain that probably lacked the anatomical areas associated with language. Indeed, it took us a long time to get there: the genus Homo (to which our species belongs) began its evolution something like 2.5 million years ago, but our best estimates tell us that language was developed only during the last 30,000 to 100,000 years.
As we saw in Chapter 4, it is very difficult to test hypotheses about the evolution of human behavior, particularly behaviors that are uniquely or almost uniquely human, because there are too few comparisons that can be made with other species and the fossil record is usually not very helpful. In fact, although the fossil record tells us a pretty good story about the physical characteristics connected with walking erect first and developing language later (in terms of the development of not only human brain size but also the voice box, the anatomical feature that allows us to articulate sounds in order to speak), it has little or nothing to say about the evolutionary processes that led to the behavioral characteristic of being able to talk. As we shall see in a minute, this limitation on what we can reasonably infer about evolution applies also to the study of the evolution of religion, and for similar reasons.
Much has been written of late about the biological and cultural evolution of religion, and I certainly cannot do justice to even the popular, let alone the technical, literature on the topic here. Nonetheless, the first thing to note is that talking about "biological" evolution in opposition to culture is a mistake, for two reasons. One is that cultures themselves evolve, though in a different fashion from the noncultural attributes of organisms. Moreover, cultures are just as "biological" as anything else. Humans are not the only animals to have culture, if by that we mean certain types of social behavior that are not directly traceable to an animal's genetic makeup. The distinction, therefore, should be between genetic evolution and cultural evolution, both of which are biological in nature, just like anything else done by (biological) organisms.
Let’s talk first about genetic evolution. The idea here is that the modern version of the Darwinian theory of evolution by natural selection and other means can provide insights into why we engage in certain behaviors as human beings. As we saw in Chapter 4, I think this is essentially correct: biology can inform some debates in both philosophy (in that case, ethical philosophy) and psychology. Indeed, ever since the demise of Freudianism as an overarching theory of psychology, that discipline has been left with tons of fascinating empirical data about human behavior but not much of an underlying theory to make sense of it. What I do not believe—as we shall see in a few paragraphs—is that evolutionary biology by itself can provide such a theory.
Whenever biologists think about the genetic evolution of any trait, behavior included, they are faced with three broad categories of explanations: the trait in question evolved by natural selection because it is adaptive (it increases the organism’s fitness), or it evolved by chance processes (technically known as “random drift”), or it came to be as a by-product of other characteristics, which are in themselves adaptive. Consider some examples, such as the fact that we have a heart to pump blood; this is most certainly the result of natural selection. The heart is a complex organ with a specific and vital function, so there is no way that it could have evolved by chance, nor is it likely that it arose as a by-product of some other characteristic. Genetic drift, on the other hand, is probably responsible for the type of variation that simply does not matter to survival: for instance, the number of hairs on men’s chests. It may be important, in certain climates, to be particularly hairy, but the precise number of hairs probably makes no difference, and so the trait evolves by random sorting of whatever genes cause different degrees of hairiness. What about by-products? A good example is represented again by the heart, this time not by its main function but by the fact that it makes a noise when it pumps. The noise is not necessary to survival and was therefore not selected for by natural selection. It is there because pumps make noise and there is no way to avoid that; it is therefore a by-product of the evolution of something else (the ability to pump blood).
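The role of random drift in this taxonomy can be illustrated with a minimal simulation. The sketch below is purely illustrative—the population size, starting frequency, and number of generations are arbitrary assumptions, not biological estimates—but it shows how a selectively neutral variant (say, a gene nudging chest-hair number up or down) wanders in frequency by sampling accident alone until it is lost or fixed purely by chance:

```python
import random

def drift_generation(freq, pop_size):
    """One generation of neutral (Wright-Fisher-style) drift:
    each of pop_size offspring inherits the variant with
    probability equal to its current frequency."""
    carriers = sum(random.random() < freq for _ in range(pop_size))
    return carriers / pop_size

random.seed(1)
freq = 0.5  # starting frequency of a selectively neutral variant
for gen in range(500):
    freq = drift_generation(freq, pop_size=100)
    if freq in (0.0, 1.0):  # variant lost or fixed by chance alone
        break

print(f"after {gen + 1} generations, frequency = {freq}")
```

Run it with different seeds and the variant sometimes fixes, sometimes vanishes: no selection is acting, yet the population still ends up uniform one way or the other, which is exactly the kind of outcome drift produces for traits that "simply do not matter to survival."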
What about religion? We can safely exclude drift as a possible cause. The notion is well established in biology that drift does not lead to the evolution of complex and “expensive” (in terms of energy) structures, and religion unquestionably fits that bill. It must be either selection or by-product. In terms of selection, several authors have proposed that religion might have been favored because it fosters prosocial behavior. The idea is that groups of individuals that are more cohesive, whose members are more willing to help and sacrifice for other members of the group, outcompete groups whose members are less inclined toward cohesive behavior. In a sense, religion is a type of social glue that makes people more collaborative by way of a combination of threats and rewards (such as eternal damnation or a permanently blissful afterlife). It is very hard to evaluate this sort of hypothesis empirically. For instance, the evidence that religious communes in nineteenth-century America persisted longer than secular ones would seem to support it. Then again, once researchers statistically controlled for different degrees of costly requirements for membership in a commune, the effect disappeared—suggesting that it is having to pay a high price to belong to a group that makes people stick with it, not religiosity per se. Moreover, the “prosocial behavior” hypothesis relies on a form of group selection (natural selection favors groups with certain characteristics, not individuals), and there are good technical reasons in evolutionary theory for being skeptical that group selection actually happens very frequently in nature. Finally, it should be pointed out that other primates also exhibit prosocial behaviors, which are enforced through a set of rewards and punishments meted out by other members of the group, presumably without the necessity to develop religions.
According to another possible scenario, natural selection could have favored the evolution of religion through the standard mechanism of selection to augment individual (as opposed to group) fitness. As we saw in the last chapter, engaging in superstitious behavior tends to alleviate anxiety and stress because it makes one feel somewhat in control (however illusory such control may be) of whatever is causing the anxiety or stress. As we also discussed, arguably the most stressful thought we have throughout our lives is that of our own demise. We referred to it as the “tragedy of cognition”: once an animal is conscious of its existence as independent from the rest of the universe, it also becomes conscious of the fact that such existence is limited—and few people can contemplate permanent personal annihilation with the equanimity of Epicurus, who famously said, “Death does not concern us, because as long as we exist, death is not here. And when it does come, we no longer exist.” That religion evolved to soothe our brain when it contemplates its own annihilation is therefore an appealing possibility, backed up by some degree of neurobiological evidence. But there are several objections that can be raised against it, the most obvious of which is that perhaps the evolution of superstition can be explained that way, but religion is a much more complex social—not just individual—phenomenon; while rooted in superstition (in the sense that religion would not be possible without it), religion goes well beyond it. Natural selection is a rather minimalist sort of process: it produces results that are good enough for what needs to be done, so it is hard to imagine that the highly complex social phenomenon of religion would evolve if the simple individual phenomenon of superstition could suffice.
Of course, there are many other possible scenarios linking natural selection and religion as an adaptive trait, but we’ve got enough of a flavor of that possibility to turn now to its most convincing potential rival explanation: the idea that religion did evolve, but as a by-product of something else. Of what, exactly? There are two characteristics of the human mind that are very good candidates to make us prone to superstitious belief and may have started Homo sapiens down the road that led to the cultural phenomenon of religion. The first such characteristic, our ability to pick up nonrandom patterns in the world around us, is shared by a large number of animal species. That ability is obviously advantageous for the survival of an animal. It is necessary, for instance, to figure out the succession of seasons to know what to hunt or gather at different times of the year (and later in human evolution, when to sow seeds and harvest crops). It is clearly good to be able to tell which places are more likely to provide food or water and where potential predators may lurk.
However, there is a major problem with pattern seeking, one that we saw in the last chapter when we were talking about how high levels of dopamine increase our tendency to see patterns where there are none: inevitably an animal will make a mistake and confuse random noise for meaningful information. Natural selection hasn’t eliminated this type of error (which in scientific parlance is called a “false positive”) because it probably isn’t very costly, especially when compared to its opposite—ignoring a signal thinking that it is noise (called a “false negative”). The typical example brought up in the context of this discussion is that if you hear a rattling of leaves nearby, you can judge it to be the result of either wind (random signal) or a predator lurking nearby (meaningful pattern). If you go for predator and it is in fact the wind, not much harm is done, other than getting a scare and wasting a bit of energy engaged in an escape maneuver. But if you think it is the wind when in fact there’s a predator ready to attack, that judgment could literally be the last mistake you ever make in your life. (Incidentally, one naturally wonders why there has to be a trade-off between false positives and false negatives: isn’t there a way to minimize the likelihood of both? As it turns out, there is, but it requires gathering additional information, which in itself is a costly and potentially risky enterprise.) We have already seen that pattern-seeking behavior generates superstition-like conditions in other animals, and several studies have demonstrated how widespread it is in our own species as well. You can think of this first pillar of superstition, then, as the result of an imperfect mechanism for attributing causality: sometimes we imagine specific causes for things that are actually the result of random or not particularly meaningful processes.
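The logic of this trade-off can be made concrete with a back-of-the-envelope calculation. In the sketch below every number is made up—the predator frequency and the two costs are illustrative assumptions, not measurements—but it shows why selection would tolerate frequent false positives when false negatives are catastrophic:

```python
# Illustrative error-management sketch: every number here is invented,
# chosen only to display the cost asymmetry, not drawn from real data.
P_PREDATOR = 0.05        # how often a rattle really is a predator
COST_FALSE_ALARM = 1.0   # energy wasted fleeing from mere wind
COST_MISS = 1000.0       # expected cost of ignoring a real predator

def expected_cost(p_flee):
    """Expected cost per rattle for an animal that flees with
    probability p_flee, regardless of the rattle's true cause."""
    false_alarms = (1 - P_PREDATOR) * p_flee * COST_FALSE_ALARM
    misses = P_PREDATOR * (1 - p_flee) * COST_MISS
    return false_alarms + misses

# The "skeptic" never flees; the "jumpy" animal always does.
# Even though predators are rare, jumpiness has the far lower
# expected cost, because a miss is so much more expensive.
print(expected_cost(0.0), expected_cost(1.0))
```

With these invented numbers the skeptic's expected cost is dozens of times higher than the jumpy animal's, so a hair-trigger detector—one that "sees" agents and patterns everywhere—is the strategy selection favors, false alarms and all.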
The second characteristic that may have made us prone to superstition and eventually religion is the fact that human beings (and perhaps a few other primates) have what is called a “theory of mind” engrained in their behavioral repertoire. This is not really a “theory” in the scientific sense of the word, but the very useful ability (for social beings) to project agency onto others in order to understand, predict, and correctly react to their actions. Just as with pattern-seeking behavior, agency projection can go beyond what’s useful—as when we get rather irrationally mad at our computer (note, not at the people who built it or programmed it, but at the machine) for not responding properly to our commands.
How do we combine pattern seeking and agency projection to explain religion? The most common and simple type of religious belief is animism, the idea that natural phenomena have a soul of some sort, a type of diffuse natural divinity. It is from animism that more sophisticated conceptions of religion eventually arose, from pantheism (the identification of individual gods with natural objects, such as the moon, the sun, the planets, and other natural phenomena) all the way to polytheism and monotheism (the personification of gods in humanlike form). It is easy to see that animism may have arisen naturally from pattern seeking and agency projection, and that both these characteristics still underlie all forms of religious belief. It is in this sense that religion can be seen as a by-product of evolution, since the two behavioral traits underlying it do have adaptive significance (in the sense of augmenting an individual’s fitness) and were likely to have been the result of natural selection (though this is by far more clear in the widespread case of pattern seeking—which we share with many species—than with agency projection, since we don’t know much about the evolution of consciousness in prehuman species).
All of this being said, we have limited our analysis to the issue of genetic evolution of superstition and religion, be it as a direct result of natural selection or, as I think more likely, as a by-product of preexisting human characteristics. Clearly, however, religion has also been affected by a dramatic degree of cultural evolution, to which we now turn in order to complete our picture of how religion came to be such a dominant component of most people’s lives. To begin with, let us establish why genetic evolution is not sufficient to explain the totality of the religious phenomenon. We can understand this through two types of consideration. First, it is easy to show that there simply isn’t enough genetic variation in the human population to explain the variety of religious beliefs and practices. Perhaps this is a bit like shooting fish in a barrel, since very few people would seriously maintain that every aspect of human behavior can be accounted for entirely by our genes (though you’d be surprised!). Still, the point is that the number of gene variants affecting behavior that are present in the human population is probably much smaller than what it would take if genes really were the only, or even chiefly, causal factor as far as our beliefs are concerned, particularly our beliefs in the supernatural. There just aren’t enough genes in the human genome to go around.
To better appreciate this first argument, it is helpful to introduce my second one, since the two are related. Recall my earlier example of the distinction between the ability to have language and the variety of actual human languages. I said that the ability to have language is genetically engrained, while the variety of languages is the result of cultural evolution. That is, our genes teach us how to speak (somehow; the details are far from clear, despite some spectacular recent advances), but not which language to speak. Indeed, there wouldn’t be enough genes in the human genome to codify all the words in a typical advanced human language, like English, Chinese, or Spanish—let alone to genetically encode all seven thousand currently used languages (and the many more that are no longer spoken). My argument is that the difference between language ability and spoken languages nicely parallels the difference between a propensity for superstition and the bewildering cultural phenomenon of religion. If you are still not convinced, here is yet another example that makes clear the limits of genetic explanations of human cultural phenomena, of whatever nature. It is undoubtedly true that we have a natural craving for fatty and sugary foods, which in turn is likely to be genetically encoded in us so that we take advantage of every opportunity we encounter to store fats and sugars—after all, it used to make the difference between death and survival. This explanation, however, tells us next to nothing (or if you prefer, tells us only “trivial truths”) about a huge range of cultural phenomena related to food, from fast food to gourmet restaurants, from cookbooks to Top Chef, and so on.
At this point it is pretty much de rigueur to examine the concept summarized by a word we briefly encountered at the beginning of this chapter: meme. A meme is supposed to be a unit of cultural evolution, and memetics is the (alleged) science that studies the evolution of memes. As we have seen, the concept (and term) can be traced back to 1976 and the publication of Dawkins’s The Selfish Gene, which was aimed at explaining some counterintuitive aspects of evolutionary theory to the general public. Dawkins drew an analogy between memes and genes, the units of biological hereditary information, though to his credit he did not push the idea beyond the status of mere metaphor (others did so afterward).
Let me be blunt about this: I do not think memetics is in the least helpful in understanding how cultural evolution takes place. Although a thorough discussion of memetics would be beyond both the scope and the point of this book, we have to tackle the issue because you as a reader are bound to encounter it whenever the broader topic of genetics versus culture comes up. There is no question at all that cultural traits evolve, in a sense, above and beyond genetic evolution. It is also equally clear that they do so by following non-Darwinian dynamics. While genes are inherited vertically (for the most part; there are exceptions), that is, from one generation to the next, cultural traits are inherited both vertically (from parents to offspring—by far the best statistical predictor of your religious or political affiliation is your parents’ religion or political sympathy) and horizontally, that is, through learning from other individuals. But the disanalogy between genes and memes runs much deeper than this.
To begin with, memes (unlike genes) are so ill defined that everything from a catchy tune that gets stuck in your head to “religion” can be called a meme, making it an extremely vague and highly heterogeneous category of cultural phenomena. Second, while we know what genes are made of (a couple of different types of nucleic acids), memes have no definable physical basis. They are ideas, after all, and ideas can be instantiated as neural patterns in someone’s head, as a book, as bits stored on a computer’s hard drive, or as something else entirely. Third, while we know what it means, chemically, for a gene to mutate, the analogous process for memes is once again hopelessly vague—and without an understanding of memetic mutation no scientific theory of memes can be generated. Fourth, and most crucially, the fundamental attraction of the analogy between genes and memes is that we gain an explanation (not just a description) of what goes on in cultural evolution. Except that we actually don’t.
You see, what allows the theory of evolution to work as a scientific theory—as opposed to an empty truism—is that we know enough about what genes do to be able to make predictions about which genes should be favored by natural selection and which ones shouldn’t. We can then go out there in the real world, test those predictions, and modify our understanding accordingly. That is the way science works.
But we have no idea at all how the “functional ecology” (to use the proper term from the biological sciences) of memes is supposed to work. We don’t know why a particular tune, or a particular religion, should be favored by memetic selection while other tunes or religions shouldn’t be. As a consequence, to say that a particular meme (say, the tune of the A-Team TV series) spreads because it is favored by selection really amounts to saying that the tune spreads because it spreads, thereby making memetic theory tautological. It is a pretty metaphor entirely devoid of actual science, though I cannot say whether this is because memetics is fundamentally flawed as an idea for a research program, or simply because it is a relatively young enterprise (thirty-five years and counting). We need to keep in mind, however, that the science of genetics at the same age had made spectacularly more progress than memetics has so far. Be that as it may, memetics is currently not helpful at all in understanding cultural evolution, and in fact it muddies the water significantly.
The general picture of the evolution of religion that I see emerging so far, then, is similar to the one we have seen for the evolution of morality, or the one that was probably responsible for the evolution of language. Genetic evolution provided the building blocks of pattern-seeking and agency projection behaviors, getting the process started as a by-product of behaviors that were adaptive for other reasons (understanding threats and opportunities in nature and being able to understand and predict other people’s behavior, respectively). It is also possible—though I am not convinced by the arguments and evidence adduced thus far—that natural selection directly favored religious behavior either because it reduces stress in individual humans or because of the prosocial behavior that makes some groups more competitive than others. The move from superstitious and simple religious beliefs to the bewildering variety and complexity of modern religious cultures, however, was a result of cultural evolution, a process that takes place on top of and by distinct mechanisms from the standard genetic-Darwinian one. All of this provides a reasonable explanation for one of the most important and puzzling human phenomena, and certainly one that should seriously be considered if our goal is to form as rational an understanding of the world as is humanly possible. Needless to say, gods are not actually excluded from the picture I have painted, but they are also very clearly not required. In the next chapter, we will see that even if gods did exist, we still wouldn’t need them for much that’s of any importance to living a meaningful and moral life.