Wizard’s First Rule
Hope and Fear’s Anti–Sweet Spot
In Terry Goodkind’s fantasy novel Wizard’s First Rule, the wizard’s first rule is that people have a tendency to believe things they hope or fear to be true.1 The protagonists use this rule to understand and manipulate others in the novel. In real life, too, hope and fear are foundations of compellingness. Let’s talk about fear first.
Drinking from glass containers is dangerous, because minute bits of glass naturally chip off from the jostling and bumping of the container. These sharp pieces get into your intestines and slowly, over time, tear the cells of the digestive tract.
If you don’t know a lot about materials science, you just might have believed that. Even if you didn’t believe it, you might feel the pull, the temptation, to take the warning seriously. But before I get into the reasons why, please know that I made all that up, and as far as I know glass containers are close to the safest things you can drink from. Cheers.
Fear makes us pay attention, and makes us believe things we otherwise might not. Fear isn’t just feeling scared—it changes our whole mental state to prepare for danger. It shifts attention, heightens perception, affects goals, and primes certain actions, such as running or fighting. According to the idea of negativity bias, we pay more attention to, and are more likely to believe, negative information.
Why would we do this? As mentioned in the last chapter, one of the reasons language is extraordinarily useful is that we don’t have to experience everything ourselves—we can hear someone else’s account of it and form beliefs about what we heard. Epistemologists call beliefs formed from information transferred in this way “testimonial beliefs.” If you disbelieve what you hear, testimony won’t help you very much. We all know that we shouldn’t believe everything we hear, but it turns out that we do, unless we have some specific reason to doubt what we hear or read. This is sometimes called “the default rule.”
Communication is especially useful for conveying information about danger. If you hear that there are venomous snakes on a certain mountain, you can just stay away. This bit of testimony just might save your life. Of course, soon after evolution gave human beings the ability to communicate, the ability to deceive people was greatly enhanced. Perhaps the person told you that there were snakes to keep you from some resource she discovered there. Nonetheless, it’s not hard to imagine how the people who believed the reported danger survived and the ones who ignored warnings didn’t.
We seem to have “better safe than sorry” built into our cognitive architecture. We have a hyperactive danger detector. Psychologist Maha Nasrallah found evidence of our sensitive danger detector in the fact that we are faster at detecting negative words (such as war) than positive ones. People have been shown to be more likely to believe generalizations if they are about dangerous things, according to a study by psychologist Andrei Cimpian.2 In support of this evolutionary explanation for the fear bias, some things seem to be innately frightening. More accurately, it appears that we have a predisposition to learn to be frightened by certain stimuli more than others. Phobias, for example, can emerge after a single bad experience and can be very hard to cure. The most common phobias are of primal dangers: closed spaces, heights, snakes, and spiders. Strangely, the most dangerous things in modern life, such as cars and knives, are rarely the objects of phobias.
Snakes are a particularly interesting example, because some animals need no exposure to snakes at all to fear them. Even though there are no snakes in New Zealand, and there never were, pigeons there fear rubber toy snakes, presumably because of evolutionary pressures in the places the pigeons’ ancestors lived before the birds migrated to the islands.3
Psychologist Susan Mineka studied monkeys’ reactions to snakes. In her lab, she raised monkeys that were never exposed to any outside monkeys. Her monkeys were not afraid of snakes. Indeed, they would reach out and try to play with them. She found, however, that it was very easy to teach them to be afraid of snakes. All that it took was showing them a film of another monkey reacting with fear to a snake. Interestingly, she could not teach them to be afraid of flowers, even when the flower replaced the snake in an otherwise identical film clip.4
What is going on here? It appears that there are certain “intuitive” concepts that are easily learned, even under less-than-ideal learning conditions. Nonintuitive concepts are difficult to acquire, even under ideal learning conditions. This appears to be an example of the controversial Baldwin effect, named after its creator, psychologist James Mark Baldwin. This is a theoretical evolutionary process in which what is passed on is not a particular psychological disposition but a readiness to learn that disposition very easily. In this case, Mineka’s monkeys were primed to be afraid of snakes.5
These examples show that fears can have evolutionary effects as well as effects on our beliefs. And what people believe, and why, can have serious real-world ramifications. Where people might otherwise think critically, their fear bias results in their accepting mere rumors as truth. For example, people will often believe hearsay regarding disease-causing agents.
Fear even affects our political affiliation. Conservatives are more sensitive to disgust and more interested in purity. Their eyes linger longer than those of liberals on repellent images, such as excrement and car wrecks, as shown in experimental research by political scientist John Hibbing.6 In fact, a survey by psychologists Ian McGregor and Paul Nail showed that even making people more frightened in the laboratory makes them temporarily more conservative in their views, and on average people became a bit more conservative after 9/11.7 Why? It could be that being fearful makes one risk averse and more likely to advocate tried-and-true solutions and courses of action.
* * *
The flip side of fear is hope. Although it’s strange to think of it this way, one of the ultimate reasons we do anything is so that we will have beliefs that make us happy—for example, when you have a good job, you are happy because you believe that you have one. Usually, we come to have such beliefs and happiness by achieving our goals. Suppose we have a goal to walk outside. As we stroll through the forest, our perceptual systems indicate that we are indeed outside. This is rewarding because we had a goal to be outside that is now satisfied. It’s not being outside that gives us pleasure, technically, but our belief that we are outside. People with delusions of grandeur get the happiness of believing that they are, perhaps, rich, when in reality they are not. Similarly, there are people who have objectively good lives but, for whatever reason, don’t believe it, and don’t enjoy the happiness normally associated with a good life.
Being productive gives us a rush of satisfaction. Drugs that give us a pleasurable high allow us to feel the rush without having to actually accomplish anything. Taking recreational drugs cheats the part of our minds that drives us to do important things.
People naturally like sweet tastes. Eating sweet things is pleasurable. But we can fool our taste buds with artificial sweeteners that contain no sugar at all. Just as an artificial sweetener fools our mind into thinking there’s sugar without giving us the nutritional benefits of sugar, our minds can latch onto ideas we want to be true even when the evidence is poor.
Suppose a doctor informs a patient that he is very ill. The patient has an active desire to be well, and this desire will not rest until he can believe that he is well. The desire will be satisfied no matter how the belief comes about. Perhaps there’s a medical cure that will allow the patient to get better, to reach his goal of health. However, the process of actually getting cured might be painful, expensive, time consuming, or straight-up impossible, thwarting his goal to be well.
Now imagine a well-meaning friend (or a greedy charlatan) tells him that she believes the doctor is wrong. Perhaps she tells him that instead of undergoing this treatment, all he has to do to be completely cured is to bathe himself in the Nile River. Now the sick person must make a choice: to believe his doctor, who is telling him he needs a painful treatment, or his unqualified friend, who is telling him he can just swim in the Nile. Whom does he believe? Let’s weigh the pros and cons for our hypothetical patient.
What are the benefits of believing the doctor? The patient will have the satisfaction of choosing to believe someone who is an authority and will feel rational and have true, or at least best-informed, beliefs about the world. The downside of believing the doctor is that the belief that one is seriously ill is depressing and frightening. The goal to be well will not necessarily be satisfied, and people don’t like to be in a state of goal frustration.
If he instead believes his friend, then he has to suffer the nagging feeling that he is believing someone who doesn’t know what she is talking about and that he is being irrational. On the other hand, it allows him to believe that he can be well, which feels very good. The terror will evaporate, a great weight will lift. His goal to be well will, undeservedly, be satisfied. It also allows him to avoid the costly and painful treatment.
If the patient believes his friend, the scientifically minded among us will probably say he is in denial. But sometimes the good feelings associated with believing some fact are greater than the discomfort we get from knowing that, at some level, we have no business believing it. Denial, a defense mechanism that denies the truth of painful thoughts, is a perfect example of this. We get the happiness of believing what we want to believe without the hassle of achieving goals or suffering consequences of painful alternative beliefs. People tend to disbelieve things that threaten their view of the world, even if those things are verifiable facts. So there is a motivation to just believe what you want to be true all of the time—simply believe you are married to your dream partner, and you will feel more satisfied with your spouse!
Unfortunately for all creatures that can suffer, the fuel of evolution is not contentment, but reproduction. Believing things just because you hope they are true is the mind cheating itself, and evolution is probably responsible for making sure there are functions in our mind to keep this from happening—a veridical view of the state of the world, no matter how miserable, has a clear adaptive advantage.
Nonetheless, it’s not a perfect system, and people still manage to be in denial about things that others find obviously, painfully wrong. People want to feel good about things, especially themselves and the group they associate themselves with. They want to be successful, healthy, happy, well-liked, attractive, and smart. Thinking highly of ourselves is so pervasive that psychologists have coined at least 15 terms to refer to these “positive illusions.” These illusions exist in spite of the mind’s valiant attempts to keep denial in check.
One class of these positive illusions makes us believe we are better than other people. When good or bad things happen to a person, there are multiple factors involved. When we are successful, we tend to attribute it to our stable traits (for example, “I’m smart”) or to our behaviors (for example, “I got this promotion because I’m a hard worker”). When something bad happens to us, we blame the situation (thinking, for example, “I got fired because my teammates screwed this project up”). This is known as the self-serving bias, which was explored in 1975 by psychologists Dale Miller and Michael Ross. Illusory superiority refers to people’s tendency to overestimate their positive qualities and underestimate their negative ones relative to other people. In general, most people think they are above average on most traits and skills (interestingly, for difficult skills, such as riding a unicycle, average people tend to think they are below average). The stronger this bias is, the harder it is for people to find fault with themselves, no matter what happens. It’s as though each person has an unfalsifiable theory that no possible observation can refute.
We like information that supports the view of the world we already have. When we encounter such information, we pay more attention to it, remember it better, and use it to build further support for our beliefs. People who read science papers that disagree with their views discount the research (and are less confident in science in general, according to a study by psychologist Geoffrey Munro).8 The classic example of this is the persistent belief in the full-moon effect, which holds that people act crazily and more violently when there’s a full moon. Many nurses and police officers swear that this is true, but every careful study has found that it isn’t. But what does happen is this: after hearing about the effect, people tend to think of the full-moon effect when there’s a full moon and people are acting crazy, but not when either one of the two is missing. So when people recall what they think of as relevant information regarding the effect, they only bring to mind the supporting evidence.
This is called “confirmation bias,” and once you start looking for it, you see it everywhere. In fact, seeing confirmation bias everywhere is partly due to confirmation bias.
A related bias is “congruence bias,” which is like confirmation bias except it affects what information is sought rather than how passively received information is treated. People seek information that supports their beliefs rather than evidence that would contradict them.
In one chilling experiment, psychologist Deanna Kuhn looked at how juries determine guilt or innocence. Participants were given some evidence and asked to reach a verdict. You would hope that people would take a look at all the evidence and then decide on a verdict. But instead, participants created a story about what they thought happened, and then sifted through the evidence, looking for information that would support their theory.9
For example, if one believed that people who own guns tend to commit murder more than people who don’t, the congruence bias would make the person more likely to look at the number of people who own guns and commit murder, rather than the number of people who own guns but don’t commit murder, or the people who commit murder but don’t own guns.10
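The reasoning here can be made concrete with a toy calculation. The counts below are invented for illustration (they are not real crime statistics); the point is that the one “confirming” cell the congruence bias draws us to is uninformative on its own—only comparing the murder rate across both groups tells us anything.

```python
# Hypothetical 2x2 contingency counts for a population of 10,000 people.
# All four numbers are made up for illustration.
gun_murder = 10          # own a gun and committed murder
gun_no_murder = 1990     # own a gun, no murder
nogun_murder = 40        # no gun, committed murder
nogun_no_murder = 7960   # no gun, no murder

# The congruence bias focuses on gun_murder alone ("10 gun owners
# committed murder!"). But the relevant comparison is the murder
# *rate* within each group.
rate_gun = gun_murder / (gun_murder + gun_no_murder)           # 10 / 2000
rate_nogun = nogun_murder / (nogun_murder + nogun_no_murder)   # 40 / 8000

# With these made-up numbers both rates are 0.005, so the data lend
# no support to the belief, despite the 10 confirming cases.
print(rate_gun, rate_nogun)
```

With these particular numbers the rates come out identical, so a person who checked all four cells would conclude there is no association at all—something the confirming cell alone could never reveal.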
Is confirmation bias a different effect, at a psychological level, from the congruence bias? Just as the positive illusions might not all be psychologically distinct, congruence and confirmation biases might be different behaviors originating from the same internal psychological mechanism. Scientists want to make names for themselves, so they have an incentive to coin a new term to make it look like they’ve discovered something new, when in fact they might just be renaming something old. My friend Darren McKee, from the science podcast The Reality Check, suggested that this itself is an effect. Like other effects, Darren and I understand that the McKee-Davies bias might be caused by some other underlying positive illusion, leading to an interesting conclusion: The McKee-Davies bias might be the first effect that was created as a result of itself.
I claim that people will be riveted by what they hope or fear is true, but these two emotions are opposites. It would be a weak theory that predicts that we will be compelled by everything. Instead, it predicts that things we feel neutral about will be particularly uncompelling. If we look at hope and fear as a continuum,11 my theory predicts that people care about things on the extremes, but not in the boring middle: an anti–sweet spot.
The two conflicting drives, coming from fear and hope, stem from different psychological needs. Our need to experience fear comes from our need to learn to avoid danger. Hope can help us start difficult or frightening tasks, but the desire to experience hope can also come from bypassing the normal routes to the gratification of being right and feeling good about oneself.
* * *
One of the reasons we like certain works of art is because they remind us, perhaps unconsciously, of real things that we have an interest in. Things that make us hopeful or fearful in real life make for compelling art.
Some of these things involve imagery of elements proven to be beneficial to us in our evolutionary environment. This is most clear in our preference for certain kinds of landscapes in paintings and photographs, and in views from our houses.
Although there is a good deal of abstract and nonrepresentational art, simple landscapes and still lifes are usually preferred by people with no interest in art. In an unscientific but nonetheless fascinating project, artists Vitaly Komar and Alexander Melamid tailor-made paintings for each nation based on the results of survey questions. The surprising outcome was the consistency. People worldwide tended to prefer pretty landscapes (often with people in them) to paintings of other subjects.12
It turns out that a lot of what we prefer to see in our landscape paintings can be traced to our survival needs, particularly those of our ancestral evolutionary environment on the African savanna.13 Psychologist John Balling and educational researcher John Falk ran a study showing that eight-year-old (and younger) children prefer images of the African savanna to any other. After that age, they prefer landscapes similar to where they grew up. Humans spent the majority of their evolutionary history in the African savanna and thus have a genetic predisposition to prefer it, which is overcome by our experiences as we grow up.14
The landscape scenes we tend to like, in reality and in depictions, have these elements: a view of water, animal life, and diverse flora, especially flowering and fruiting plants. This has a clear survival advantage, in that it indicates the presence of things to eat and drink.
We also like to be able to see the horizon, presumably because we want to be able to see what’s coming. We like low-branching trees, perhaps because they are easiest to climb. We like to look from a place of refuge—where we can see, but are sheltered from the elements and perhaps the envious eyes of others.15 Balconies and porches are attractive because they provide a view but also offer refuge from the world. Our preferences for how our interiors are designed appear to be influenced by our feelings of safety. A study by psychologists Matthias Spörrle and Jennifer Stich had people arrange bedroom furniture in floor plans. They found that people prefer putting their beds in a place from which they can see the door—but where they are not easily seen from the door.16
Perhaps most interestingly, we like open spaces with low grasses and with a few trees. Although we might think of grasses cut low in this way as being particularly artificial, research suggests that our concept of nature in its wild state is actually wrong. Most of what we consider nature reserves in the modern world are shaped by human beings—we manage them to allow certain species (and not others) to survive. There is one reserve in the Netherlands that bucked this trend and operates with as little intervention as possible. On this reserve there are herds of deer and other herbivores, 20 percent of which die every winter. And the grass looks like a putting green from all the grazing. Trees are relatively rare because the animals eat them before they can grow tall enough to be recognizable as trees. It looks more like a city park than the idea of nature that we bring to mind—a dark forest.17 This suggests why we prefer our lawns mowed and like our parks the way we do—they actually represent a natural state that we evolved in. We mow our lawns to mimic what the herds of herbivores we’ve killed off did for us in the past.
It makes a great deal of sense that we would evolve to prefer landscapes we can prosper in. Even differences between groups of people can predict some of the kinds of landscapes we like. For example, a theory about human hunter-gatherer history suggests that males tended to specialize in hunting and exploration and women in foraging (although both did at least a bit of both). I should note that this idea is hotly debated and by no means universally accepted. Nevertheless, as one would predict from this theory, women indeed prefer pictures with vegetation more than men do, and men prefer pictures featuring a good view more than women do. Preferences can be learned, too. Botanist Elizabeth Lyons used a picture-preference study to find that modern farmers overcome the built-in landscape aesthetics and prefer landscapes that would make for good farmland. Anthropologists Erich Synek and Karl Grammer found that as children grow up they prefer more rugged landscape, presumably because they are better able to traverse it.18
We still have deep preferences for nature, even though we are not always aware of it. Walking in a natural environment has even been found to improve cognition, whereas walking around downtown does not. Research by psychologist John Zelenski has found that walking outside makes you happier than walking inside.19
The theory that people like to look at what’s good for them predicts that people would like to look at pictures of food. On the face of it, this appears to be false. People do not decorate their walls with paintings of foods they regularly eat. But if we think of trees and animals as food sources, rather than just as things we see when we take walks in the woods, then indeed we do like to look at food, albeit in its natural state. Perhaps cooked food is too evolutionarily recent for us to have a genetic predisposition to like looking at it, which explains why we like pictures of deer but not pictures of roasts. Or, perhaps, prepared food doesn’t keep long enough for it to be painted. There is a genre of art called the “game-piece,” which shows deceased, hunted animals. Perhaps this is an intermediate form, lying between live animals and hamburgers.
The overall tendency to prefer landscapes in paintings can be interpreted as an unconscious desire to fool ourselves into thinking we’re in a beautiful, natural, safe place. This fits into the “hope” theory of this chapter.
How does our desire to attend to and believe things we fear affect our choices of art? Most people don’t want scary paintings on their walls. However, people also find themselves compelled to look at scary things when they are encountered, such as fires and car accidents, and they like to watch exciting and scary films.
We can also see the cultural effect of fear in musical preferences. Popular music reflects the fears of the times. A study of popular music from 1955 to 2003 by psychologists Terry Pettijohn II and Donald Sacco Jr. found that during threatening social and economic times, popular music has longer songs, more meaningful content, and is more comforting, more romantic, and slower.20
Why do people like sad stories? One reason might be that they make people happy—about their own lives. One study by communications scholar Silvia Knobloch-Westerwick found that people were happier about their own lives after watching a sad movie. It made them count their own blessings.21
Why do some people like horror, which, on the face of it, is scary and unpleasant? Our danger bias is a part of it, but we also like horror movies for the same reason we like to play—they are a safe danger, giving us vicarious practice for dangerous situations. Even though the experience of watching a horror movie might be terrifying, we feel, deep down, that we are watching something important, that we are learning something about the dangers of the world we live in; similarly, we can’t help but watch a car accident or a fight in the street.22 The evidence for this is in our dreams.
One current theory of dreams, the activation-synthesis hypothesis, holds that they consist of random activation of memories or interpretations thereof. However, dreams occur during rapid eye movement (REM) sleep, which is known to help encode memories of how to do things. More relevant to the present discussion is the threat simulation theory.23 This theory holds that a major function of dreams is to provide virtual practice for threatening events (two-thirds of dreams are threatening). Indeed, people who have experienced trauma have more nightmares. If this theory is correct, and parts of our minds accept what we see in the media as true, then scary movies should provoke nightmares just like real-world life-threatening events do. This in fact happens. After watching five seasons of The Sopranos I had anxiety-ridden nightmares about having to deal with violent mafiosi. Nightmares about horror movies are called “nonsense nightmares” because they are training you to deal with problems that don’t really exist, such as how to survive a zombie apocalypse.24
Interestingly, people get lots of practice surviving zombie apocalypses in computer games. In this case, the game is, as far as many parts of our minds are concerned, real practice. Indeed, a study by psychologists Jayne Gackenbach and Beena Kuruvilla found that high-end gamers experience fewer dreams involving threats.25 According to this theory of dreams, gaming provides rehearsal for how to appropriately respond to threats.
We know that parts of our minds think scary stories are important because our minds find them important enough to dream about. If scary stories are important, then we are compelled to experience them. “Please watch The Terminator again,” our lizard brain says, “I need to brush up on how to deal with killer robots from the future.”
At the same time, many stories, even harrowing ones, feature happy endings. Danger is more valuable to watch if combined with a way to escape it. There are cultural differences in how stories tend to end. American plays (as of 1927, anyway) feature happy endings more often than those of other cultures, such as Germany’s.26 In spite of some regional differences, however, my theory predicts that, worldwide, stories that can be classified as having happy endings will be more common than stories that can be classified as having unhappy endings.
* * *
Some things are compelling in a way that makes us passionate, or give us pleasure. Others are compelling in an addictive, obsessive way: we don’t really get a whole lot of pleasure from the activity, but we don’t want to stop, either.27 Experts debate what kinds of things one can become addicted to in a medical sense, but I’m using the term casually, where it can be a positive description of a food or activity that we find so rewarding that we can’t help but continue to partake, such as being addicted to potato chips or to a television show, or playing Tetris.
I’m not aware of any scientific research on the subject of what makes a computer game addictive, but I will speculate that we get addicted to games and puzzles because we feel we are getting better at something. Getting better at something makes us feel good about ourselves, fitting into the theme of hope in this chapter.
Grinding is a computer-game term for a situation in which a player must repeat actions over and over again for some kind of (usually in-game) reward. For example, a player might have his character kill monsters, over and over again, to gain powers or status. For some games, grinding is perceived as essential, because meaningful content is expensive to create and consumed far more rapidly by gamers than can be created by game designers. It’s hard to make grinding fun, but game designers have found clever ways to make grinding addictive. One way to do this is to give a very small reward for each action. In the Lego Star Wars games, nearly every action gives some kind of reward. The player feels that he is constantly making some progress, however small. The external reward in the game is matched with an internal reward in the gamer’s mind. Often this mechanical reward (e.g., experience points or virtual gold pieces) is accompanied by an intrinsically rewarding stimulus, such as a bright icon and a pleasant sound.
Slot machines are computer games that work by grinding. The player does a very simple action—pulling a lever—over and over again, in hope of financial reward. Unlike Lego Star Wars, however, slot machines do not reward every time. Shouldn’t they be less addictive?
Actually, no. It turns out that intermittent reward reinforces behavior even more strongly than reliable reward. Some games take advantage of this by rewarding the player at random intervals. In many computer games, for example, X-Men: Legends, the player can destroy boxes, but the boxes only occasionally reveal valuables. This encourages the player to addictively destroy every box she sees.
Slot machines have another insidious aspect to them—one can feel like one is getting better at them. To understand this, let’s take a familiar example of a child trying to hit a tree with a thrown rock. When the kid throws a rock and it hits the tree, she gets a surge of positive feeling. This is the brain’s internal reward system. It rewards all of the ways her muscles moved, so that in the future she will be more likely to hit the tree again. However, if she fails to hit the tree but comes close to hitting it, there is still reward, albeit not as much. She perceives that she is getting closer to the target, getting better at the task. More like that, the brain says. It feels pretty good to almost hit the tree.
The same thing happens to her when she is an adult using a slot machine, even though in the slot machine case this behavior is irrational. When she gets two “bar” results but not the third, she feels (subconsciously, of course) that she got “close” to the desired result. Her brain assumes she is getting better at the task, so there is a self-generated reward, as Catharine Winstanley found in a brain study.28 It feels good to get close, even in a slot machine. It is completely irrational because the slots are random, getting two “bar” results is not really “close” to getting three, and of course how one pulls the bar has no effect whatsoever on the outcome of the machine. It’s completely different from throwing a rock at a tree, but our minds cannot make this distinction.
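The scale of the near-miss effect is easy to work out. Suppose, purely for illustration, that each of three independent reels shows a bar with probability 0.1 (real machines use different, carefully tuned odds); “close” outcomes then outnumber actual wins by a wide margin:

```python
p = 0.1  # assumed probability that one reel shows a bar (invented value)

# Independent reels: probability that all three show bars (a win)
p_win = p ** 3                      # 0.001

# Exactly two bars out of three (a "near miss"): 3 ways to choose
# which reel misses, times p * p * (1 - p)
p_near_miss = 3 * p * p * (1 - p)   # 0.027

# Near misses are roughly 27 times as common as wins with these
# numbers, so the player's brain gets "almost!" signals constantly.
print(p_near_miss / p_win)
```

However the real odds are set, the combinatorics guarantee that almost-wins vastly outnumber wins, so the irrational “I’m getting closer” reward fires far more often than the jackpot does.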
The vast majority of people at Gamblers Anonymous meetings are not addicted to tabletop blackjack or the roulette wheel—they are addicted to machines. Slot machines and video poker are computer games that take your money even more quickly than the prototypical computer games people think of when they hear the term. What is even worse is that unlike most computer games, casino slot machines collect an incredible amount of data on how the design of their games affects how much money is made. As a result, these gambling computer games are probably the most addictive artifacts ever designed.
* * *
The world experts at catering to our hopes and fears are the news media, who coined the phrase “if it bleeds, it leads.” They know that terrifying stories will get them attention, so they end up reporting as many of them as they can.
Sometimes the news will cleverly play on our hopes and fears one after the other, with a headline or teaser like “Is Your Child Being Mistreated in Preschool? Find Out What You Need to Know.” Like religion, the news scares you and follows up with hope for a solution.
Sometimes the news caters only to hope. No doubt many readers have heard that having a pet increases happiness and health. But few know that the studies reporting no effect are just as numerous. A study showing no such effect is something nobody wants to read. It’s neither scary nor hopeful enough to grab anyone’s attention. It’s in the boring dead zone, the anti–sweet spot. As a result, in this case only the positive gets reported.
Like a lot of news, contemporary legends (popularly known as urban legends) tend to be scary. That is because the scary ones are more likely to be retold, as was found in an experiment by psychologists Jean Fox Tree and Mary Susan Weldon.29 According to my theory, we find cautionary tales compelling because of fear and hope.
* * *
While in China, I got acupuncture for some digestive problems. After I was treated my symptoms got worse. I don’t feel that I have enough information to know if the acupuncture had any effect at all, good or bad. My acupuncturist, however, had no reservations: she claimed that the Western medicine I was taking (for the same sickness) was interfering with the acupuncture, and, even though she was not a doctor, recommended I stop taking it. I never went back.
After the news, nowhere is the desire to believe what we hope or fear more evident than in the case of bad science, particularly in the realm of medicine. Popular superstitions show how easily people get scared. This fear bias leads people to accept rumors and hearsay as truth in situations that call for skepticism.
For example, people will often believe hearsay regarding disease-causing agents. Polio is a horrendous disease that might have been eliminated once and for all from the face of the earth if not for our fear bias. In areas where the vaccine is distributed, rumors that the polio vaccine causes the disease abound, ironically leading to many polio deaths.30 The folk belief that vaccines are dangerous is not limited to third-world countries struggling with polio.
In the Western world, some people, including some very famous and attractive people (I won’t mention names, to protect Jenny McCarthy), believe vaccines cause autism. This belief is not supported by scientific evidence, but people tend to believe it in part because it’s frightening. As in the case of polio, it is not the vaccine but the fear of vaccines that is the real public health threat. When people are afraid of vaccines, fewer people get them, and a higher percentage of the population becomes susceptible to infectious disease. But it’s not just those who fail to get the vaccine who are at risk: when enough people opt out, herd immunity breaks down and everybody is endangered. As pediatrician and leading vaccine advocate Paul Offit says, “It’s not hard to scare people, but it’s extremely difficult to unscare them.”31
I don’t think it’s an exaggeration to say that the engine of medical quackery is fueled by our desire to believe what we hope or fear. The confirmation bias allows us to notice the times a cure seems to work and to ignore the times it doesn’t, or, indeed, the times it makes us worse.
Medical quackery is seriously enhanced by the placebo effect as well. The placebo effect occurs when the mere perception of receiving medical treatment positively affects a person’s health and well-being. If a sick person takes a sugar pill (one with no medicinal ingredients), she is likely to feel better. In cases where the problem is psychosomatic, the result of hypochondria, or has a psychological component (e.g., problems triggered by stress), the placebo might actually make her better.
Note that there is a placebo effect with real medicine too. Because the active ingredient in a drug and the psychological effect of believing you are being treated operate independently, the medicine has its effect and the placebo adds an effect on top of that. The placebo effect is so strong (and, over the years, has been getting stronger) that placebo-controlled trials are absolutely required in all pharmaceutical research. However, when people evaluate cures in day-to-day life (e.g., my cold goes away whenever I take echinacea), or in unscientific studies (e.g., 70 percent of people who took this snake oil had their cold symptoms reduced), the placebo effect, in combination with the confirmation bias, leads people to believe in treatments that don’t work, spend money on them, and, worst of all, neglect more effective treatments. Given the strength of these biases, it is unwise to put much credence in anecdotal observations about medicine. If you hear something that you hope or fear is true, treat it with skepticism, because there are forces within you suppressing your doubts.
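Why anecdotes mislead and controls don’t can be shown with a toy simulation; all the numbers here are invented for illustration. Suppose a cold clears up on its own within a week 70 percent of the time, whether or not a worthless remedy is taken. Judged by anecdote (“I took it and got better”), the remedy looks 70 percent effective; judged against a control group, its measured effect is essentially zero:

```python
import random

def anecdotal_success(n=10_000, recovery_p=0.7, seed=0):
    # Everyone takes the inert remedy; we count how often "I took it
    # and then got better" happens. Recovery is entirely spontaneous.
    rng = random.Random(seed)
    recovered = sum(rng.random() < recovery_p for _ in range(n))
    return recovered / n  # apparent "success rate" of the remedy

def controlled_effect(n=10_000, recovery_p=0.7, seed=0):
    # A placebo-controlled design: treated and untreated groups
    # recover at the same spontaneous rate, so the difference
    # between them, the measured effect, is near zero.
    rng = random.Random(seed)
    treated = sum(rng.random() < recovery_p for _ in range(n))
    control = sum(rng.random() < recovery_p for _ in range(n))
    return treated / n - control / n
```

The anecdotal count lands near 0.7 while the controlled difference lands near 0.0, which is exactly why trials compare against a control group instead of counting testimonials.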
Like medical quackery, religion too is filled with hope and fear. Christianity, Hinduism, and Islam, in particular, have a great deal to say about what risk one takes by not following their edicts and the benefits to be experienced by following them, both now and in an afterlife.
One thing that can be absolutely terrifying is the thought of dying. Some religions attenuate this fear by promoting belief in an afterlife. Indeed, even subtly reminding people of death can make them report being more religious and more likely to believe in God, as shown in a study conducted by psychologist Ian Hansen. A study by Chris Jackson and Leslie Francis found that anxious people are more likely than nonanxious people to consider religion important.32
Dying isn’t the only fear that can affect our religious beliefs. Psychological studies have shown that people are more likely to be religious if they feel lonely, have experienced terrorism, or are feeling financial or physical insecurity. Just making an unempowered person more anxious makes him or her a more zealous religious believer.33
Religion tends to show up where life is hard. Religion is more common in nations that are dysfunctional and have a lower standard of living (as measured by divorce rates, public health expenditures, doctor-to-population ratios, per capita GDP, adult literacy rates, and access to safe drinking water), that have higher income inequality, that have less trust in general, and that are less democratic.34 Predominantly atheist nations tend to be healthier and more peaceful than religious ones (as measured by lower rates of infant mortality, murder, AIDS, abortion, and corruption). People who have not experienced war, have good health, do not feel under threat of terrorism, have stable employment, and are well educated tend to be the least religious. These are the people in least need of reassurance. Finally, religious belief is strongly correlated with hope in individuals.35 All of this makes it sound like an open-and-shut case. Indeed, many people believe that religion’s ability to delude people into a false sense of comfort is the only explanation for religion that is needed.
However, some religions have no completely benevolent gods at all. The religions of the pre-Columbian Mexicans and Mayans were like this.36 Although the evidence cited above indicates that unstable places are more likely to have religion, this does not necessarily mean that the religion they have is of the particularly comforting variety, though it is suggestive. One of the most comforting religions, New Age mysticism, came about in one of the most affluent and secure nations in history. If wish fulfillment were really the only thing going on, you would expect to find particularly comforting religions in the most unpleasant places. Anthropologist Pascal Boyer actually claims that the opposite is true.37 Perhaps people in dysfunctional environments yearn for some kind of understanding or the feeling that some god is in control. People turn to supernatural agents to understand negative and unpredictable events, rather than blaming these events on chance. Fear makes them turn to religious beliefs.
Psychologist Nicholas Epley ran a study in which some people were told that they would end up lonely in life, and he found that this group reported a stronger belief in supernatural agents than a group told that they would have a future full of rewarding relationships. People who feel little control over their lives are more likely to believe that the future can be supernaturally predicted. Simply making people feel out of control in the laboratory does things to their minds, making them more likely to perceive patterns in noise, causal links between unrelated events, and conspiracies. Psychologist Aaron Kay found that making people anxious makes them more religious.38
Cognitive scientist Jesse Bering makes an excellent point that beliefs in hopeful things like the afterlife can’t be strictly a function of people being hopeful. Indeed, there are many ideas that would make us happy if we were to believe them, but those ideas are patently absurd.39 For example, it might make you happy to believe that all the dead batteries in your house will turn into money. Silly, right? Without our natural belief in psychological continuity, the very idea of the mind surviving independently of a living body—the idea of an afterlife—would be considered just as silly.40
But there’s another fact to consider with regard to religion as a denial of death: not all visions of the afterlife are particularly happy. Some religions, including some denominations of Christianity, believe in a bad afterlife, often called “hell,” for people who do bad things. The idea of hell is perhaps comforting when thinking about the bully who reduced your child to tears. This is the common view of religion as wish fulfillment. However, the idea of hell is not so comforting when you’re scared of being sent there yourself. But scary things feel important to know about, so the idea of hell is compelling. It might be that hell is particularly compelling for worriers. Political scientist Daniel Treisman found with surveys that people who believe in hell tend to worry about a lot of other things too.41 Many contemporary Christians tend not to think they are going to hell. This is not the case with contemporary Muslims, who tend to think of hell as a real possibility for themselves. As a result, according to anthropologist Lee Ellis, Muslims have even more death anxiety than nonbelievers!42
Religion can scare people plenty, even without the idea of an afterlife. Anthropologist Pascal Boyer reports that in parts of Melanesia people feel they are under constant threat of witchcraft. Of course, there are rituals one can perform to try to avoid being attacked by witchcraft, which reduces worry. Sometimes religion brings comfort only because it reduces the anxiety the religion created in the first place. Examples like the afterlife in Islam cast doubt on wish fulfillment and comfort as a magic bullet, a single explanation for religion. That is not to say that religion is not often compelling because of the hope and comfort it brings; it’s just that there is more than one factor at work.
Many readers probably take it for granted that the afterlife is an important part of religion in general. But this is not true. Many religions have only vaguely defined ideas of what happens to you after you die, and the nature of the afterlife, if any, has nothing to do with how you behave in life. However, the most popular religions do have some kind of afterlife beliefs. Why is this?
Pascal Boyer argues that religions and other belief systems tend to get reinterpreted and embellished by different peoples as they spread.43 According to Boyer, having scripture, or a doctrine, helps keep the religion what it is, inhibiting splintering into different sects. For the religion to stay what it is, it needs to be one-size-fits-all, and it needs to be written down to keep local variation in check. We call these religions “doctrinal,” because they have a specific doctrine. The many religions without doctrine splinter much more often.
However, each geographical area might have its own ideas about gods and spirits. Doctrinal religions demote these to “lesser” gods. The religious leaders of the doctrinal religions are no better at dealing with these local gods than the local shamans are, so the religion evolves to pay less attention, relative to non-doctrinal religions, to solving problems of the here and now and more attention to what comes next: the afterlife.
So religions without doctrine tend to splinter, and vast religions that try to claim dominion over local gods lose believers over time. Without a good claim on either the here and now or the afterlife, a religion will appear useless. As a result of all these cultural forces, the most popular religions are doctrinal and focus on some kind of afterlife. The reason it’s easy to think that all religions emphasize an afterlife is that the religions we hear about most would not be the most popular if they didn’t! For a religion, belief in an afterlife is adaptive.
* * *
An interesting thing happens when we start to doubt our views. This doubt causes cognitive dissonance, the discomfort that results from holding contradictory thoughts and feelings. When people believe in, say, the power of prayer and encounter evidence (be it scientific or from personal experience) that it doesn’t work, that evidence causes dissonance. People will act to resolve the dissonance, sometimes irrationally.
A study by psychologists David Gal and Derek Rucker showed that people whose closely held beliefs were undermined engaged in more proselytizing behavior. Why would this be so? When those around us believe the same things we do, we take that as a reason to believe them. So if we can convince the people around us of a belief, the dissonance is attenuated.44
Confirmation bias has a great influence on belief in religion and conspiracy theory. In religion, once people believe in a benevolent God, they start to interpret worldly events differently. People attribute good things that happen to God’s will, but not bad things. For example, people might hold God responsible for people being saved from a disaster, but the idea that God might have caused the disaster in the first place might not even cross their minds. Without confirmation bias, people’s views about God would likely be much more complicated. But because they do not consider bad events (or good things happening to bad people) to be God’s responsibility, and perceive good things as coming from God, every good thing that happens supports their theory that God is good.
Likewise, if one’s conception of God involves punishment, then bad events get attributed to God as well. For example, when the tsunami struck Thailand in 2004, there were religious people who claimed that it was an act of God to punish Thailand for its wicked ways. Similar attributions to God were made regarding AIDS and gay men.
The fear of divine retribution is tied up with social compellingness as well. Recall Jonathan Haidt and Jesse Bering’s idea that our tendency to have religion evolved to help maintain the social order. This requires that the gods have knowledge of us and of what we do. In a major study across many religious groups, historian Raffaele Pettazzoni found that the central gods of many religions had a Santa-like intimate knowledge of individuals and what they did.45 As societies get larger, there is less accountability for your actions: not everybody can know you personally, so reputation means less. As a result, larger societies are more likely to feature religions with gods concerned with human morality, as supported by a study by evolutionary scientists Frans Roes and Michel Raymond.46 When your fellow citizens can’t keep you in line, they have gods step in.
This is how people, in their day-to-day lives, avoid the problem of how an all-good, all-powerful god allows such terrible things to happen. Good things are clearly the result of the goodness of God and bad things are the result of people acting immorally—for instance, Pascal Boyer reports that religions the world over hold that incest can cause natural disasters. As good and bad events are interpreted according to their worldviews, people feel as though they have experienced a great deal of evidence in support of their views. In fact they are counting their own biased perceptions and selective memories as objective observations.
What does it mean to believe that God is “all good”? Naturally, it means that God agrees with whatever you believe. And when we change our minds about something, we tend to think either that God changed his mind too or that we have become aware of what his actual opinion was all along. Psychologist Nicholas Epley changed people’s minds with persuasive essays and found that people attributed their new beliefs to God.47
Boyer relates a description of the Kwaio people and how they interact with invisible spirits, or adalo, that reflects the confirmation bias very well as it applies to religious ideas.48 When something bad happens, such as when a sickness spreads, the Kwaio do divination with a root to get yes or no answers from the adalo. They try to find out what the spirit wants as a sacrifice. Often it’s pork. If the problem does not go away, they reason that they must have been communicating with the wrong adalo, not the one who caused the problem. They repeat this cycle until the problem goes away. Then they reason that the requested sacrifice worked. No matter how many times the sacrifice fails, their belief system provides a way for them to keep on believing in it. Their belief systems are immune to counterevidence. Unless you are Kwaio, this probably sounds pretty silly. But I hope that you can also see how someone might come to find this whole thing rather sensible, given certain presuppositions of how the world works.
Let me bring it closer to home. Similar things can happen in Christianity and other monotheistic religions. If you get a good job, you might thank God for helping you. If the job turns out to feature an abusive boss, then you might reason that in fact God gave you that job to test you and make you stronger. You can fit just about anything that happens into the framework you have. What’s alarming about this is that it will feel like you’ve found more evidence for the framework itself.
* * *
Confirmation bias is particularly pernicious among conspiracy theorists. Not only do they seek out and give more weight to information that supports their theories (the congruence and confirmation biases), as all of us do unless we are being very careful, but they interpret disconfirming evidence as further support for the notion that there has been a cover-up. This is one important difference between conspiracy theories and other bad ideas, such as astrology and Reiki: conspiracy theories have embedded within them beliefs about people actively concealing evidence, and so are able to incorporate counterevidence into the conspiracy narrative. Any information that appears to disconfirm the conspiracy theorists’ explanation of what happened can readily be explained by the part of the theory that describes the cover-up.
It is possible that one attraction of conspiracy theories is that they help us make some kind of sense of a chaotic, frightening world. In support of this idea, a study by political psychologist Marina Abalakina-Paap found that people who believe in conspiracy theories tend to feel alienated from society.49
* * *
We are compelled by things we fear because some parts of our minds treat frightening information as important, be it truth, fiction, or lies. We are compelled by things that give us hope because hope makes us feel happy or better prepared to encounter future difficulties.