4
The Emotional Brain
It is remarkable how many horrible ways we could die. Try making a list. Start with the standards like household accidents and killer diseases. After that, move into more exotic fare. “Hit by bus,” naturally. “Train derailment,” perhaps, and “stray bullet fired by drunken revelers.” For those with a streak of black humor, this is where the exercise becomes enjoyable. We may strike a tree while skiing, choke on a bee, or fall into a manhole. Falling airplane parts can kill. So can banana peels. Lists will vary depending on the author’s imagination and tolerance for bad taste, but I’m quite sure that near the end of every list will be this entry: “Crushed by asteroid.”
Everyone knows that deadly rocks can fall from the sky, but outside space camps and science-fiction conventions, the threat of death-by-asteroid is used only as a rhetorical device for dismissing some danger as real but too tiny to worry about. I may have used it myself once or twice. I probably won’t again, though, because in late 2004 I attended a conference that brought together some of the world’s leading astronomers and geoscientists to discuss asteroid impacts.
The venue was Tenerife, one of Spain’s Canary Islands that lie off the Atlantic coast of North Africa. Intentionally or not, it was an ideal setting. The conference was not simply about rocks in space, after all. It was about understanding a very unlikely, potentially catastrophic risk. And the Canary Islands are home to two other very unlikely, potentially catastrophic risks.
First, there are the active volcanoes. All the islands were created by volcanic activity, and Tenerife is dominated by a colossus called Teide, the third-largest volcano in the world. Teide is still quite active, having erupted three times in the last 300 years.
And there is the rift on La Palma mentioned in the last chapter. One team of scientists believes it will drop a big chunk of the island into the Atlantic and several hours later people on the east coast of North and South America will become extras in the greatest disaster movie of all time. Other scientists dispute this, saying a much smaller chunk of La Palma is set to go, that it will crumble as it drops, and that the resulting waves won’t even qualify as good home video. They do agree that a landslide is possible, however, and that it is likely to happen soon in geological terms—which means it could be 10,000 years from now, or tomorrow morning.
Now, one might think the residents of the Canary Islands would find it somewhat unsettling that they could wake to a cataclysm on any given morning. But one would be wrong. Teide’s flanks are covered by large, pleasant towns filled with happy people who sleep quite soundly. There are similarly no reports of mass panic among the 85,000 residents of La Palma. The fact that the Canary Islands are balmy and beautiful probably has something to do with the residents’ equanimity in the face of Armageddon. There are worse places to die. The Example Rule is also in play. The last time Teide erupted was in 1909, and no one has ever seen a big chunk of inhabited island disappear. Survivors would not be so sanguine the day after either event.
But that can’t be all there is to it. Terrorists have never detonated a nuclear weapon in a major city, but the mere thought of that happening chills most people, and governments around the world are working very hard to see that what has never happened never does. Risk analysts call these low-probability/high-consequence events. Why would people fear some but not others? Asteroid impacts—classic low-probability/high-consequence events—are an almost ideal way to investigate that question.
The earth is under constant bombardment by cosmic debris. Most of what hits us is no bigger than a fleck of dust, but because those flecks enter the earth’s atmosphere at speeds of up to 43 miles per second, they pack a punch all out of proportion to their mass. Even the smallest fleck disappears in the brilliant flash of light that we quite misleadingly call a shooting star.
The risk to humans from these cosmic firecrackers is zero. But the debris pelting the planet comes in a sliding scale of sizes. There are bits no bigger than grains of rice, pebbles, throwing stones. They all enter the atmosphere at dazzling speed, and so each modest increase in size means a huge jump in the energy released when they burn.
A rock one-third of a meter across explodes with the force of two tons of dynamite when it hits the atmosphere. About a thousand detonations of this size happen each year. A rock one meter across—a size commonly used in landscaping—erupts with the force of 100 tons of dynamite. That happens about forty times each year.
At three meters across, a rock hits with the force of 2,000 tons of dynamite. That’s two-thirds of the force that annihilated the city of Halifax in 1917, when a munitions-laden ship exploded in the harbor. Cosmic wallops of that force hit the earth roughly twice a year.
And so it goes up the scale, until, at thirty meters across, a rock gets a name change. It is now called an asteroid, and an asteroid of that size detonates in the atmosphere like two million tons of dynamite—enough to flatten everything on the ground within 6 miles. At 100 meters, asteroids pack the equivalent of 80 million tons of dynamite. We have historical experience with this kind of detonation. On June 30, 1908, an asteroid estimated to be 60 meters wide exploded five miles above Tunguska, a remote region in Siberia, smashing flat some 1,200 square miles of forest.
Bigger asteroids get really scary. At a little more than a half mile across, an asteroid could dig a crater 9 miles wide, spark a fireball that appears twenty-five times larger than the sun, shake the surrounding region with a magnitude-7.8 earthquake, and possibly hurl enough dust into the atmosphere to create a “nuclear winter.” Civilization may or may not survive such a collision, but at least the species would. Not so the next weight class. A chunk of rock 6 miles across would add humans and most other terrestrial creatures to the list of species that once existed. This is what did in the dinosaurs.
Fortunately, there aren’t many giant rocks whizzing around space. In a paper prepared for the Organization for Economic Cooperation and Development, astronomer Clark Chapman estimated that the chance of humanity being surprised by a doomsday rock in the next century is one in a million. But the smaller the rock, the more common it is—which means the smaller the rock, the greater the chance of being hit by one. The probability of the earth being walloped by a 300-meter asteroid in any given year is 1 in 50,000, which makes the odds 1 in 500 over the course of the century. If a rock like that landed in the ocean, it could generate a mammoth tsunami. On land, it would devastate a region the size of a small country. For a 100-meter rock, the odds are 1 in 10,000 in one year and 1 in 100 over the next 100 years. At 30 meters, the odds are 1 in 250 per year and 1 in 2.5 over the next 100 years.
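A quick sketch of the arithmetic (my own working, not Chapman’s) shows how the per-year odds convert into per-century odds: the round numbers above come from simple multiplication, which is a close approximation of the exact compounding formula when the annual probability is small.

```latex
% Converting annual impact odds into 100-year odds, using the figures in the text.
\[
  P_{100\,\text{years}} \;=\; 1 - (1 - p)^{100} \;\approx\; 100\,p
  \qquad \text{for small annual probability } p
\]
\[
  \text{300 m: } 100 \times \tfrac{1}{50{,}000} = \tfrac{1}{500}, \qquad
  \text{100 m: } 100 \times \tfrac{1}{10{,}000} = \tfrac{1}{100}, \qquad
  \text{30 m: }  100 \times \tfrac{1}{250} = \tfrac{1}{2.5}
\]
% For the 30-meter case the shortcut overstates the exact figure,
% 1 - (1 - 1/250)^100, which works out to roughly 1 in 3.
```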
Figuring out a rational response to such low-probability/high-consequence risks is not easy. We generally ignore one-in-a-million dangers because they’re just too small and life’s too short. Even risks of 1 in 10,000 or 1 in 1,000 are routinely dismissed. So looking at the probability of an asteroid strike, the danger is very low. But it’s not zero. And what if it actually happens? It’s not one person who’s going to die, or even one thousand or ten thousand. It could be millions, even billions. At what point does the scale of the loss make it worth our while to deal with a threat that almost certainly won’t come to pass in our lifetime or that of our children or their children?
Reason has a typically coldhearted answer: It depends on the cost. If it costs little to protect against a low-probability/high-consequence event, it’s worth paying up. But if it costs a lot, we may be better off putting the money into other priorities—reducing other risks, for example—and taking our chances.
For the most part, this is how governments deal with low-probability/high-consequence hazards. The probability of the event, its consequences, and the cost are all put on the table and considered together. That still leaves lots of room for arguments. Experts endlessly debate how the three factors should be weighted and how the calculation should be carried out. But no one disputes that all three factors have to be considered if we want to deal with these dangers rationally.
With regard to asteroids, the cost follows the same sliding scale as their destructive impact. The first step in mitigating the hazard is spotting the rock and calculating whether it will collide with the earth. If the alarm bell rings, we can then talk about whether it would be worth it to devise a plan to nudge, nuke, or otherwise nullify the threat. But spotting asteroids isn’t easy because they don’t emit light; they only reflect it. The smaller the rock, the harder and more expensive it is to spot. Conversely, the bigger the rock, the easier and cheaper it is to detect.
That leads to two obvious conclusions. First, asteroids at the small end of the sliding scale should be ignored. Second, we definitely should pay to locate those at the opposite end. And that has been done. Beginning in the early 1990s, astronomers created an international organization called Spaceguard, which coordinates efforts to spot and catalog asteroids. Much of the work is voluntary, but various universities and institutions have made modest contributions, usually in the form of time on telescopes. At the end of the 1990s, NASA gave Spaceguard funding of $4 million a year (from its $10 billion annual budget). As a result, astronomers believe that by 2008 Spaceguard will have spotted 90 percent of asteroids bigger than a half mile across.
That comes close to eliminating the risk from asteroids big enough to wipe out every mammal on earth, but it does nothing about smaller asteroids—asteroids capable of demolishing India, for example. Shouldn’t we pay to spot them, too? Astronomers think so. So they asked NASA and the European Space Agency for $30 to $40 million a year for ten years. That would allow them to detect and record 90 percent of asteroids 140 meters and bigger. There would still be a small chance of a big one slipping through, but it would give the planet a pretty solid insurance policy against cosmic collisions—not bad for a one-time expense of $300 million to $400 million. That’s considerably less than the original amount budgeted to build a new American embassy in Baghdad, and not a lot more than the $195 million owed by foreign diplomats to New York City for unpaid parking tickets.
But despite a lot of effort over many years, the astronomers couldn’t get the money to finish the job. A frustrated Clark Chapman attended the conference in Tenerife. It had been almost twenty-five years since the risk was officially recognized, the science wasn’t in doubt, public awareness had been raised, governments had been warned, and yet the progress was modest. He wanted to know why.
To help answer that question, the conference organizers brought Paul Slovic to Tenerife. With a career that started in the early 1960s, Slovic is one of the pioneers of risk-perception research. It’s a field that essentially began in the 1970s as a result of proliferating conflicts between expert and lay opinion. In some cases—cigarettes, seat belts, drunk driving—the experts insisted the risk was greater than the public believed. But in more cases—nuclear power was the prime example—the public was alarmed by things most experts insisted weren’t so dangerous. Slovic, a professor of psychology at the University of Oregon, cofounded Decision Research, a private research corporation dedicated to figuring out why people reacted to risks the way they did.
In studies that began in the late 1970s, Slovic and his colleagues asked ordinary people to estimate the fatality rates of certain activities and technologies, to rank them according to how risky they believed them to be, and to provide more details about their feelings. Do you see this activity or technology as beneficial? Something you voluntarily engage in? Dangerous to future generations? Little understood? And so on. At the same time, they quizzed experts—professional risk analysts—on their views.
Not surprisingly, experts and laypeople disagreed about the seriousness of many items. Experts liked to think—and many still do—that this simply reflected the fact that they know what they’re talking about and laypeople don’t. But when Slovic subjected his data to statistical analyses it quickly became clear there was much more to the explanation than that.
The experts followed the classic definition of risk that has always been used by engineers and others who have to worry about things going wrong: Risk equals probability times consequence. Here, “consequence” means the body count. Not surprisingly, the experts’ estimate of the fatalities inflicted by an activity or technology corresponded closely with their ranking of the riskiness of each item.
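Written out, that definition is a one-line formula; the notation here is mine, but the idea is exactly the one the experts were applying.

```latex
% The engineers' definition of risk: probability of the event times its consequence,
% with consequence measured as the expected body count.
\[
  \text{Risk} \;=\; p \times C
\]
% Rank items this way and, by construction, the ranking tracks annual fatalities closely.
```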
When laypeople estimated how fatal various risks were, they got mixed results. In general, they knew which items were most and least lethal. Beyond that, their judgments varied from modestly incorrect to howlingly wrong. Not that people had any clue that their hunches might not be absolutely accurate. When Slovic asked people to rate how likely it was that an answer was wrong, they often scoffed at the very possibility. One-quarter actually put the odds of a mistake at less than 1 in 100—although 1 in 8 of the answers rated so confidently were, in fact, wrong. It was another important demonstration of why intuitions should be treated with caution—and another demonstration that they aren’t.
The most illuminating results, however, came out of the ranking of riskiness. Sometimes, laypeople’s estimate of an item’s body count closely matched how risky they felt the item to be—as it did with the experts. But sometimes there was little or no link between “risk” and “annual fatalities.” The most dramatic example was nuclear power. Laypeople, like experts, correctly said it inflicted the fewest fatalities of the items surveyed. But the experts ranked nuclear power as the twentieth most risky item on a list of thirty, while most laypeople said it was number one. Later studies had ninety items, but again nuclear power ranked first. Clearly, people were doing something other than multiplying probability and body count to come up with judgments about risk.
Slovic’s analyses showed that if an activity or technology were seen as having certain qualities, people boosted their estimate of its riskiness regardless of whether it was believed to kill lots of people or not. If it were seen to have other qualities, they lowered their estimates. So it didn’t matter that nuclear power didn’t have a big body count. It had all the qualities that pressed our risk-perception buttons, and that put it at the top of the public’s list of dangers.
1. Catastrophic potential: If fatalities would occur in large numbers in a single event—instead of in small numbers dispersed over time—our perception of risk rises.
2. Familiarity: Unfamiliar or novel risks make us worry more.
3. Understanding: If we believe that how an activity or technology works is not well understood, our sense of risk goes up.
4. Personal control: If we feel the potential for harm is beyond our control—like a passenger in an airplane—we worry more than if we feel in control—the driver of a car.
5. Voluntariness: If we don’t choose to engage the risk, it feels more threatening.
6. Children: It’s much worse if kids are involved.
7. Future generations: If the risk threatens future generations, we worry more.
8. Victim identity: Identifiable victims rather than statistical abstractions make the sense of risk rise.
9. Dread: If the effects generate fear, the sense of risk rises.
10. Trust: If the institutions involved are not trusted, risk rises.
11. Media attention: More media means more worry.
12. Accident history: Bad events in the past boost the sense of risk.
13. Equity: If the benefits go to some and the dangers to others, we raise the risk ranking.
14. Benefits: If the benefits of the activity or technology are not clear, it is judged to be riskier.
15. Reversibility: If the effects of something going wrong cannot be reversed, risk rises.
16. Personal risk: If it endangers me, it’s riskier.
17. Origin: Man-made risks are riskier than those of natural origin.
18. Timing: More immediate threats loom larger while those in the future tend to be discounted.
Many of the items on Slovic’s list look like common sense. Of course something that puts children at risk presses our buttons. Of course something that involves only those who choose to get involved does not. And one needn’t have ever heard of the Example Rule to know that a risk that gets more media attention is likely to bother us more than one that doesn’t.
But for psychologists, one item on the list—“familiarity”—is particularly predictable, and particularly important. We are bombarded with sensory input, at every moment, always. One of the most basic tasks of the brain is to swiftly sort that input into two piles: the important stuff that has to be brought to the attention of the conscious mind and everything else. What qualifies as important? Mostly, it’s anything that’s new. Novelty and unfamiliarity—surprise—grab our attention like nothing else. Drive the same road you’ve driven to work every day for the last twelve years and you are likely to pay so little conscious attention that you may not remember a thing you’ve seen when you pull into the parking lot. That is if the drive is the same as it always is. But if, on the way to work, you should happen to see a naked, potbellied man doing calisthenics on his front lawn, your consciousness will be roused from its slumber and you will arrive at work with a memory you may wish were a little less vivid.
The flip side of this is a psychological mechanism called habituation. It’s the process that causes a stimulus we repeatedly experience without positive or negative consequences to gradually fade from our attention. Anyone who wears perfume or cologne has experienced habituation. When you buy a new scent and put it on, you catch a whiff of the fragrance all day long. The next day, the same. But if you wear it repeatedly, you gradually notice it less and less. Eventually, you may smell it only the moment you put it on and you will hardly pay attention to it even then. If you’ve ever wondered how the guy in the next cubicle at work can stand to reek of bad cologne all day, wonder no more.
Habituation is particularly important in coping with risk because risk is everywhere. Have a shower in the morning and you risk slipping and breaking your neck. Eat a poached egg and you could be poisoned. Drive to work and you may be crushed, mangled, or burned alive. Walk to work and carcinogenic solar radiation may rain down on you, or you may be hit by a bus or have a heart attack or be crushed by an asteroid. Of course, the chance of any of these horrible things happening is tiny—exposure to sunshine excepted, of course—and it would be a waste of our mental resources to constantly be aware of them. We need an “off” switch. That switch is habituation.
To carry out her famous observations of chimpanzees, primatologist Jane Goodall sat very still in their midst and watched them go about their ordinary business hour after hour, something that was possible only because the chimpanzees essentially ignored Goodall. To get the chimps to do that, Goodall had to show up and sit down, day after day, month after month, until the animals’ alarm and curiosity faded and they stopped paying attention to her. The same process can be observed in other species. As I am writing this sentence, there is a black squirrel on my windowsill eagerly chewing birdseed without the slightest regard for the large omnivore sitting in a chair just a couple of feet away. The birds that share the seed are equally blasé when I am in my backyard, although I would need binoculars to see them up close in a forest. As for humans, simply recall the white-knuckle grip you had on the steering wheel the first time you drove on a freeway and then think of the last time the sheer boredom of driving nearly caused you to fall asleep at the wheel. If you had been asked on that first drive how dangerous it is to drive on a freeway, your answer would be a little different than it is now that habituation has done its work.
Habituation generally works brilliantly. The problem with it, as with everything the unconscious mind does, is that it cannot account for science and statistics. If you’ve smoked cigarettes every hour, every day, for years, without suffering any harm, the cigarette in your hand won’t feel threatening. Not even your doctor’s warnings can change that, because it is your conscious mind that understands the substance of the warning, and your conscious mind does not control your feelings. The same process of habituation can also explain why someone can become convinced it isn’t so risky to drive a car drunk, or to not wear a seat belt, or to ride a motorcycle without a helmet. And if you live quietly for years in a pleasant Spanish town, you’re unlikely to give a second thought to the fact that your town is built on the slopes of the world’s third-largest active volcano.
For all the apparent reasonableness of Paul Slovic’s list of risk factors, however, its value is limited. The problem is the same one that bedevils focus groups. People know what they like, what they fear, and so on. But what’s the source of these judgments? Typically, it is the unconscious mind—Gut. The judgment may come wholly from Gut, or it may have been modified by the conscious mind—Head. But in either case, the answer to why people feel as they do lies at least partly within Gut. Gut is a black box; Head can’t peer inside. And when a researcher asks someone to say why she feels the way she does about a risk, it’s not Gut she is talking to. It is Head.
Now, if Head simply answered the researcher’s question with a humble “I don’t know,” that would be one thing. But Head is a compulsive rationalizer. If it doesn’t have an answer, it makes one up.
There’s plenty of evidence for rationalization, but the most memorable—certainly the most bizarre—was a series of experiments on so-called split-brain patients by neuroscientist Michael Gazzaniga. Ordinarily, the left and right hemispheres of the brain are connected and communicate in both directions, but one treatment for severe epilepsy is to sever the two sides. Split-brain patients function surprisingly well, but scientists realized that because the two hemispheres handle different sorts of information, each side can learn something that the other isn’t aware of. This effect could be induced deliberately in experiments by exposing only one eye or the other to written instructions. In one version of his work, Gazzaniga used this technique to instruct the right hemisphere of a split-brain patient to stand up and walk. The man got up and walked. Gazzaniga then verbally asked the man why he was walking. The left hemisphere handles such “reason” questions, and even though that hemisphere had no idea what the real answer was, the man immediately responded that he was going for a soda. Variations on this experiment always got the same result: The left hemisphere quickly and ingeniously fabricated explanations rather than admitting it had no idea what was going on. And the person whose lips delivered these answers believed every word.
When a woman tells a researcher how risky she thinks nuclear power is, what she says is probably a reliable reflection of her feelings. But when the researcher asks the person why she feels the way she does, her answer is likely to be partly or wholly inaccurate. It’s not that she is being deceitful. It’s that her answer is very likely to be, in some degree, a conscious rationalization of an unconscious judgment. So maybe it’s true that what really bothers people about nuclear power are the qualities on Slovic’s checklist. Or maybe that stuff is just Head rationalizing Gut’s judgment. Or maybe it’s a little of both. The truth is, we don’t know what the truth is.
Slovic’s list was, and still is, very influential in the large and growing business of risk communication because it provided a handy checklist that allowed analysts to quickly and easily come up with a profile for any risk. Is it man-made? Is it involuntary? Its simplicity also appealed to the media. Newspaper and magazine articles about risk still recite the items on the list as if they explain everything we need to know about why people react to some dangers and not others. But Slovic himself acknowledges the list’s limitations. “This was the mid-1970s. At the time we were doing the early work, we had no real appreciation for the unconscious, automatic system of thought. Our approach assumed that this was the way people were analyzing risks, in a very thoughtful way.”
Ultimately, Slovic and his colleagues found a way out of this box with the help of two clues buried within their data. The first lay in the word dread. Slovic found that dread—plain old fear—was strongly correlated with several other items on the list, including catastrophic, involuntary, and inequitable. Unlike some of the other items, these are loaded with emotional content. And he found that this cluster of qualities—which he labeled “the dread factor”—was by far the strongest predictor of people’s reaction to an activity or technology. This was a strong hint that there was more going on in people’s brains than cool, rational analysis.
The second clue lay in something that looked, on the surface, to be a meaningless quirk. It turned out that people’s ratings of the risks and benefits for the ninety activities and technologies on the list were connected. If people thought the risk posed by something was high, they judged the benefit to be low. The reverse was also true. If they thought the benefit was high, the risk was seen as low. In technical terms, this is an “inverse correlation.” It makes absolutely no sense here because there’s no logical reason that something—say, a new prescription drug—can’t be both high risk and high benefit. It’s also true that something can be low risk and low benefit—sitting on the couch watching Sunday afternoon football comes to mind. So why on earth did people put risk and benefit at opposite ends of a seesaw? It was curious but it didn’t seem important. In his earliest papers on risk, Slovic mentioned the finding in only a sentence or two.
In the years to come, however, the model of a two-track mind—Head and Gut operating simultaneously—advanced rapidly. A major influence in this development was the work of Robert Zajonc, a Stanford psychologist, who explored what psychologists call affect—which we know simply as feeling or emotion. Zajonc insisted that we delude ourselves when we think that we evaluate evidence and make decisions by calculating rationally. “This is probably seldom the case,” he wrote in 1980. “We buy cars we ‘like,’ choose the jobs and houses we find ‘attractive,’ and then justify those choices by various reasons.”
With this new model, Slovic understood the limitations of his earlier research. Working with Ali Alhakami, a Ph.D. student at the University of Oregon, he also started to realize that the perceived link between risk and benefit he discovered earlier may have been much more than a quirk. What if people were reacting unconsciously and emotionally at the mention of a risky activity or technology? They hear “nuclear power” and . . . ugh! They have an instantaneous, unconscious reaction. This bad feeling actually happens prior to any conscious thought, and because it comes first, it shapes and colors the thoughts that follow—including responses to the researchers’ questions about risk.
That would explain why people see risk and benefit as if they were sitting at opposite ends of a seesaw. How risky is nuclear power? Nuclear power is a Bad Thing. Risk is also bad. So nuclear power must be very risky. And how beneficial is nuclear power? Nuclear power is Bad, so it must not be very beneficial. When Gut reacts positively to an activity or technology—swimming, say, or aspirin—it tips the seesaw the other way: Aspirin is a Good Thing so it must be low risk and high benefit.
To test this hypothesis, Slovic and Alhakami, along with colleagues Melissa Finucane and Stephen Johnson, devised a simple experiment. Students at the University of Western Australia were divided into two groups. The first group was shown various potential risks—chemical plants, cell phones, air travel—on a computer screen and asked to rate the riskiness of the item on a scale from one to seven. Then they rated the benefits of each. The second group did the same, except that they had only a few seconds to make their decisions.
Other research had shown that time pressure reduces Head’s ability to step in and modify Gut’s judgment. If Slovic’s hypothesis was correct, the seesaw effect between risk and benefit should be stronger in the second group than the first. And that’s just what they found.
In a second experiment, Slovic and Alhakami had students at the University of Oregon rate the risks and benefits of a technology (different trials used nuclear power, natural gas, and food preservatives). Then they were asked to read a few paragraphs describing some of the benefits of the technology. Finally, they were asked again to rate the risks and benefits of the technology. Not surprisingly, the positive information they read raised students’ ratings of the technology’s benefits in about one-half of the cases. But most of those who raised their estimate of the technology’s benefits also lowered their estimate of the risk—even though they had not read a word about the risk. Later trials in which only risks were discussed had the same effect but in reverse: People who raised their estimate of the technology’s risks in response to the information about risk also lowered their estimate of its benefit.
Various names have been used to capture what’s going on here. Slovic calls it the “affect heuristic.” I prefer to think of it as the Good-Bad Rule. When faced with something, Gut may instantly experience a raw feeling that something is Good or Bad. That feeling then guides the judgments that follow: “Is this thing likely to kill me? It feels good. Good things don’t kill. So, no, don’t worry about it.”
The Good-Bad Rule helps to solve many riddles. In Slovic’s original studies, for example, he found that people consistently underestimated the lethality of all diseases except one: The lethality of cancer was actually overestimated. One reason that might be is the Example Rule. The media pays much more attention to cancer than diabetes or asthma, so people can easily recall examples of deaths caused by cancer even if they don’t have personal experience with the disease. But consider how you feel when you read the words diabetes and asthma. Unless you or someone you care about has suffered from these diseases, chances are they don’t spark any emotion. But what about the word cancer? It’s like a shadow slipping over the mind. That shadow is affect—the “faint whisper of emotion,” as Slovic calls it. We use cancer as a metaphor in ordinary language—meaning something black and hidden, eating away at what’s good—precisely because the word stirs feelings. And those feelings shape and color our conscious thoughts about the disease.
The Good-Bad Rule also helps explain our weird relationship with radiation. We fear nuclear weapons, reasonably enough, while nuclear power and nuclear waste also give us the willies. Most experts argue that nuclear power and nuclear waste are not nearly as dangerous as the public thinks they are, but people will not be budged. On the other hand, we pay good money to soak up solar radiation on a tropical beach and few people have the slightest qualms about deliberately exposing themselves to radiation when a doctor orders an X-ray. In fact, Slovic’s surveys confirmed that most laypeople underestimate the (minimal) dangers of X-rays.
Why don’t we worry about suntanning? Habituation may play a role, but the Good-Bad Rule certainly does. Picture this: you, lying on a beach in Mexico. How does that make you feel? Pretty good. And if it is a Good Thing, our feelings tell us, it cannot be all that risky. The same is true of X-rays. They are a medical technology that saves lives. They are a Good Thing, and that feeling eases any worries about the risk they pose.
On the other end of the scale are nuclear weapons. They are a Very Bad Thing—which is a pretty reasonable conclusion given that they are designed to annihilate whole cities in a flash. But Slovic has found feelings about nuclear power and nuclear waste are almost as negative, and when Slovic and some colleagues examined how the people of Nevada felt about a proposal to create a dump site for nuclear waste in that state, they found that people judged the risk of a nuclear waste repository to be at least as great as that of a nuclear plant or even a nuclear weapons testing site. Not even the most ardent anti-nuclear activist would make such an equation. It makes no sense—unless people’s judgments are the product of intensely negative feelings toward all things “nuclear.”
Of course, the Example Rule also plays a role in the public’s fear of nuclear power, given the ease with which we latch onto images of the Chernobyl disaster the moment nuclear power is mentioned. But popular fears long predate those images, suggesting there is another unconscious mechanism at work. This illustrates an important limitation in our understanding of how intuitive judgment works, incidentally. By carefully designing experiments, psychologists are able to identify mechanisms like the Example Rule and the Good-Bad Rule, and we can look at circumstances in the real world and surmise that this or that mechanism is involved. But what we can’t do—at least not yet—is tease out precisely which mechanisms are doing what. We can say only that people’s intuitions about nuclear power may be generated by either the Example Rule or the Good-Bad Rule, or both.
We’re not used to thinking of our feelings as the sources of our conscious decisions, but research leaves no doubt. Studies of insurance, for example, have revealed that people are willing to pay more to insure a car they feel is attractive than one that is not, even when the monetary value is the same. A 1993 study even found that people were willing to pay more for airline travel insurance covering “terrorist acts” than for deaths from “all possible causes.” Logically, that makes no sense, but “terrorist acts” is a vivid phrase dripping with bad feelings, while “all possible causes” is bland and empty. It leaves Gut cold.
Amos Tversky and psychologist Eric Johnson also showed that the influence of bad feelings can extend beyond the thing generating the feelings. They asked Stanford University students to read one of three versions of a story about a tragic death—the cause being either leukemia, fire, or murder—that contained no information about how common such tragedies are. They then gave the students a list of risks—including the risk in the story and twelve others—and asked them to estimate how often they kill. As we might expect, those who read a tragic story about a death caused by leukemia rated leukemia’s lethality higher than a control group of students who didn’t read the story. The same with fire and murder. More surprisingly, reading the stories led to increased estimates for all the risks, not just the one portrayed. The fire story caused an overall increase in perceived risk of 14 percent. The leukemia story raised estimates by 73 percent. The murder story led the pack, raising risk estimates by 144 percent. A “good news” story had precisely the opposite effect—it drove down perceived risks across the board.
So far, I’ve mentioned things—murder, terrorism, cancer—that deliver an unmistakable emotional wallop. But scientists have shown that Gut’s emotional reactions can be much subtler than that. Robert Zajonc, along with psychologists Piotr Winkielman and Norbert Schwarz, conducted a series of experiments in which Chinese ideographs flashed briefly on a screen. Immediately after seeing an ideograph, the test subjects, students at the University of Michigan, were asked to rate the image from one to six, with six being very liked and one not liked at all. (Anyone familiar with the Chinese, Korean, or Japanese languages was excluded from the study, so the images held no literal meaning for those who saw them.)
What the students weren’t told is that just before the ideograph appeared, another image was flashed. In some cases, it was a smiling face. In others, it was a frowning face or a meaningless polygon. These images appeared for the smallest fraction of a second, such a brief moment that they did not register on the conscious mind and no student reported seeing them. But even this tiny exposure to a good or bad image had a profound effect on the students’ judgment. Across the board, ideographs preceded by a smiling face were liked more than those that weren’t positively primed. The frowning face had the same effect in the opposite direction.
Clearly, emotion had a powerful influence and yet not one student reported feeling any emotion. Zajonc and other scientists believe that can happen because the brain system that slaps emotional labels on things—nuclear power: bad!—is buried within the unconscious mind. So your brain can feel something is good or bad even though you never consciously feel good or bad. (When the students were asked what they based their judgments on, incidentally, they cited the ideograph’s aesthetics, or they said that it reminded them of something, or they simply insisted that they “just liked it.” The conscious mind hates to admit it simply doesn’t know.)
After putting students through the routine outlined above, Zajonc and his colleagues then repeated the test. This time, however, the images of faces were switched around. If an ideograph had been preceded by a smiling face in the first round, it got a frowning face, and vice versa. The results were startling. Unlike the first round, the flashed images had little effect. People stuck to their earlier judgments. An ideograph judged likeable in the first round because—unknown to the person doing the judging—it was preceded by a smiling face was judged likeable in the second round even though it was preceded by a frowning face. So emotional labels stick even if we don’t know they exist.
In earlier experiments—since corroborated by a massive amount of research—Zajonc also revealed that positive feeling for something can be created simply by repeated exposure to it, while existing positive feelings can be strengthened with more exposure. Now known as the “mere exposure effect,” this phenomenon is neatly summed up in the phrase “familiarity breeds liking.” Corporations have long understood this, even if only intuitively. The point of much advertising is simply to expose people to a corporation’s name and logo in order to increase familiarity, and, as a result, positive feelings toward them.
The mere exposure effect has considerable implications for how we feel about risks. Consider chewing tobacco. Most people today have never seen anyone chew a wad, but someone who lives in an environment where it’s common is likely to have a positive feeling for it buried within his brain. That feeling colors his thoughts about chewing tobacco—including his thoughts about how dangerous it is. Gut senses chewing tobacco is Good. Good things don’t cause cancer. How likely is chewing tobacco to give you cancer? Not very, Gut concludes. Note that the process here is similar to that of habituation, but it doesn’t require the level of exposure necessary for habituation to occur. Note also that this is not the warm glow someone may feel at the sight of a tin of tobacco because it brings back memories of a beloved grandfather who was always chewing the stuff. As the name says, the mere exposure effect requires nothing more than mere exposure to generate at least a little positive feeling. Beloved grandfathers are not necessary.
Much of the research about affect is conducted in laboratories, but when psychologists Mark Frank and Thomas Gilovich found evidence in lab experiments that people have strongly negative unconscious reactions to black uniforms, they dug up corroboration in the real world. All five black-clad teams in the National Football League, Frank and Gilovich found, received more than the league-average number of penalty yards in every season but one between 1970 and 1986. In the National Hockey League, all three teams that wore black through the same period got more than the average number of penalty minutes in every season. The really intriguing thing is that these teams were penalized just as heavily when they wore their alternate uniforms—white with black trim—which is just what you would expect from the research on emotion and judgment. The black uniform slaps a negative emotional label on the team and that label sticks even when the team isn’t wearing black. Gilovich and Frank even found a near-perfect field trial of their theory in the 1979-80 season of the Pittsburgh Penguins. For the first forty-four games of the season, the team wore blue uniforms. During that time, they averaged eight penalty minutes a game. But for the last thirty-five games of the season, the team wore a new black uniform. The coach and players were the same as in the first half of the season, and yet the Penguins’ penalty time rose 50 percent to twelve minutes a game.
Another real-world demonstration of the Good-Bad Rule at work comes around once a year. Christmas isn’t generally perceived as a killer. It probably didn’t even make your list of outlandish ways to die. But it should. ’Tis the season for falls, burns, and electrocutions. In Britain, warns the Royal Society for the Prevention of Accidents (RSPA), holiday events typically include “about 1,000 people going to hospital after accidents with Christmas trees; another 1,000 hurt by trimmings or when decorating their homes; and 350 hurt by Christmas tree lights.” The British government has run ad campaigns noting that people are 50 percent more likely to die in a house fire during the holidays. In the United States, no less an authority than the undersecretary of Homeland Security penned an op-ed in which he warned that fires caused by candles “increase four-fold during the holidays.” Christmas trees alone start fires in 200 homes. Altogether, “house fires during the winter holiday season kill 500 and injure 2,000 people,” wrote the undersecretary, “and cause more than $500 million in damage.”
Now, I am not suggesting we should start fretting about Christmas. Much of the public education around the holiday strikes me as a tad exaggerated, and some of it—like the RSPA press release that draws our attention to the risk of “gravy exploding in microwave ovens”—is unintentionally funny. But compared to some of the risks that have grabbed headlines and generated real public worry in the past—shark attacks, “stranger danger,” Satanic cults, and herpes, to name a few—the risks of Christmas are actually substantial. And yet these annual warnings are annually ignored, or even played for laughs (exploding gravy!) in the media. Why the discrepancy? Part of the answer is surely the powerful emotional content of Christmas. Christmas isn’t just a Good Thing. It’s a Wonderful Thing. And Gut is sure that Wonderful Things don’t kill.
The fact that Gut so often has instantaneous, emotional reactions that it uses to guide its judgments has a wide array of implications. A big one is the role of justice in how we react to risk and tragedy.
Consider two scenarios. In the first, a little boy plays on smooth, sloping rocks at the seashore. The wind is high and his mother has told him not to go too close to the water. But with a quick glance to make sure his mother isn’t looking, the boy edges forward until he can slap his hands on the wet rocks. Intent on his little game, he doesn’t see a large wave roar in. It knocks him backward then pulls him tumbling into the ocean where strong currents drag him into deep water. The mother sees and struggles valiantly to reach him but the pounding waves blind her and beat her back. The boy drowns.
Now imagine a woman living alone with her only child, a young boy. In the community, the woman is perfectly respectable. She has a job, friends. She even volunteers at a local animal shelter. But in private, unknown to anyone, she beats her child mercilessly for any perceived fault. One night the boy breaks a toy. The woman slaps and punches him repeatedly. As the boy cowers in a corner, blood and tears streaking his face, the woman gets a pot from the kitchen and returns. She bashes the boy’s head with the pot, then tosses it aside and orders him to bed. In the night, a blood clot forms in the boy’s brain. He is dead by morning.
Two lives lost, two sad stories likely to make the front page of the newspaper. But only one will prompt impassioned letters to the editor and calls to talk radio shows, and we all know which one it is.
Philosophers and scholars may debate the nature of justice, but for most of us justice is experienced as outrage at a wrong and satisfaction at the denunciation and punishment of that wrong. It is a primal emotion. The woman who murdered her little boy must be punished. It doesn’t matter that she isn’t a threat to anyone else. This isn’t about safety. She must be punished. Evolutionary psychologists argue that this urge to punish wrongdoing is hardwired because it is an effective way to discourage bad behavior. “People who are emotionally driven to retaliate against those who cross them, even at a cost to themselves, are more credible adversaries and less likely to be exploited,” writes cognitive psychologist Steven Pinker.
Whatever its origins, the instinct for blame and punishment is often a critical component in our reactions to risks. Imagine there is a gas that kills 20,000 people a year in the European Union and another 21,000 a year in the United States. Imagine further that this gas is a by-product of industrial processes and scientists can precisely identify which industries, even which factories, are emitting the gas. And imagine that all these facts are widely known but no one—not the media, not environmental groups, not the public—is all that concerned. Many people haven’t even heard of this gas, while those who have are only vaguely aware of what it is, where it comes from, and how deadly it is. And they’re not interested in learning more.
Yes, it is an absurd scenario. We would never shrug off something like that. But consider radon. It’s a radioactive gas that can cause lung cancer if it pools indoors at high concentrations, which it does in regions that scientists can identify with a fair degree of precision. It kills an estimated 41,000 people a year in the United States and the European Union. Public health agencies routinely run awareness campaigns about the danger, but journalists and environmentalists have seldom shown much interest and the public, it’s fair to say, has only a vague notion of what this stuff is. The reason for this indifference is clear: Radon is produced naturally in some rocks and soils. The deaths it inflicts are solitary and quiet and no one is responsible. So Gut shrugs. In Paul Slovic’s surveys, the same people whose knees shook when they thought about radiation sources like nuclear waste dumps rated radon—which has undoubtedly killed more people than nuclear waste ever could—a very low risk. Nature kills, but nature is blameless. No one shakes a fist at volcanoes. No one denounces heat waves. And the absence of outrage is the reason that natural risks feel so much less threatening than man-made dangers.
The Good-Bad Rule also makes language critical. The world does not come with explanatory notes, after all. In seeing and experiencing things, we have to frame them this way or that to make sense of them, to give them meaning. That framing is done with language.
Picture a lump of cooked ground beef. It is a most prosaic object and the task of judging its quality shouldn’t be terribly difficult. There would seem to be few, if any, ways that language describing it could influence people’s judgment. And yet psychologists Irwin Levin and Gary Gaeth did just that in an experiment disguised as marketing research. Here is a sample of cooked beef, the researchers told one group. It is “75 percent lean.” Please examine it and judge it; then taste some and judge it again. With a second group, the researchers provided the same beef but they described it as “25 percent fat.” The result: On first inspection, the beef described as “75 percent lean” got much higher ratings than the “25 percent fat” beef. After tasting the beef, the bias in favor of the “lean” beef declined but was still evident.
Life and death are somewhat more emotional matters than lean and fat beef, so it’s not surprising that the words a doctor chooses can be even more influential than those used in Levin and Gaeth’s experiment. A 1982 experiment by Amos Tversky and Barbara McNeil demonstrated this by asking people to imagine they were patients with lung cancer who had to decide between radiation treatment and surgery. One group was told there was a 68 percent chance of being alive a year after the surgery. The other was told there was a 32 percent chance of dying. Framing the decision in terms of staying alive resulted in 44 percent opting for surgery over radiation treatment, but when the information was framed as a chance of dying, that dropped to 18 percent. Tversky and McNeil repeated this experiment with physicians and got the same results. In a different experiment, Tversky and Daniel Kahneman also showed that when people were told a flu outbreak was expected to kill 600 people, their judgments about which program should be implemented to deal with the outbreak were heavily influenced by whether the expected program results were described in terms of lives saved (200) or lives lost (400).
The vividness of language is also critical. In one experiment, Cass Sunstein—a University of Chicago law professor who often applies psychology’s insights to issues in law and public policy—asked students what they would pay to insure against a risk. For one group, the risk was described as “dying of cancer.” Others were told not only that the risk was death by cancer but that the death would be “very gruesome and intensely painful, as the cancer eats away at the internal organs of the body.” That change in language was found to have a major impact on what students were willing to pay for insurance—an impact that was even greater than making a large change in the probability of the feared outcome. Feeling trumped numbers. It usually does.
Of course, the most vivid form of communication is the photographic image, and, not surprisingly, there’s plenty of evidence that awful, frightening photos not only grab our attention and stick in our memories—which makes them influential via the Example Rule—they conjure emotions that influence our risk perceptions via the Good-Bad Rule. It’s one thing to tell smokers their habit could give them lung cancer. It’s quite another to see the blackened, gnarled lungs of a dead smoker. That’s why several countries, including Canada and Australia, have replaced text-only health warnings on cigarette packs with horrible images of diseased lungs, hearts, and gums. They’re not just repulsive. They increase the perception of risk.
Even subtle changes in language can have considerable impact. Paul Slovic and his team gave forensic psychiatrists—men and women trained in math and science—what they were told was another clinician’s assessment of a mental patient, Mr. Jones, confined to an institution. Based on this assessment, the psychiatrists were asked, would you release this patient? Half the assessments estimated that patients similar to Mr. Jones “have a 20 percent chance of committing an act of violence” after release. Of the psychiatrists who read this version, 21 percent said they would refuse to release the patient.
The wording of the second version of the assessment was changed very slightly. It is estimated, the assessment said, that “20 out of every 100 patients similar to Mr. Jones” will be violent after release. Of course, “20 percent” and “20 out of every 100” mean the same thing. But 41 percent of the psychiatrists who read this second version said they would keep the patient confined, so an apparently trivial change in wording boosted the refusal rate by almost 100 percent. How is that possible? The explanation lies in the emotional content of the phrase “20 percent.” It’s hollow, abstract, a mere statistic. What’s a “percent”? Can I see a “percent”? Can I touch it? No. But “20 out of every 100 patients” is very concrete and real. It invites you to see a person. And in this case, the person is committing violent acts. The inevitable result of this phrasing is that it creates images of violence—“some guy going crazy and killing someone,” as one person put it in post-experiment interviews—which make the risk feel bigger and the patient’s incarceration more necessary.
People in the business of public opinion are only too aware of the influence that seemingly minor linguistic changes can have. Magnetic resonance imaging (MRI), for example, was originally called “nuclear magnetic resonance imaging” but the “nuclear” was dropped to avoid tainting a promising new technology with a stigmatized word. In politics, a whole industry of consultants has arisen to work on language cues like these—the Republican Party’s switch from “tax cuts” and “estate tax” to “tax relief” and “death tax” being two of its more famous fruits.
The Good-Bad Rule can also wreak havoc on our rational appreciation of probabilities. In a series of experiments conducted by Yuval Rottenstreich and Christopher Hsee, then with the Graduate School of Business at the University of Chicago, students were asked to imagine choosing between $50 cash and a chance to kiss their favorite movie star. Seventy percent said they’d take the cash. Another group of students was asked to choose between a 1 percent chance of winning $50 cash and a 1 percent chance of kissing their favorite movie star. The result was almost exactly the reverse: 65 percent chose the kiss. Rottenstreich and Hsee saw the explanation in the Good-Bad Rule: The cash carries no emotional charge, so a 1 percent chance to win $50 feels as small as it really is; but even an imagined kiss with a movie star stirs feelings that cash does not, so a 1 percent chance of such a kiss looms larger.
Rottenstreich and Hsee conducted further variations of this experiment that came to the same conclusion. Then they turned to electric shocks. Students were divided into two groups, with one group told the experiment would involve some chance of a $20 loss and the other group informed that there was a risk of “a short, painful but not dangerous shock.” Again, the cash loss is emotionally neutral. But the electric shock is truly nasty. Students were then told the chance of this bad thing happening was either 99 percent or 1 percent. So how much would you pay to avoid this risk?
When there was a 99 percent chance of losing $20, they said they would pay $18 to avoid this almost-certain loss. When the chance dropped to 1 percent, they said they would pay just one dollar to avoid the risk. Any economist would love that result. It’s a precise and calculated response to probability, perfect rationality. But the students asked to think of an electric shock did something a little different. Faced with a 99 percent chance of a shock, they said they would pay $10 to stop it. But when the risk was 1 percent, they were willing to pay $7 to protect themselves. Clearly, the probability of being zapped had almost no influence. What mattered is that the risk of being shocked is nasty—and they felt it.
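A rough sketch of the arithmetic (my own, assuming the students priced the risk at something near its expected loss) shows why the cash answers look so tidy and the shock answers so strange.

```latex
% Expected loss = probability times loss. The cash answers track it almost exactly;
% the shock answers barely move when the probability collapses.
\[
  0.99 \times \$20 = \$19.80 \;\;(\text{students offered } \$18), \qquad
  0.01 \times \$20 = \$0.20 \;\;(\text{students offered } \$1)
\]
\[
  \text{Shock: } \$10 \text{ offered at a } 99\% \text{ chance}
  \quad \text{vs.} \quad \$7 \text{ offered at a } 1\% \text{ chance}
\]
```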
Plenty of other research shows that even when we are calm, cool, and thinking carefully, we aren’t naturally inclined to look at the odds. Should I buy an extended warranty for my new giant-screen television? The first and most important question I should ask is how likely it is to break down and need repair, but research suggests there’s a good chance I won’t even think about that. And if I do, I won’t be entirely logical about it. Certainty, for example, has been shown to have outsize influence on how we judge probabilities: A change from 100 percent to 95 percent carries far more weight than a decline from 60 percent to 55 percent, while a jump from 0 percent to 5 percent will loom like a giant over a rise from 25 percent to 30 percent. This focus on certainty helps explain our unfortunate tendency to think of safety in black-and-white terms—something is either safe or unsafe—when, in reality, safety is almost always a shade of gray.
And all this is true when there’s no fear, anger, or hope involved. Toss in a strong emotion and people can easily become—to use a term coined by Cass Sunstein—“probability blind.” The feeling simply sweeps the numbers away. In a survey, Paul Slovic asked people if they agreed or disagreed that a one-in-10-million lifetime risk of getting cancer from exposure to a chemical was too small to worry about. That’s an incredibly tiny risk—far less than the lifetime risk of being killed by lightning and countless other risks we completely ignore. Still, one-third disagreed; they would worry. That’s probability blindness. The irony is that probability blindness is itself dangerous. It can easily lead people to overreact to risks and do something stupid like abandoning air travel because terrorists hijacked four planes.
It’s not just the odds that can be erased from our minds by the Good-Bad Rule. Costs can vanish, too. “It’s worth it if even one life is saved,” we often hear said of some new program or regulation designed to reduce a risk. That may be true, or it may not. If, for example, the program costs $100 million and saves one life, it is almost certainly not worth it, because there are many other ways $100 million could be spent that would certainly save more than one life.
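The opportunity-cost logic here is easy to make explicit. In the sketch below, the $100 million program that saves one life is the example from the paragraph above; the alternative programs and their figures are hypothetical, included only to show how cost per life saved is compared.

```python
# Hypothetical cost-effectiveness comparison. The $100 million program saving
# one life comes from the text; the alternatives and their numbers are
# invented solely to illustrate the opportunity-cost argument.
programs = {
    "new regulation":             (100_000_000, 1),
    "hypothetical alternative A": (100_000_000, 50),
    "hypothetical alternative B": (100_000_000, 200),
}

for name, (cost, lives_saved) in programs.items():
    print(f"{name}: ${cost / lives_saved:,.0f} per life saved")
```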
This sort of cost-benefit analysis is itself a big and frighteningly complex field. One of the many important insights it has produced is that, other things being equal, “wealthier is healthier”: the more money people and nations have, the healthier and safer they tend to be. Disaster-relief workers see this maxim in operation every time there is a major earthquake. People aren’t killed by earthquakes; they are killed by buildings that collapse in earthquakes, and so the flimsier the buildings, the more likely people are to die. This is why earthquakes of the same magnitude may kill dozens in California but hundreds of thousands in Iran, Pakistan, or India. The disparity can be seen even within a single city. When a massive earthquake struck Kobe, Japan, in 1995, killing 6,200 people, the victims were not randomly distributed across the city and region. They were overwhelmingly people living in poor neighborhoods.
Government regulations can reduce risk and save lives. California’s buildings are as tough as they are in part because building codes require them to be. But regulations can also impose costs on economic activity, and since wealthier is healthier, economic costs can, if they are very large, put more lives at risk than they keep safe. Many researchers have tried to estimate how much regulatory cost is required to “take the life” of one person, but the results are controversial. What’s broadly accepted, however, is the idea that regulations can inflict economic costs and economic costs can reduce health and safety. We have to account for that if we want to be rational about risk.
We rarely do, of course. As political scientist Howard Margolis describes in Dealing with Risk, the public often demands action on a risk without giving the slightest consideration to the costs of that action. When circumstances force us to confront those costs, however, we may change our minds in a hurry. Margolis cites the case of asbestos in New York City’s public schools, which led to a crisis in 1993 when the start of the school year had to be delayed several weeks because work to assess the perceived danger dragged on into September. Parents had overwhelmingly supported this work. Experts had said the actual risk to any child from asbestos was tiny, especially compared to the myriad other problems poor kids in New York faced, and the cost would be enormous. But none of that mattered. Like the cancer it can cause, asbestos has the reputation of a killer. It triggers the Good-Bad Rule, and once that happens, everything else is trivial. “Don’t tell us to calm down!” one parent shouted at a public meeting. “The health of our children is at stake.”
But when the schools failed to open in September, it was a crisis of another kind for the parents. Who was going to care for their kids? For poor parents counting on the schools opening when they always do, it was a serious burden. “Within three weeks,” Margolis writes, “popular sentiment was overwhelmingly reversed.”
Experiences like these, along with research on the role of emotion in judgment, have led Slovic and other risk researchers to draw several conclusions. One is that experts are wrong to think they can ease fears about a risk simply by “getting the facts out.” If an engineer tells people they shouldn’t worry because the chance of the reactor melting down and spewing vast radioactive clouds that would saturate their children and put them at risk of cancer . . . well, they won’t be swayed by the odds. Only the rational mind—Head—cares about odds and, as we have seen, most people are not accustomed to the effort required for Head to intervene and correct Gut. Our natural inclination is to go with our intuitive judgment.
Another important implication of the Good-Bad Rule is something it shares with the Rule of Typical Things: It makes us vulnerable to scary scenarios. Consider the story told by the Bush administration in support of the invasion of Iraq. It was possible Saddam Hussein would seek to obtain the materials to build nuclear weapons. It was possible he would start a nuclear weapons program. It was possible the program would successfully create nuclear weapons. It was possible Saddam would give those weapons to terrorists. It was possible that terrorists armed with nukes would seek to detonate them in an American city, and it was possible they would succeed. All these things were possible, but a rational assessment of this scenario would examine the odds of each of these events occurring on the understanding that if even one of them failed to occur, the final disaster would not happen. But that’s not how Gut would analyze it with the Good-Bad Rule. It would start at the other end—an American city reduced to radioactive rubble, hundreds of thousands dead, hundreds of thousands more burned and sick—and it would react. This is an Awful Thing. And that feeling would not only color the question of whether this is likely or not, it would overwhelm it, particularly if the scenario were described in vivid language—language such as the White House’s oft-repeated line, “We don’t want the smoking gun to be a mushroom cloud.”
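The arithmetic Gut skips is the multiplication of the links in the chain. The sketch below assigns purely hypothetical probabilities to each step (they are not estimates from this chapter or anywhere else) simply to show how quickly a conjunction of individually “possible” events shrinks.

```python
import math

# Purely hypothetical probabilities for each link in the chain (not estimates
# from the text), included only to show how a conjunction of individually
# "possible" events compounds.
steps = {
    "seeks nuclear material":         0.5,
    "starts a weapons program":       0.5,
    "program produces a weapon":      0.3,
    "weapon is handed to terrorists": 0.2,
    "terrorists attempt an attack":   0.5,
    "the attack succeeds":            0.3,
}

joint = math.prod(steps.values())
print(f"joint probability = {joint:.5f} ({joint:.2%})")

# With these numbers the whole chain comes out near a quarter of one percent,
# even though no single step was given worse than a one-in-five chance.
# Gut never does this multiplication; it reacts to the final image instead.
```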
Like terrorists armed with a nuclear weapon, an asteroid can also flatten a city. But asteroids are only rocks. They are not wrapped in the cloak of evil as terrorists are, nor are they stigmatized like cancer, asbestos, or nuclear power. They don’t stir any particular emotion, and so they don’t engage the Good-Bad Rule and overwhelm our sense of how very unlikely they are to hurt us. The Example Rule doesn’t help, either. The only really massive asteroid impact in the modern era was the Tunguska event, which happened a century ago in a place so remote only a handful of people saw it. There have been media reports of “near misses” and a considerable amount of attention paid to astronomers’ warnings, but while these may raise conscious awareness of the issue, they’re very different from the kind of concrete experience our primal brains are wired to respond to. Many people also know of the theory that an asteroid wiped out the dinosaurs, but that’s no more real and vivid in our memories than the Tunguska event, and so the Example Rule would steer Gut to conclude that the risk is tinier than it actually is.
There is simply nothing about asteroids that could make Gut sit up and take notice. We don’t feel the risk. For that reason, Paul Slovic told the astronomers at the Tenerife conference, “It will be hard to generate concern about asteroids unless there is an identifiable, certain, imminent, dreadful threat.” And of course, when there is an identifiable, certain, imminent, dreadful threat, it will probably be too late to do anything about it.
Still, does that matter? It is almost certain that the earth will not be hit by a major asteroid in our lifetime or that of our children. If we don’t take the astronomers’ advice and buy a planetary insurance policy, we’ll collectively save a few bucks and we will almost certainly not regret it. But still—it could happen. And the $400 million cost of the insurance policy is very modest relative to how much we spend coping with other risks. For that reason, Richard Posner, a U.S. appeals court judge and public intellectual known for his hard-nosed economic analysis, thinks the astronomers should get their funding. “The fact that a catastrophe is very unlikely to occur is not a rational justification for ignoring the risk of its occurrence,” he wrote.
The particular catastrophe that prompted Posner to write those words wasn’t an asteroid strike, however. It was the Indian Ocean tsunami of 2004. Such an event had not happened in the region in all of recorded history, and the day before it actually occurred, experts would have said it almost certainly would not happen in our lifetime or that of our children. But experts would also have said—and in fact did say, in several reports—that a tsunami warning system should be created in the region because the cost is modest. The experts were ignored and 230,000 people died.
That disaster occurred three weeks after the Canary Islands conference on asteroids ended. Just hours after waves had scoured coastlines from Indonesia to Thailand and Somalia, Slava Gusiakov, a Russian expert on tsunamis who had attended the conference, sent an emotional e-mail to colleagues. “We were repeatedly saying the words low-probability/high-consequence event,” he wrote. “It just happened.”