6
The Herd Senses Danger
You are a bright, promising young professional and you have been chosen to participate in a three-day project at the Institute of Personality Assessment and Research at the University of California in sunny Berkeley. The researchers say they are interested in personality and leadership and so they have brought together an impressive group of one hundred to take a closer look at how exemplary people like you think and act.
A barrage of questions, tests, and experiments follows, including one exercise in which you are asked to sit in a cubicle with an electrical panel. Four other participants sit in identical cubicles next to you, although you cannot see each other. You are told that slides posing questions will appear on the panel and that you can answer using the switches. Each of the panels is connected to the others so you can all see one another’s answers, although you cannot discuss them. The order in which you answer will vary.
The questions are simple enough at first. Geometric shapes appear and you are asked to judge which is larger. At the beginning, you are the first person directed to respond. Then you are asked to be the second to answer, which allows you to see the first person’s response before you give yours. Then you move to the number-three spot. There’s nothing that takes any careful consideration at this point, so things move along quickly.
Finally, you are the last of the group to answer. A slide appears with five lines on it. Which line is longest? It’s obvious the longest is number 4 but you have to wait before you can answer. The first person’s answer pops up on your screen: number 5. That’s odd, you think. You look carefully at the lines. Number 4 is obviously longer than number 5. Then the second answer appears: number 5. And the third answer: number 5. And the fourth: number 5.
Now it’s your turn to answer. What will it be?
You clearly see that everyone is wrong. You shouldn’t hesitate to flip the switch for number 4. And yet there’s a good chance you won’t. When this experiment was conducted by Richard Crutchfield and colleagues in the spring of 1953, fifteen people out of fifty ignored what they saw and went with the consensus.
Crutchfield’s work was a variation on experiments conducted by Solomon Asch in the same era. In one of psychology’s most famous experiments, Asch had people sit together in groups and answer questions that supposedly tested visual perception. Only one person was the actual subject of the experiment, however. All the others were instructed, in the later stages, to give answers that were clearly wrong. In total, the group gave incorrect answers twelve times. Three-quarters of Asch’s test subjects abandoned their own judgment and went with the group at least once. Overall, people conformed to an obviously false group consensus one-third of the time.
We are social animals and what others think matters deeply to us. The group’s opinion isn’t everything; we can buck the trend. But even when the other people involved are strangers, even when we are anonymous, even when dissenting will cost us nothing, we want to agree with the group.
And that’s when the answer is instantly clear and inarguably true. Crutchfield’s experiment involved slightly more ambiguous questions, including one in which people were asked if they agreed with the statement “I believe we are made better by the trials and hardships of life.” Among subjects in a control group that was not exposed to the answers of others, everyone agreed. But among those in the experiment who thought that everyone else disagreed with the statement, 31 percent said they did not agree. Asked whether they agreed with the statement “I doubt whether I would make a good leader,” every person in the control group rejected it. But when the group was seen to agree with the statement, 37 percent of people went along with the consensus and agreed that they doubted themselves.
Crutchfield also designed three questions that had no right answer. They included a series of numbers that subjects were asked to complete, which was impossible because the numbers were random. In that case, 79 percent of participants did not guess or otherwise struggle to come up with their own answer. They simply went with what the group said.
These studies of conformity are often cited to cast humans as sheep, and it certainly is disturbing to see people set aside what they clearly know to be correct and say what they know to be false. That’s all the more true from the perspective of the early 1950s, when Asch and Crutchfield conducted their classic experiments. The horror of fascism was a fresh memory and communism was a present threat. Social scientists wanted to understand why nations succumbed to mass movements, and in that context it was chilling to see how easy it is to make people deny what they see with their own eyes.
But from an evolutionary perspective, the human tendency to conform is not so strange. Individual survival depended on the group working together, and cooperation is much more likely if people share a desire to agree. A band of doubters, dissenters, and proud nonconformists would not do so well hunting and gathering on the plains of Africa.
Conformity is also a good way to benefit from the pooling of information. One person knows only what he knows, but thirty people can draw on the knowledge and experience of thirty, and so when everyone else is convinced there are lions in the tall grass it’s reasonable to set aside your doubts and take another route back to camp. The group may be wrong, of course. The collective opinion may have been unduly influenced by one person’s irrational opinion or by bad or irrelevant information. But still, other things being equal, it’s often best to follow the herd.
It’s tempting to think things have changed. The explosion of scientific knowledge over the last five centuries has provided a new basis for making judgments that is demonstrably superior to personal and collective experience. And the proliferation of media in the last several decades has made that knowledge available to anyone. There’s no need to follow the herd. We can all be fully independent thinkers now.
Or rather, we can be fully independent thinkers if we understand the following sentence, plucked from the New England Journal of Medicine: “In this randomized, multicenter study involving evaluators who were unaware of treatment assignments, we compared the efficacy and safety of posaconazole with those of fluconazole or itraconazole as prophylaxis for patients with prolonged neutropenia.” And this one from a physics journal: “We evaluate the six-fold integral representation for the second-order exchange contribution to the self-energy of a dense three-dimensional electron gas on the Fermi surface.” And then there’s this fascinating insight from a journal of cellular biology: “Prior to microtubule capture, sister centromeres resolve from one another, coming to rest on opposite surfaces of the condensing chromosome.”
Clearly, today’s fully independent thinker will have to have a thorough knowledge of biology, physics, medicine, chemistry, geology, and statistics. He or she will also require an enormous amount of free time. Someone who wants to independently decide how risky it is to suntan on a beach, for example, will find there are thousands of relevant studies. It would take months of reading and consideration in order to draw a conclusion about this one simple risk. Thus if an independent thinker really wishes to form entirely independent judgments about the risks we face in daily life, or even just those we hear about in the news, he or she will have to obtain multiple university degrees, quit his or her job, and do absolutely nothing but read about all the ways he or she may die until he or she actually is dead.
Most people would find that somewhat impractical. For them, the only way to tap the vast pools of scientific knowledge is to rely on the advice of experts—people who are capable of synthesizing information from at least one field and making it comprehensible to a lay audience. This is preferable to getting your opinions from people who know as little as you do, naturally, but it too has limitations. For one thing, experts often disagree. Even when there’s widespread agreement, there will still be dissenters who make their case with impressive statistics and bewildering scientific jargon.
Another solution is to turn to intermediaries—those who are not experts themselves but claim to understand the science. Does abortion put a woman’s health at risk? There’s heaps of research on the subject. Much of it is contradictory. All of it is complicated. But when I look at the Web site of Focus on the Family, a conservative lobby group that wants abortion banned, I see that the research quite clearly proves that abortion does put a woman’s health at risk. Studies are cited, statistics presented, scientists quoted. But then when I look at the Web site of the National Abortion Rights Action League (NARAL), a staunchly pro-choice lobby group, I discover that the research indisputably shows abortion does not put a woman’s health at risk. Studies are cited, statistics presented, scientists quoted.
Now, if I happened to trust NARAL or Focus on the Family, I might decide that their opinion is good enough for me. But a whole lot of people would look at this differently. NARAL and Focus on the Family are lobby groups pursuing political agendas, they would think. Why should I trust either of them to give me a disinterested assessment of the science? As Homer Simpson sagely observed in an interview with broadcaster Kent Brockman, “People can come up with statistics to prove anything, Kent. Forty percent of all people know that.”
There’s something to be said for this perspective. On important public issues, we constantly encounter analyses that are outwardly impressive—lots of numbers and references to studies—that come to radically different conclusions even though they all claim to be portraying the state of the science. And these analyses have a suspicious tendency to come to exactly the conclusions that those doing the analyzing find desirable. Name an issue, any issue. Somewhere there are lobbyists, activists, and ideologically driven newspaper pundits who would be delighted to provide you with a rigorous and objective evaluation of the science that just happens to prove that the interest, agenda, or ideology they represent is absolutely right. So, yes, skepticism is warranted.
But Homer Simpson isn’t merely skeptical. He is cynical. He denies the very possibility of knowing the difference between true and untrue, between the more accurate and the less. And that’s just wrong. It may take a little effort to prove that the statistic Homer cites is fabricated, but it can be done. The truth is out there, to quote another staple of 1990s television.
Along with truth, cynicism endangers trust. And that can be dangerous. Researchers have found that when the people or institutions handling a risk are trusted, public concern declines: It matters a great deal whether the person telling you not to worry is your family physician or a tobacco company spokesman. Researchers have also shown, as wise people have always known, that trust is difficult to build and easily lost. So trust is vital.
But trust is disappearing fast. In most modern countries, political scientists have found a long-term decline in public trust of various authorities. The danger here is that we will collectively cross the line separating skepticism from cynicism. Where a reasonable respect for expertise is lost, people are left to search for scientific understanding on Google and in Internet chat rooms, and the sneer of the cynic may mutate into unreasoning, paralyzing fear. That end state can be seen in the anti-vaccination movements growing in the United States, Britain, and elsewhere. Fueled by distrust of all authority, anti-vaccination activists rail against the dangers of vaccinating children (some imaginary, some real-but-rare) while ignoring the immense benefits of vaccination—benefits that could be lost if these movements continue to grow.
This same poisonous distrust is on display in John Weingart’s Waste Is a Terrible Thing to Mind, an account of Weingart’s agonizing work as the head of a New Jersey board given the job of finding a site for a low-level radioactive waste disposal facility. Experts agreed that such a facility is not a serious hazard, but no one wanted to hear that. “At the Siting Board’s open houses,” writes Weingart, who is now a political scientist at Rutgers University, “people would invent scenarios and then dare Board members and staff to say they were impossible. A person would ask, ‘What would happen if a plane crashed into a concrete bunker filled with radioactive waste and exploded?’ We would explain that while the plane and its contents might explode, nothing in the disposal facility could. And they would say, ‘But what if explosives had been mistakenly disposed of, and the monitoring devices at the facility had malfunctioned so they weren’t noticed?’ We would head down the road of saying that this was an extremely unlikely set of events. And they would say, ‘Well, it could happen, couldn’t it?’”
Fortunately, we have not entirely abandoned trust, and experts can still have great influence on public opinion, particularly when they manage to forge a consensus among themselves. Does HIV cause AIDS? For a long time, there were scientists who said it did not, but the overwhelming majority said it did. The public heard and accepted the majority view. The same scenario is playing out now with climate change—most people in every Western country agree that man-made climate change is real, not because they’ve looked into the science for themselves, but because they know that’s what most scientists think. But as Howard Margolis describes in Dealing with Risk, scientists can also find themselves resoundingly ignored when their views go against strong public feelings. Margolis notes that the American Physical Society—an association of physicists—easily convinced the public that cold fusion didn’t work, but it had no impact when it issued a positive report on the safety of high-level nuclear waste disposal.
So scientific information and the opinions of scientists can certainly play a role in how people judge risks, but—as the continued divisions between expert and lay opinion demonstrate—they aren’t nearly as influential as scientists and officials might like. We remain a species powerfully influenced by the unconscious mind and its tools—particularly the Example Rule, the Good-Bad Rule, and the Rule of Typical Things. We also remain social animals who care about what other people think. And if we aren’t sure whether we should worry about this risk or that, whether other people are worried makes a huge difference.
“Imagine that Alan says that abandoned hazardous waste sites are dangerous, or that Alan initiates protest action because such a site is located nearby,” writes Cass Sunstein in Risk and Reason. “Betty, otherwise skeptical or in equipoise, may go along with Alan; Carl, otherwise an agnostic, may be convinced that if Alan and Betty share the relevant belief, the belief must be true. It will take a confident Deborah to resist the shared judgments of Alan, Betty and Carl. The result of these sets of influences can be social cascades, as hundreds, thousands or millions of people come to accept a certain belief because of what they think other people believe.”
Of course it’s a big leap from someone in a laboratory going along with the group answer on meaningless questions to “hundreds, thousands or millions of people” deciding that something is dangerous simply because that’s what other people think. After all, people in laboratory experiments know their answers don’t really matter. They won’t be punished if they make mistakes, and they won’t be rewarded for doing well. But in the real world, our views do matter. For one thing, we are citizens of democracies in which popular opinion influences how governments respond—or don’t respond—to risks. More concretely, what we think about risks can be critical in our personal lives. Will you support the creation of a hazardous waste site in your region? If it goes ahead, it means jobs and economic growth. But it could also mean a threat to the health of your family. Will you shrug off the threat of pedophiles and hoodlums and let your kids play unsupervised in the neighborhood park? Will you eat foods with trans fats? Each of these questions matters far more than whether line number 4 is longer than line number 5, so it’s hard to believe that on these important matters we would so casually abandon our own judgment and follow the herd.
In fact, there is some evidence that the importance of these questions would change the rate at which we conform to the group view. It would increase it. Substantially.
Psychologists at the University of Iowa—Robert Baron, Joseph Vandello, and Bethany Brunsman—designed an experiment in which they told participants they were testing the accuracy of eyewitness identifications. The testing would involve slides appearing briefly and questions based on what was shown: Was the man wearing glasses? Did you see the man in the police lineup? Three people took the test each round. This was a ruse, of course. Two of the three test-takers were actually part of the real experiment, which was to see if people would go along with the false answers they gave.
Two new variables set this experiment apart from the earlier research on conformity. In some trials, the slides were shown repeatedly or they were left on the screen for a full five seconds. The researchers knew from earlier testing that people almost never make mistakes under those conditions. But in other trials, the slide would flash onscreen only briefly, making the questions much harder to answer. The researchers also gave participants different background information about the purpose of the experiment. Some were told it was merely a pilot study intended to get a general sense of how people perceive things that might be used sometime in the future “to develop a test that lets us know how accurate people are as witnesses.” This was the “low importance” version of the experiment. But others were told the researchers had already developed the “Iowa Witness Identification Test” that “will soon be adopted by police departments and courtrooms in cases involving eyewitness testimony,” the researchers said. “Over the next few weeks, we will be running research participants such as yourselves through the revised I.W.I.T. to establish reliable norms. As a result, we hope that you try your best on this test because establishing accurate norms is crucial. Most people do try hard on this test because they are interested in seeing how good their eyewitness accuracy is compared to others. But, to increase your interest in doing well on this test, we will be awarding prizes of $20 at the end of the experimental testing period to the participants who score the highest in accuracy.” This was the “high importance” condition.
The first results were an almost exact duplicate of the original conformity experiments: When the task was easy and people thought the experiment was “low importance,” one-third abandoned their own judgment and conformed to the group answer. Then came the “easy task/high importance” version. The researchers expected conformity would fall under those conditions, and it did. But it didn’t disappear: Between 13 percent and 16 percent still followed the group.
Things got intriguing when the questions became harder to answer. Among those who thought the test was “low importance,” a minority conformed to the group, just as they did when the questions were easy to answer. But when the test was “high importance,” conformity actually went up. The researchers also found that under those conditions, people became more confident about the accuracy of their group-influenced answers. “Our data suggest,” wrote the researchers, “that so long as the judgments are difficult or ambiguous, and the influencing agents are united and confident, increasing the importance of accuracy will heighten confidence as well as conformity—a dangerous combination.”
Judgments about risk are often difficult and important. If Baron, Vandello, and Brunsman are right, those are precisely the conditions under which people are most likely to conform to the views of the group and feel confident that they are right to do so.
But surely, one might think, an opinion based on nothing more than the uninformed views of others is a fragile thing. We are exposed to new information every day. If the group view is foolish, we will soon come across evidence that will make us doubt our opinions. The blind can’t go on leading the blind for long, can they?
Unfortunately, psychologists have discovered another cognitive bias that suggests that, in some circumstances, the blind can actually lead the blind indefinitely. It’s called confirmation bias and its operation is both simple and powerful. Once we have formed a view, we embrace information that supports that view while ignoring, rejecting, or harshly scrutinizing information that casts doubt on it. Any belief will do. It makes no difference whether the thought is about trivia or something important. It doesn’t matter if the belief is the product of long and careful consideration or something I believe simply because everybody else in the Internet chat room said so. Once a belief is established, our brains will seek to confirm it.
In one of the earliest studies on confirmation bias, psychologist Peter Wason simply showed people a sequence of three numbers—2, 4, 6—and told them the sequence followed a certain rule. The participants were asked to figure out what that rule was. They could do so by writing down three more numbers and asking if they were in line with the rule. Once you think you’ve figured out the rule, the researchers instructed, say so and we will see if you’re right.
It seems so obvious that the rule the numbers are following is “even numbers increasing by two.” So let’s say you were to take the test. What would you say? Obviously, your first step would be to ask: “What about 8, 10, 12? Does that follow the rule?” And you would be told, yes, that follows the rule.
Now you are really suspicious. This is far too easy. So you decide to try another set of numbers. Does “14, 16, 18” follow the rule? It does.
At this point, you want to shout out the answer—the rule is even numbers increasing by two!—but you know there’s got to be a trick here. So you decide to ask about another three numbers: 20, 22, 24. Right, again!
Most people who take this test follow exactly this pattern. Every time they guess, they are told they are right and so, it seems, the evidence that they are right piles up. Naturally, they become absolutely convinced that their initial belief is correct. Just look at all the evidence! And so they stop the test and announce that they have the answer: It is “even numbers increasing by two.”
And they are told that they are wrong. That is not the rule. The correct rule is actually “any three numbers in ascending order.”
Why do people get this wrong? It is very easy to figure out that the rule is not “even numbers increasing by two.” All they have to do is try to disconfirm that the rule is even numbers increasing by two. They could, for example, ask if “5, 7, 9” follows the rule. Do that and the answer would be, yes, it does—which would instantly disconfirm the hypothesis. But most people do not try to disconfirm. They do the opposite, trying to confirm the rule by looking for examples that fit it. That’s a futile strategy. No matter how many examples are piled up, they can never prove that the belief is correct. Confirmation doesn’t work.
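To make the logic concrete, here is a minimal sketch in Python (the function names are mine, purely illustrative, not anything from Wason’s study). It shows why the confirming strategy can never expose the error: every triple that fits “even numbers increasing by two” also fits the true rule, “any three numbers in ascending order,” so each confirming test comes back “yes.” Only a probe that violates the guessed rule, such as 5, 7, 9, can reveal the difference.

```python
# Illustrative sketch of Wason's 2-4-6 task (names are mine, not Wason's).

def true_rule(a, b, c):
    """The experimenter's actual rule: any three numbers in ascending order."""
    return a < b < c

def guessed_rule(a, b, c):
    """The rule most subjects assume: even numbers increasing by two."""
    return a % 2 == 0 and b == a + 2 and c == b + 2

# Confirming probes: each fits the guessed rule, so each also fits the true
# rule and gets a "yes" -- which can never tell the two rules apart.
for triple in [(8, 10, 12), (14, 16, 18), (20, 22, 24)]:
    assert guessed_rule(*triple) and true_rule(*triple)

# A disconfirming probe: 5, 7, 9 breaks the guessed rule but satisfies the
# true rule, so a single "yes" here is enough to show the guess is wrong.
print(true_rule(5, 7, 9))     # True  -> the real rule accepts it
print(guessed_rule(5, 7, 9))  # False -> so "even numbers increasing by two" cannot be the rule
```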
Unfortunately, seeking to confirm our beliefs comes naturally, while it feels strange and counterintuitive to look for evidence that contradicts our beliefs. Worse still, if we happen to stumble across evidence that runs contrary to our views, we have a strong tendency to belittle or ignore it. In 1979—when capital punishment was a top issue in the United States—American researchers brought together equal numbers of supporters and opponents of the death penalty. The strength of their views was tested. Then they were asked to read a carefully balanced essay that presented evidence that capital punishment deters crime and evidence that it does not. The researchers then retested people’s opinions and discovered that the opinions had only grown stronger. They had absorbed the evidence that confirmed their views, ignored the rest, and left the experiment even more convinced that they were right and those who disagreed were wrong.
Peter Wason coined the term “confirmation bias,” and countless studies have borne out his discovery—or rather, his demonstration of a tendency thoughtful observers have long noted. Almost four hundred years ago, Sir Francis Bacon wrote that “the human understanding when it has once adopted an opinion (either as being a received opinion or as being agreeable to itself) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate....” Wise words proven true every day by countless pundits and bloggers.
The power of confirmation bias should not be underestimated. During the U.S. presidential election of 2004, a team of researchers led by Drew Westen at Emory University brought together thirty committed partisans—half Democrats, half Republicans—and had them lie in magnetic resonance imaging (MRI) machines. While their brains were being scanned, they were shown a series of three statements by or about George W. Bush. The second statement contradicted the first, making Bush look bad. Participants were asked whether the statements were inconsistent and were then asked to rate how inconsistent they were. A third statement then followed that provided an excuse for the apparent contradiction between the statements. Participants were asked if perhaps the statements were not as inconsistent as they first appeared. And finally, they were again asked to rate how inconsistent the first two statements were. The experiment was repeated with John Kerry as the focus and a third time with a neutral subject.
The superficial results were hardly surprising. When Bush supporters were confronted with Bush’s contradictory statements, they rated them to be less contradictory than Kerry supporters did. And when the explanation was provided, Bush supporters considered it to be much more satisfactory than did Kerry supporters. When the focus was on John Kerry, the results reversed. There was no difference between Republicans and Democrats when the neutral subject was tested.
All this was predictable. Far more startling, however, was what showed up on the MRI. When people processed information that ran against their strongly held views—information that made their favored candidate look bad—they actually used different parts of the brain than they did when they processed neutral or positive information. It seems confirmation bias really is hardwired in each of us, and that has enormous consequences for how opinions survive and spread.
Someone who forms a belief based on nothing more than the fact that other people around him hold that belief nonetheless has a belief. That belief causes confirmation bias to kick in, and incoming information is screened: If it supports the belief, it is readily accepted; if it goes against the belief, it is ignored, scrutinized carefully, or flatly rejected. Thus, if the information that turns up in newspapers, on televisions, and in conversation is mixed—and it very often is when risk is involved—confirmation bias will steadily strengthen a belief that originally formed only because it’s what everybody else was saying during a coffee break last week.
That’s on the individual level. What happens when people who share a belief get together to discuss it? Psychologists know the answer to that, and it’s not pretty. They call it group polarization.
It seems reasonable to think that when like-minded people get together to discuss a proposed hazardous waste site or the breast implants they believe are making them sick or some other risk, their views will tend to coalesce around the average within the group. But they won’t. Decades of research have proved that groups usually come to conclusions that are more extreme than the average view of the individuals who make up the group. When opponents of a hazardous waste site gather to talk about it, they will become convinced the site is more dangerous than they originally believed. When a woman who believes breast implants are a threat gets together with women who feel the same way, she and all the women in the meeting are likely to leave believing they had previously underestimated the danger. The dynamic is always the same. It doesn’t matter what the subject under discussion is. It doesn’t matter what the particular views are. When like-minded people get together and talk, their existing views tend to become more extreme.
In part, this strange human foible stems from our tendency to judge ourselves by comparison with others. When we get together in a group of like-minded people, what we share is an opinion that we all believe to be correct and so we compare ourselves with others in the group by asking “How correct am I?” Inevitably, most people in the group will discover that they do not hold the most extreme opinion, which suggests they are less correct than others. And so they become more extreme. Psychologists confirmed this theory when they put people in groups and had them state their views without providing reasons why—and polarization still followed.
A second force behind group polarization is simple numbers. Prior to going to a meeting of people who believe silicone breast implants cause disease, a woman may have read several articles and studies on the subject. But because the people at the meeting greatly outnumber her, they will likely have information she was not aware of. Maybe it’s a study suggesting implants cause a disease she has never heard of, or it’s an article portraying the effects of implant-caused diseases as worse than she knew. Whatever it is, it will lead her to conclude the situation is worse than she had thought. As this information is pooled, the same process happens to everyone else in the meeting, with people becoming convinced that the problem is bigger and scarier than they had thought. Of course, it’s possible that people’s views could be moderated by hearing new information that runs in the opposite direction—an article by a scientist denying that implants cause disease, for example. But remember confirmation bias: Every person in that meeting is prone to accepting information that supports their opinion and ignoring or rejecting information that does not. As a result, the information that is pooled at the meeting is deeply biased, making it ideal for radicalizing opinions. Psychologists have also demonstrated that because this sort of polarization is based on information-sharing alone, it does not require anything like a face-to-face conversation—a fact amply demonstrated every day on countless political blogs.
So Alan convinces Betty, and that persuades Carl, which then settles it for Deborah. Biased screening of information begins and opinions steadily strengthen. Organizations are formed, information exchanged. Views become more extreme. And before you know it, as Cass Sunstein wrote, there are “hundreds, thousands or millions of people” who are convinced they are threatened by some new mortal peril. Sometimes they’re right. It took only a few years for almost everyone to be convinced that AIDS was a major new disease. But they can also be very wrong. As we saw, it wasn’t science that transformed the popular image of silicone breast implants from banal objects to toxic killers.
Reasonable or not, waves of worry can wash over communities, regions, and nations, but they cannot roll on forever. They follow social networks and so they end where those networks end—which helps explain why the panic about silicone breast implants washed across the United States and Canada (which also banned the implants) but caused hardly a ripple in Europe.
The media obviously play a key role in getting waves started and keeping them rolling because groups make their views known through more than conversations and e-mail. Groups also speak through the media, explicitly but also implicitly. Watch any newscast, read any newspaper: Important claims about hazards—heroin is a killer drug, pollution causes cancer, the latest concern is rapidly getting worse—will simply be stated as true, without supporting evidence. Why? Because they are what “everybody knows” is true. They are, in other words, group opinions. And like all group opinions, they exert a powerful influence on the undecided.
The media also respond to rising worry by producing more reports—almost always emotional stories of suffering and loss—about the thing that has people worried. And that causes the Guts of readers and viewers to sit up and take notice. Remember the Example Rule? The easier it is to recall examples of something happening, Gut believes, the more likely it is to happen. Growing concern about silicone breast implants prompted more stories about women with implants and terrible illnesses. Those stories raised the public’s intuitive estimate of how dangerous silicone breast implants are. Concern continued to grow. And that encouraged the media to produce more stories about sick women with implants. More fear, more reporting. More reporting, more fear. Like a microphone held too close to a loudspeaker, modern media and the primal human brain create a feedback loop.
“Against this background,” writes Cass Sunstein, “it is unsurprising that culturally and economically similar nations display dramatically different reactions to identical risks. Whereas nuclear power enjoys widespread acceptance in France, it arouses considerable fear in the United States. Whereas genetic engineering of food causes immense concern in Europe, it has been a nonissue in the United States, at least until recently. It is also unsurprising that a public assessment of any given risk may change suddenly and dramatically even in the absence of a major change in the relevant scientific information.”
So far we’ve identified two sources—aside from rational calculation—that can shape our judgments about risk. There’s the unconscious mind—Gut—and the tools it uses, particularly the Example Rule and the Good-Bad Rule. And there are the people around us, whose opinions we naturally tend to conform to. But if that is all there was to the story, then almost everybody within the same community would have the same opinions about which risks are alarming and which are not.
But we don’t. Even within any given community opinions are often sharply divided. Clearly something else is at work, and that something is culture.
This is tricky terrain. For one thing, “culture” is one of those words that mean different things to different people. Moving from psychology to culture also means stepping from one academic field to another. Risk is a major subject within sociology, and culture is the lens through which sociologists peer. But the psychologists who study risk and their colleagues in the sociology departments scarcely talk to each other. In the countless volumes on risk written by sociologists, the powerful insights provided by psychologists over the last several decades typically receive little more than a passing mention, if they are noticed at all. For sociologists, culture counts. What happens in my brain when someone mentions lying on the beach in Mexico—do I think of tequila or skin cancer?—isn’t terribly interesting or important.
In effect, a line has been drawn between psychology and culture, but that line reflects the organization of universities far more than it does what’s going on inside our skulls. Consider how the Good-Bad Rule functions in our judgment of risk. The thought of lying on a beach in Mexico stirs a very good feeling somewhere in the wrinkly folds of my brain. As we have seen, that feeling will shape my judgment about the risk involved in lying on a beach until I turn the color of a coconut husk. Even if a doctor were to tell me this behavior will materially increase my risk of getting skin cancer, the pleasant feeling that accompanies any discussion of the subject will cause me to intuitively downplay the risk: Head may listen to the doctor, but Gut is putting on sunglasses.
Simple enough. But a piece of the puzzle is missing. Why does the thought of lying on a Mexican beach fill me with positive feelings? Biology doesn’t do it. We may be wired to enjoy the feeling of sunlight—it’s a good source of heat and vitamin D—but we clearly have no natural inclination to bake on a beach, since humans only started doing this in relatively modern times. So where did I learn that this is a Good Thing? Experience, certainly. I did it and it was delightful. But I thought it would be delightful before I did it. That was why I did it. So again, I have to ask the question: Where did I get this idea from?
For one, I got it from people who had done it and who told me it’s delightful. And I got it from others who hadn’t done it but who had heard that it was delightful. And I got it—explicitly or implicitly—from books, magazines, television, radio, and movies. Put all this together and it’s clear I got the message that it’s delightful to suntan on a Mexican beach from the culture around me. I’m Canadian. Every Canadian has either gone south in the winter or dreamed of it. Tropical beaches are as much a part of Canadian culture as wool hats and hockey pucks, and that is what convinced me that lying on a beach in Mexico is delightful. Even if I had never touched toes to Mexican sand, the thought of lying on a beach in Mexico would trigger nice feelings in my brain—and those nice feelings would influence my judgment of the risks involved.
This is a very typical story. There are, to be sure, some emotional reactions that are mainly biological in origin, such as revulsion for corpses and feces, but our feelings are more often influenced by experience and culture. I have a Jewish friend who follows Jewish dietary laws that forbid pork. He always has. In fact, he has internalized those rules so deeply that he literally feels nauseated by the sight of ham or bacon. But for me, glazed ham means Christmas and the smell of frying bacon conjures images of sunny Saturday mornings. Obviously, eating pork is not terribly dangerous, but still there is a risk of food poisoning (trichinosis in particular). If my friend and I were asked to judge that risk, the very different feelings we have would lead our unconscious minds—using the Good-Bad Rule—to very different conclusions.
The same dynamic plays a major role in our perceptions about the relative dangers of drugs. Some drugs are forbidden. Simply to possess them is a crime. That is a profound stigma, and we feel it in our bones. These are awful, wicked substances. Sometimes we talk about them as if they were sentient creatures lurking in alleyways. With such strong feelings in play, it is understandable that we would see these drugs as extremely dangerous: Snort that cocaine, shoot that heroin, and you’ll probably wind up addicted or dead.
There’s no question drugs can do terrible harm, but there is plenty of reason to think they’re not nearly as dangerous as most people feel. Consider cocaine. In 1995, the World Health Organization completed what it touted as “the largest global study on cocaine use ever undertaken.” Among its findings: “Occasional cocaine use,” not intensive or compulsive consumption, is “the most typical pattern of cocaine use” and “occasional cocaine use does not typically lead to severe or even minor physical or social problems.”
Of course it is very controversial to suggest that illicit drugs aren’t as dangerous as commonly believed, but exaggerated perceptions of risk are precisely what we would expect to see given the deep hostility most people feel toward drugs. Governments not only know this, they make use of it. Drug-use prevention campaigns typically involve advertising and classroom education whose explicit goal is to increase perceived risk (the WHO’s cocaine report described most drug education as “superficial, lurid, excessively negative”), while drug agencies monitor popular perceptions and herald any increase in perceived risk as a positive development. Whether the perceived risks are in line with the actual risks is not a concern. Higher perceived risk is always better.
Then there are the licit drugs. Tobacco is slowly becoming a restricted and stigmatized substance, but alcohol remains a beloved drug in Western countries and many others. It is part of the cultural fabric, the lubricant of social events, the symbol of celebration. A 2003 survey of British television found that alcohol routinely appeared in “positive, convivial, funny images.” We adore alcohol, and for that reason, it’s no surprise that public health officials often complain that people see little danger in a drug whose consumption can lead to addiction, cardiovascular disease, gastrointestinal disorders, liver cirrhosis, several types of cancer, fetal alcohol syndrome, and fatal overdose—a drug that has undoubtedly killed far more people than all the illicit drugs combined. The net effect of the radically different feelings we have for alcohol and other drugs was neatly summed up in a 2007 report of the Canadian Centre on Substance Abuse: Most people “have an exaggerated view of the harms associated with illegal drug use, but consistently underestimate the serious negative impact of alcohol on society.” That’s Gut, taking its cues from the culture.
The Example Rule provides another opportunity for culture to influence Gut. That’s because the Example Rule—the easier it is to recall examples of something happening, the greater the likelihood of that thing happening—hinges on the strength of the memories we form. And the strength of our memories depends greatly on attention: If I focus strongly on something and recall it repeatedly, I will remember it much better than if I only glance at it and don’t think about it again. And what am I most likely to focus on and recall repeatedly? Whatever confirms my existing thoughts and feelings. And what am I least likely to focus on and recall repeatedly? Whatever contradicts my thoughts and feelings. And what is a common source of the thoughts and feelings that guide my attention and recall? Culture.
The people around us are another source of cultural influence. Our social networks aren’t formed randomly, after all. We are more comfortable with people who share our thoughts and values. We spend more time with them at work, make them our friends, and marry them. The Young Republican with the Ronald Reagan T-shirt waiting in an airport to catch a flight to Washington, D.C., may find himself chatting with the antiglobalization activist with a Che Guevara beret and a one-way ticket to Amsterdam, but it’s not likely he will be adding her to his Christmas card list—unlike the MBA student who collides with the Young Republican at the check-in line because she was distracted by the soaring eloquence of Ronald Reagan’s third State of the Union Address playing on her iPod. So we form social networks that tend to be more like than unlike, and we trust the people in our networks. We value their opinions and we talk to them when some new threat appears in the newspaper headlines. Individually, each of these people is influenced by culture just as we are, and when culture leads them to form a group opinion, we naturally want to conform to it.
The manifestations of culture I’ve discussed so far—Mexican vacations, alcohol and illicit drugs, kosher food—have obvious origins, meaning, and influence. But recent research suggests cultural influences run much deeper.
In 2005, Dan Kahan of the Yale Law School, along with Paul Slovic and others, conducted a randomly selected, nationally representative survey of 1,800 Americans. After extensive background questioning, people were asked to rate the seriousness of various risks, including climate change, guns in private hands, gun-control laws, marijuana, and the health consequences of abortion.
One result was entirely expected. As in many past surveys, nonwhites rated risks higher than whites and women believed risks were more serious than men. Put those two effects together and you get what is often called the white-male effect. White men routinely feel hazards are less serious than other people. Sociologists and political scientists might think that isn’t surprising. Women and racial minorities tend to hold less political, economic, and social power than white men and have less trust in government authorities. It makes sense that they would feel more vulnerable. But researchers have found that even after statistically accounting for these feelings, the disparity between white men and everybody else remains. The white-male effect also cannot be explained by different levels of scientific education—Paul Slovic has found that female physical scientists rate the risks of nuclear power higher than male physical scientists, while female members of the British Toxicological Society were far more likely than male members to rate the risk posed by various activities and technologies as moderate or high.
It is a riddle. A hint of the answer was found in an earlier survey conducted by Paul Slovic in which he discovered that it wasn’t all white males who perceived things to be less dangerous than everybody else. It was only a subset of about 30 percent of white males. The remaining 70 percent saw things much as women and minorities did. Slovic’s survey also revealed that the confident minority of white men tended to be better-educated, wealthier, and more politically conservative than others.
The 2005 survey was designed in part to figure out what was happening inside the heads of white men. A key component was a series of questions that got at people’s most basic cultural world views. These touched on really basic matters of how human societies should be organized. Should individuals be self-reliant? Should people be required to share good fortune? And so on. With the results from these questions, Kahan slotted people into one of four world views (developed from the Cultural Theory of Risk first advanced by the anthropologist Mary Douglas and political scientist Aaron Wildavsky). In Kahan’s terms they were individualist, egalitarian, hierarchist, and communitarian.
When Kahan crunched his numbers, he found lots of correlations between risk and other factors like income and education. But the strongest correlations were between risk perception and world view. If a person were, for example, a hierarchist—someone who believes people should have defined places in society and respect authority—you could quite accurately predict what he felt about various risks. Abortion? A serious risk to a woman’s health. Marijuana? A dangerous drug. Climate change? Not a big threat. Guns? Not a problem in the hands of law-abiding citizens.
Kahan also found that a disproportionate number of white men were hierarchists or individualists. When he adjusted the numbers to account for this, the white-male effect disappeared. So it wasn’t race and gender that mattered. It was culture. Kahan confirmed this when he found that although black men generally rated the risks of private gun ownership to be very high, black men found to be individualist rated guns a low risk—just like white men who were individualist.
Hierarchists also rated the risk posed by guns to be low. Communitarians and egalitarians, however, feel they are very dangerous. Why? The explanation lies in feelings and the cultures that create them. “People who’ve been raised in a relatively individualistic community or who’ve been exposed to certain kinds of traditional values will have a positive association with guns,” Kahan says. “They’ll have positive emotions because they’ll associate them with individualistic virtues like self-reliance or with certain kinds of traditional roles like a protective father. Then they’ll form the corresponding perception. Guns are safe. Too much gun control is dangerous. Whereas people who’ve been raised in more communitarian communities will develop negative feelings toward guns. They’ll see them as evidence that people in the community are distrustful of each other. They’ll resent the idea that the public function of protection is taken by individuals who are supposed to do it for themselves. People who have an egalitarian sensibility, instead of valuing traditional roles like protector and father and hunter, might associate them with patriarchy or stereotypes that they think treat women unfairly, and they’ll develop a negative affective orientation toward the gun.” And once an opinion forms, information is screened to suit.
In the survey, after people were asked to rate the danger posed by guns, they were then asked to imagine that there was clear evidence that their conclusion about the safety of guns was wrong. Would they still feel the same way about guns? The overwhelming majority said yes, they would. That’s pretty clear evidence that what drives people’s feelings about guns is more than an assessment of the risks they pose. It’s the culture, and the perception of guns within it.
That culture, Kahan emphasizes, is American, and so the results he got in the poll apply only to the United States. “What an American who has, say, highly egalitarian views thinks about risk may not be the same as what an egalitarian in France thinks about risk. The American egalitarian is much more worried about nuclear power, for example, than the French egalitarian.” This springs from the different histories that produce different cultures. “I gave you the story about guns and that story is an American story because of the unique history of firearms in the United States, both as tools for settling the frontier and as instruments for maintaining authority in a slave economy in the South. These created resonances that have persisted over time and have made the gun a symbol that evokes emotions within these cultural groups that then generate risk perceptions. Something completely different could, and almost certainly would, happen some place else that had a different history with weapons.”
In 2007, Kahan’s team ran another nationwide survey. This time the questions were about nanotechnology—technology that operates on a microscopic level. Two results leapt out. First, the overwhelming majority of Americans admitted they knew little or nothing about this nano-whatzit. Second, when asked if they had opinions about the risks and benefits of nanotechnology, the overwhelming majority of Americans said they did, and they freely shared them.
How can people have opinions about something they may never have heard of until the moment they were asked if they had an opinion about it? It’s pure affect, as psychologists would say. If they like the sound of “nanotechnology,” they feel it must be low risk and high benefit. If it sounds a little creepy, it must be high risk and low benefit. As might be expected, Kahan found that the results of these uninformed opinions were all over the map, so they really weren’t correlated with anything.
But at this point in the survey, respondents were asked to listen to a little information about nanotechnology. The information was deliberately crafted to be low-key, simple, factual—and absolutely balanced. Here are some potential benefits. Here are some potential risks. And now, the surveyors asked again, do you have an opinion about the risks and benefits of nanotechnology?
Sure enough, the information did change many opinions. “We predicted that people would assimilate balanced information in a way biased by their cultural predispositions toward environmental risks generally,” says Kahan. And they did. Hierarchists and individualists latched onto the information about benefits, and their opinions became much more bullish—their estimate of the benefits rose while the perceived risks fell. Egalitarians and communitarians did exactly the opposite. And so, as a result of this little injection of information, opinions suddenly became highly correlated to cultural world views. Kahan feels this is the strongest evidence yet that we unconsciously screen information about risk to suit our most basic beliefs about the organization of society.
Still, it is early days for this research. What is certain at this point is that we aren’t the perfectly rational creatures described in outdated economics textbooks, and we don’t review information about risks with cool detachment and objectivity. We screen it to make it conform to what we already believe. And what we believe is deeply influenced by the beliefs of the people around us and of the culture in which we live.
In that sense, the metaphor I used at the start of this book is wrong. The intuitive human mind is not a lonely Stone Age hunter wandering a city it can scarcely comprehend. It is a Stone Age hunter wandering a city it can scarcely comprehend in the company of millions of other confused Stone Age hunters. The tribe may be a little bigger these days, and there may be more taxis than lions, but the old ways of deciding what to worry about and how to stay alive haven’t changed.