8
All the Fear That’s Fit to Print
The toddler grins madly and leans toward the camera, one bare foot forward, as if she is about to rush ahead and wrap her arms around the photographer’s knees. She is so young. Baby fat bulges at her wrist. It is an image suffused with joy—a gorgeous, glowing, black-and-white portrait a mother would place on a bedside table, or perhaps in the front hall to be shown to anyone who comes through the door. But it is instead on the front page of a newspaper, which can only mean tragedy.
The little girl’s name is Shelby Gagne. The tragedy is hinted at in a detail easily missed: Her hair is short and wispy, like a newborn’s, or that of a toddler undergoing radiation treatment and chemotherapy.
When Shelby was twenty-two months old, an inexplicable lump appeared in her shoulder. “She had stage 4 Ewing’s sarcoma, a kind of bone-and-soft-tissue cancer that affects boys more often than girls, usually teenagers. Shelby was a one-in-a-million case. And her cancer was running like mad: In three days between CT scans, the spots on her lungs grew from pepper-like flecks to recognizable nodules of cancer,” wrote Erin Anderssen. A barrage of surgeries and radical treatments followed. Her mother, Rebecca, “immediately quit her job, splitting twenty-four-hour shifts at the hospital with her mother, Carol McHugh. Her husband, Steve, a car salesman, had to continue pitching options and warranties knowing that his child was dying. Someone had to cover the mortgage.”
The little girl descended into agony. “She had high fevers. She suffered third-degree radiation burns. Her mouth became so raw with sores she couldn’t swallow her own saliva. She threw up five to ten times a day.” It was all futile. Shelby was moved into palliative care. “Even drugged,” Anderssen writes, “Shelby coughs and vomits and shivers. It does not stop. It is more than any human should bear, let alone a little, brown-eyed girl just turned three. People still write Rebecca to say they are hoping for a miracle. But she knows Shelby is beyond the kind of miracle they’re hoping for. Holding her, here in the soft shadows of the hospital room, Rebecca Gagne does not pray for her daughter to live. She prays, with the selfless love of a mother, for Shelby to die.” And she did, not long after.
Under the headline “Cancer: A Day in the Life,” Shelby Gagne’s portrait covered almost the entire front page of the Globe and Mail on Saturday, November 18, 2006. The Globe was launching an ambitious series of articles about cancer, and Shelby was its star: As the articles rolled out in the days and weeks that followed, the little girl’s painfully beautiful portrait appeared at the start of each one. In effect, the newspaper made Shelby the face of cancer.
And that was odd because the face of cancer looks nothing like Shelby’s. “Cancer is primarily a disease of the elderly,” the Canadian Cancer Society says in its compilation of cancer statistics. In 2006, the society notes, 60 percent of all those who lost their lives to cancer were seventy or older. A further 21 percent were in their sixties. “In contrast, less than one percent of new cases and of deaths occur prior to age 20.” The precise figures vary from country to country and year to year, but everywhere the basic story is the same: The risk of cancer falls heavily on older people, and a story like Shelby’s is vanishingly rare.
The Globe’s profile of Shelby, especially that stunning photograph, is journalism at its best. It is urgent and moving. But the decision to put the little girl at the center of a series about cancer is journalism at its worst. It was obvious that this “one-in-a-million case” was fantastically unrepresentative, but the newspaper chose story over statistics, emotion over accuracy, and in doing so it risked giving readers a very false impression about a very important issue.
This sort of mismatch between tragic tale and cold numbers is routine in the media, particularly in stories about cancer. In 2001, researchers led by Wylie Burke of the University of Washington published an analysis of articles about breast cancer that appeared in major U.S. magazines between 1993 and 1997. Among the women who appeared in these stories, 84 percent were younger than fifty years old when they were first diagnosed with breast cancer; almost half were under forty. But as the researchers noted, the statistics tell a very different story: Only 16 percent of women diagnosed with breast cancer were younger than fifty at the time of diagnosis, and 3.6 percent were under forty. As for the older women who are most at risk of breast cancer, they were almost invisible in the articles: Only 2.3 percent of the profiles featured women in their sixties and not one article out of 172 profiled a woman in her seventies—even though two-thirds of women diagnosed with breast cancer are sixty or older. In effect, the media turned the reality of breast cancer on its head. Surveys in Australia and the United Kingdom made the same discovery.
This is troubling because it has a predictable effect on women’s perceptions of the risk of breast cancer. Profiles of victims are personal, vivid, and emotional—precisely the qualities that produce strong memories—so when a woman exposed to them later thinks about the risk of breast cancer she will quickly and easily recall examples of young women with breast cancer, while finding it a struggle (assuming she does not have personal experience) to come up with examples of old women with breast cancer. The woman’s Gut will use the Example Rule to conclude that the risk of breast cancer is slight for old women and substantial for young women. Even if she reads the statistics that reveal the truth—the older a woman is, the greater the risk—these numbers may not make a difference because statistics do not sway Gut, and Gut often has the final word in people’s judgments.
And that is precisely what research has found in several countries. A 2007 survey of British women by Oxford University researchers, for example, asked at what age a woman is “most likely to get breast cancer”: 56.2 percent said, “Age doesn’t matter”; 9.3 percent said the risk was greatest in the forties; 21.3 percent said it was in the fifties; 6.9 percent said in the sixties; 1.3 percent said the risk was highest in the seventies. The correct answer—“80 or older”—was chosen by a minuscule 0.7 percent.
“Exaggerated and inaccurate perceptions of breast cancer risk could have a variety of adverse effects on patients,” noted Wylie Burke. Older women may not bother getting screened if they believe breast cancer is a disease of the young, while younger women may worry unreasonably, “which in itself might be considered a morbid condition.”
These distortions can result from words alone, but on television and in print, words are rarely alone. In the news, we are presented with words and images together, and researchers have found that our memories tend to blend the two—so if the sentence “A bird was perched atop the tree” is accompanied by a photograph of an eagle in a tree, it will likely be remembered as “an eagle was perched atop the tree.” Rhonda Gibson, a professor of journalism at Texas Tech University, and Dolf Zillman, a communications professor at the University of Alabama, took this research one step further and applied it to risk perception.
To ensure they were starting with a clean slate, Gibson and Zillman used a fictitious threat—“Blowing Rock Disease”—which was said to be a newly identified illness spread by ticks in the American Southeast. Children were deemed particularly vulnerable to this new danger.
Gibson and Zillman asked 135 people, mainly university students, to read two articles—the first about wetlands and another about Blowing Rock Disease—taken from national news magazines, with questions about facts and opinions asked after each. The first article really did come from a national news magazine. The second was fictitious, but made to look like a typical piece from the magazine U.S. News & World Report, with a headline that read, “Ticks Cutting a Mean Path: Areas in the Southeast Hardest Hit by Deadly New Disease.” Participants were presented with one of several versions of the article. One was text only. The second also had photos of ticks, seen in creepy close-up. The third had the ticks plus photos of children who were said to be infected. The text of the article was the same in every case—it informed the reader that children were at more risk than adults, and it had profiles of children who had contracted the disease.
If factual information and logic were all there were to risk perception, the estimates of the danger posed by “Blowing Rock Disease” would have been the same no matter which version of the article they read. But those who read the version that had no pictures gave a lower estimate of the risk than all the others. Those who got the second version of the story—with photos of ticks—believed the risk was significantly higher, while those who saw photos of ticks and children pegged the risk higher still. This is the Good-Bad Rule in action. No picture means no charged emotion and no reason for Gut to inflate its hunch about the risk; a close-up of a diseased tick is disturbing and Gut uses that emotion to conclude the risk is higher; images of ticks and sad children are even worse, and so Gut again ratchets up its estimate. The result is a series of risk estimates that have nothing to do with factual information and everything to do with how images make people feel.
The power of images to drive risk perceptions is particularly important in light of the media’s proven bias in covering causes of death. As Paul Slovic was among the first to demonstrate, the media give disproportionate coverage to dramatic, violent, and catastrophic causes of death—precisely the sort of risks that lend themselves to vivid, disturbing images—while paying far less attention to slow, quiet killers like diabetes. A 1997 study in the American Journal of Public Health that examined how leading American magazines covered a list of killers found “impressively disproportionate” attention given to murder, car crashes, and illicit drugs, while tobacco, stroke, and heart disease got nowhere near the coverage proportionate to their death toll. A 2001 study by David McArthur and other researchers at the University of California that compared local television news in Los Angeles County with the reality of injuries and deaths from traumatic causes got much the same results: Deaths caused by fire, murder, car crashes, and police shootings were widely reported; deaths caused by falls, poisonings, or other accidents got little notice. Injuries were also much less likely to be reported, although injuries caused by fires or assaults were actually better represented than accidental deaths. Overall, the picture of traumatic injury and death presented by the news is “grossly” distorted, the authors concluded, with too much attention paid to “events with high visual intrigue” and too little for those that didn’t offer striking images. The other consistent factor, they noted, was crime—the news heavily tilted toward injuries or deaths caused by one person hurting another at the expense of injuries and deaths where no one was to blame.
The information explosion has only worsened the media’s biases by making information and images instantly available around the world. The video clip of a helicopter hovering above floodwaters as a man is plucked from the roof of a house or a tree is a staple of evening news broadcasts. The flood may be in New Zealand and the broadcast in Missouri, or the other way around, but the relevance of the event to the people watching is of little concern to the broadcaster. It’s exciting and that’s enough. Watching the evening news recently, I was shown a video of a riot in Athens. Apparently, students were protesting changes in “how universities are governed.” That tells me nothing, but it doesn’t matter because the words aren’t the point. The images are. Clouds of tear gas billowing, masked men hurling Molotovs, riot cops charging: It’s pure drama, and so it’s being shown to people for whom it is completely meaningless.
If this were unusual, it wouldn’t matter much. But it’s not unusual because there is always another flood, riot, car crash, house fire, or murder. That’s not because the societies we live in are awash in disaster. It’s that the societies we live in have a lot of people. The population of the United States is 300 million, the European Union 450 million, and Japan 127 million. These numbers alone ensure that rare events—even one-in-a-million events—will occur many times every day, making the wildly improbable perfectly routine. That’s true even in countries with relatively small populations, such as Canada (32 million people), Australia (20 million), the Netherlands (17 million), and New Zealand (4 million). It’s even true within the borders of cities like New York (8 million people), London (7.5 million), Toronto (4.6 million), and Chicago (2.8 million). As a result, editors and producers who put together the news have a bottomless supply of rare-but-dramatic deaths from which to choose. And that’s if they stick with the regional or national supply. Go international and every newspaper and broadcast can be turned into a parade of improbable tragedy. Remove all professional restraints—that is, the desire to portray reality as it actually is—and you get the freak show that has taken over much of the media: “The man who was tied up, stabbed several times during sex, and watched as the woman he was with drank his blood is speaking only to ABC 15!” announced the KNXV anchorman in Phoenix, Arizona. “You wouldn’t expect this type of thing is going to happen during sex,” the victim said with considerable understatement.
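A back-of-the-envelope calculation makes the point concrete. The sketch below is mine and purely illustrative: it assumes each person in a population the size of the United States independently faces a one-in-a-million chance of some dramatic event on any given day.

```python
# Expected daily occurrences of a "one-in-a-million" event across a large population.
population = 300_000_000                   # roughly the United States
chance_per_person_per_day = 1 / 1_000_000  # an illustrative one-in-a-million daily event

expected_per_day = population * chance_per_person_per_day
print(f"Expected occurrences per day: {expected_per_day:.0f}")  # about 300
```

Three hundred occurrences a day is more than enough to keep every newscast supplied, even before editors go shopping abroad.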
The skewed images of mortality presented by the media have two effects. As we saw earlier, they fill our memories with examples of dramatic causes of death while providing few examples of mundane killers—and so when Gut uses the Example Rule, it will tend to overestimate the risk of dramatic causes of death while underestimating others. They also shower the audience with emotional images that drive risk perceptions via the Good-Bad Rule—pushing Gut further in the same direction. As a result, it’s entirely predictable that people would tend to overestimate the risk of dramatic deaths caused by murder, fire, and car crashes while underestimating such undramatic killers as asthma, diabetes, and heart disease. And that’s what researchers consistently find.
But distorted coverage of causes of death is far from the sole failure in the media’s handling of risk. Another is failing to ask the question that is essential to understanding any risk: How likely is it?
“The cholesterol-lowering statin Crestor can cause dangerous muscle problems,” my morning newspaper told me in an article that rounded up revelations about prescription-drug health risks in 2005. “The birth control method Depo-Provera is linked to bone loss. The attention-deficit/hyperactivity disorder drug Strattera might make children want to hurt themselves. It’s enough to make you clear out your medicine cabinet.” The writer feels these drugs pose a significant risk and she is inviting me to share that conclusion. But this is all she wrote about these drugs, and telling me that something could happen actually tells me very little. As I sit at my desk typing this sentence, it is possible that a passenger jet will lose power in all four engines, plummet from the sky, and interrupt my work in spectacular fashion. It could happen. What matters far more is that the chance of it happening is so tiny I’d need a microscope to see it. I know that. It’s what allows me to conclude that I can safely ignore the risk and concentrate instead on finishing this paragraph. And yet news stories routinely say there is a possibility of something bad happening without providing a meaningful sense of how likely that bad thing is.
John Roche and Marc Muskavitch, biologists at Boston College, surveyed articles about West Nile virus that appeared in major North American newspapers in 2000. The year was significant. This exotic new threat first surfaced in New York City in the summer of 1999, and its rapid spread through the eastern states, and later across the border into Canada, pushed the needle of public concern into the red zone. A 2002 survey by the Pew Research Center of Washington, D.C., found that 70 percent of Americans said they followed the West Nile virus story “very” or “fairly” closely—only a little less than the 77 percent who said they were following preparations for the invasion of Iraq—even though this was a virus that had still not appeared in most of the United States.
Making this attention all the more remarkable is the fact that West Nile isn’t a particularly deadly virus. According to the Centers for Disease Control, 80 percent of those infected with the virus never experience even the slightest symptoms, while almost all the rest suffer nothing worse than a fever, nausea, and vomiting that will last somewhere between a few days and a few weeks. One in 150 infected with the virus develops severe symptoms, including high fever, disorientation, and paralysis, and most of these very unlucky people fully recover after several weeks—only about 3 to 15 percent die. But these basic facts were rarely put at the center of the news about West Nile. Instead, the focus was on a family struggling with the loss of a beloved mother or a victim whose pleasant walk in the woods ended in a wheelchair.
There were statistics to go with these sad stories, of course. Roche and Muskavitch found that almost 60 percent of articles cited the number of people sickened by the virus and 81 percent had data on deaths. But what do these sorts of numbers actually tell us about the risk? If I read that the virus has killed eighteen people (as it had by 2001), should I worry? It depends. If it is eighteen dead in a village of one hundred, I definitely should. But if it is eighteen in a city of one million people, the risk is slim. And if it is eighteen in a nation of 300 million—the population of the United States—it is almost nonexistent. After all, 875 Americans choked to death on the food they were eating in 2003, but people don’t break into a cold sweat before each meal. But Roche and Muskavitch’s survey found that 89 percent of the articles about West Nile virus had “no information whatsoever” about the population on which the statistics were based. So readers were informed that West Nile virus had killed some people and, in many articles, they were also introduced to a victim suffering horribly or to the family of someone killed by the disease, but there was nothing else. With only that information, Head is unable to figure out how great the risk is and whether it’s worth worrying about. But not Gut. It has all the evidence it needs to conclude that the risk is high.
Not surprisingly, a poll taken by the Harvard School of Public Health in 2002 found that Americans grossly overestimated the danger of the virus. “Of people who get sick from the West Nile virus,” the survey asked, “about how many do you think die of the disease?” There were five possible answers: “Almost None,” “About One in 10,” “About One in 4,” “More Than Half,” and “Don’t Know.” Fourteen percent answered “Almost None.” The same number said more than half, while 18 percent chose one in four and 45 percent said one in ten.
Call it “denominator blindness.” The media routinely tell people “X people were killed” but they rarely say “out of Y population.” The X is the numerator, Y is the denominator. To get a basic sense of the risk, we have to divide the numerator by the denominator—so being blind to the denominator means we are blind to the real risk. An editorial in The Times of London is a case in point. The newspaper had found that the number of Britons murdered by strangers had “increased by a third in eight years.” That meant, it noted in the fourth paragraph, that the total had increased from 99 to 130. Most people would find that at least a little scary. Certainly the editorial writers did. But what the editorial did not say is that there are roughly 60 million Britons, and so the chance of being murdered by a stranger rose from 99 in 60 million to 130 in 60 million. Do the math and the risk is revealed to have risen from an almost invisible 0.00017 percent to an almost invisible 0.00022 percent.
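For readers who want to check that arithmetic themselves, here is a minimal sketch in Python. The figures are the ones from the Times example above; the function name is mine.

```python
def absolute_risk_percent(cases, population):
    """Divide the numerator (cases) by the denominator (population), expressed as a percentage."""
    return cases / population * 100

population = 60_000_000  # roughly 60 million Britons

before = absolute_risk_percent(99, population)   # murders by strangers, eight years earlier
after = absolute_risk_percent(130, population)   # murders by strangers, the latest figure

print(f"Before: {before:.5f} percent")              # about 0.00017 percent
print(f"After:  {after:.5f} percent")               # about 0.00022 percent
print(f"Relative increase: {(130 - 99) / 99:.0%}")  # about 31 percent, the "increase of a third"
```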
An even simpler way to put a risk in perspective is to compare it to other risks, as I did earlier by putting the death toll of West Nile virus alongside that of choking on food. But Roche and Muskavitch found that a mere 3 percent of newspaper articles that cited the death toll of West Nile gave a similar figure for other risks. That’s typical of reporting on all sorts of risks. A joint survey of British and Swedish newspapers published in the journal Public Understanding of Science found a small minority of Swedish articles did compare risks, but “in the U.K. there were almost no comparisons of this nature”—even though the survey covered a two-month period that included the tenth anniversary of the Chernobyl disaster and the peak of the panic over BSE (mad cow disease). Readers needed perspective but journalists did not provide it.
Another common failure was illustrated in the stories reporting on a September 2006 announcement by the U.S. Food and Drug Administration that it was requiring the product-information sheet for the Ortho Evra birth-control patch to be updated with a new warning to include the results of a study that found that—in the words of one newspaper article—“women who use the patch were twice as likely to have blood clots in their legs or lungs than those who used oral contraceptives.” In newspapers across North America, even in the New York Times, that was the only information readers got. “Twice the risk” sounds big, but what does it actually mean? If the chance of something horrible happening is one in eight, a doubling of the risk makes it one in four: Red alert! But if the risk of a jet crashing onto my desk were to double, I still wouldn’t be concerned because two times almost-zero is still almost-zero. An Associated Press story included the information readers needed to make sense of this story: “The risk of clots in women using either the patch or pill is small,” the article noted. “Even if it doubled for those on the patch, perhaps just six women out of 10,000 would develop clots in any given year, said Daniel Shames, of the FDA’s Center for Drug Evaluation and Research.” The AP story was carried widely across North America but many newspapers that ran it, including the New York Times, actually cut that crucial sentence.
Risks can be described in either of two ways. One is “relative risk,” which is simply how much bigger or smaller a risk is relative to something else. In the birth-control patch story, “twice the risk”—women who use the patch have twice the risk of those who don’t—is the relative risk. Then there’s “absolute risk,” which is simply the probability of something happening. In the patch story, 6 in 10,000 is the absolute risk. Both ways of thinking about risk have their uses, but the media routinely give readers the relative risk alone. And that can be extremely misleading.
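A short worked example shows how little a doubling of a tiny risk can amount to. The sketch is mine, and the 3-in-10,000 baseline is simply inferred from the FDA official’s figure of roughly six in 10,000 after doubling.

```python
# Converting the patch story's relative risk ("twice the risk") into absolute terms.
baseline = 3 / 10_000  # inferred baseline: roughly 3 clots per 10,000 pill users per year
relative_risk = 2.0    # "twice as likely" for women using the patch

absolute_on_patch = baseline * relative_risk
extra_cases = absolute_on_patch - baseline

print(f"Absolute risk on the patch: about {absolute_on_patch * 10_000:.0f} in 10,000 per year")
print(f"Extra cases attributable to the patch: about {extra_cases * 10_000:.0f} in 10,000 per year")
```

Doubling a 3-in-10,000 risk adds about three cases per 10,000 women per year: a real increase, but a very different impression from a bare “twice the risk.”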
When the medical journal The Lancet published a paper that surveyed the research on cannabis and mental illness, newspapers in Britain—where the issue has a much higher profile than elsewhere—ran frightening headlines such as this one from the Daily Mail: “Smoking just one cannabis joint raises danger of mental illness by 40 percent.” Although overdramatized with the “just one cannabis joint” phrasing, this was indeed what the researchers had found. Light users of cannabis had a 40 percent greater risk of psychosis than those who had never smoked the drug, while regular users were found to be at 50 to 200 percent greater risk. But there were two problems here. The first—which the Daily Mail and other newspapers noted in a few sentences buried in the depths of their stories—is that the research did not show cannabis use causes mental illness, only that cannabis use and mental illness are statistically associated, which means cannabis may cause mental illness or the association may be the result of something else entirely. The second is that the “40 percent” figure is the relative risk. To really understand the danger, people needed to know the absolute risk—but no newspaper provided that. An Agence France-Presse report came closest to providing that crucial information: “The report stresses that the risk of schizophrenia and other chronic psychotic disorders, even in people who use cannabis regularly, is statistically low, with a less than one-in-33 possibility in the course of a lifetime.” That’s enough to work out the basic figures: Someone who never uses cannabis faces a lifetime risk of around 1 percent; a light user’s risk is about 1.4 percent; and a regular user’s risk is between 1.5 and 3 percent. These are significant numbers, but they’re not nearly as scary as those that appeared in the media.
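Spelled out, the calculation runs as follows. This is a minimal sketch using the roughly 1 percent lifetime baseline cited above; the rounding is mine.

```python
# Turning the cannabis study's relative risks into absolute lifetime risks,
# starting from the roughly 1 percent baseline for people who never use the drug.
baseline = 0.01                # lifetime risk of psychosis, never-users
light_user = baseline * 1.4    # "40 percent greater risk"
regular_low = baseline * 1.5   # "50 percent greater risk"
regular_high = baseline * 3.0  # "200 percent greater risk"

print(f"Never used:   {baseline:.1%}")
print(f"Light user:   {light_user:.1%}")
print(f"Regular user: {regular_low:.1%} to {regular_high:.1%}")
```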
Why do journalists so often provide information about risks that is misleading and unduly frightening? The standard explanation for media hype is plain old self-interest. Like corporations, politicians, and activists, the media profit from fear. Fear means more newspapers sold and higher ratings, so the dramatic, the frightening, the emotional, and the worst case are brought to the fore while anything that would suggest the truth is not so exciting and alarming is played down or ignored entirely.
The reality varies by place, time, medium, and institution, but in general there is obviously something to this charge. And there’s reason to worry that sensationalism will get worse as the proliferation of information sources continues to fracture the media audience into smaller and smaller segments. Evening news broadcasts in the United States fell from more than 50 million viewers in 1980 to 27 million in 2005, with the audience departing first to cable TV and then the Internet. Cable news audiences have started to slip. Newspapers are in the most trouble—particularly in the United States, where readership has fallen from 70 percent of Americans in 1972 to one-third in 2006. Things aren’t so grim in other countries, but everywhere the trend to fewer readers and smaller audiences is the same. The business of news is suffering badly and it’s not clear how, or even if, it will recover. As the ships sink, it is to be expected that ethical qualms will be pitched overboard.
But still it is wrong to say, as many do, that the drive for readers and ratings is the sole cause of the exaggeration and hysteria so often seen in the news.
For one thing, that overlooks a subtler effect of the media’s business woes, one that is—again—particularly advanced in the United States. “In some cities, the numbers alone tell the story,” wrote the authors of The State of the News Media 2006, published by the Project for Excellence in Journalism. “There are roughly half as many reporters covering metropolitan Philadelphia, for instance, as in 1980. . . . As recently as 1990, the Philadelphia Inquirer had 46 reporters covering the city. Today it has 24.” At the same time that the number of reporters is declining, the channels of communication are multiplying and the sheer volume of information being pumped out by the media is growing rapidly. How is this possible? In one sense, fewer people are doing more: The reporter who puts a story on the Web site at 11 A.M. also does a video spot at 3 P.M. and files a story for the next day’s newspaper at 6 P.M. But reporters are also doing much less—less time out of the office, less investigation, less verification of numbers, less reading of reports. In this environment, there is a growing temptation to simply take a scary press release at face value, rewrite it, and move on. With countless corporate marketers, politicians, officials, and activists seeking to use the media to market fear, that has profound implications. Reporters are a filter between the public and those who would manipulate them, and that filter is wearing thin.
In 2003, the pharmaceutical company GlaxoSmithKline launched a “public awareness” campaign on behalf of restless legs syndrome, an uncomfortable urge to move the legs that worsens when the legs are at rest, particularly at night. First came a study that showed one of GlaxoSmithKline’s existing drugs also worked on restless legs. This was immediately followed by a press release announcing a survey that was said to reveal that a “common yet under-recognized disorder—restless legs syndrome—is keeping Americans awake at night.” Then came the ad blitz. In 2006, Steven Woloshin and Lisa Schwartz of the Dartmouth Medical School examined thirty-three articles about the syndrome published in major American newspapers between 2003 and 2005. What they found was “disturbing,” they wrote.
There are four standard criteria for diagnosing restless legs syndrome, but almost every article the researchers found cited a survey that asked for only one symptom—and came up with the amazing conclusion that one in ten Americans was afflicted by the syndrome. The likelier prevalence of the syndrome, the authors wrote, is less than 3 percent. Worse, almost half the articles illustrated the syndrome with only an anecdote or two and almost all of those involved people with unusually severe symptoms, including suicidal thoughts. Not one story provided an anecdote of someone who experienced the symptoms but didn’t find them terribly troubling—which is actually common. Half the stories mentioned GlaxoSmithKline’s drug by name (ropinirole), and about half of those illustrated the drug’s curative powers by telling the story of someone who took the drug and got better. Only one story actually quantified the benefits of the drug, which Woloshin and Schwartz rightly describe as “modest” (in a clinical trial, 73 percent of those who took the drug got at least some relief from their symptoms, compared to 57 percent who were given a placebo). Two-thirds of the articles that discussed ropinirole did not mention the drug’s potential side effects, and only one quantified that risk. One-fifth of the articles referred readers to the “nonprofit” Restless Legs Foundation, but none reported that the foundation’s biggest donor by far is GlaxoSmithKline. “The media seemed to have been co-opted,” Woloshin and Schwartz concluded.
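One way to see why Woloshin and Schwartz call that benefit “modest” is to subtract the placebo response. The arithmetic below is mine; the “number needed to treat” framing is a standard clinical measure, not one the study or the articles used.

```python
# The placebo-adjusted benefit behind the "modest" verdict on ropinirole.
drug_response = 0.73     # share of patients reporting at least some relief on the drug
placebo_response = 0.57  # share of patients reporting at least some relief on placebo

absolute_benefit = drug_response - placebo_response
number_needed_to_treat = 1 / absolute_benefit  # patients treated for one extra responder

print(f"Absolute benefit: {absolute_benefit:.0%}")                                  # about 16 percentage points
print(f"Patients treated per extra responder: about {number_needed_to_treat:.0f}")  # about 6
```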
But there is another, more fundamental problem with blaming the if-it-bleeds-it-leads mentality entirely on the pursuit of profit. The reader may have gotten a sense of it in the awful story of Shelby Gagne at the start of this chapter. As painful as it was, the reporter’s description of the family’s struggle and the little girl’s suffering was absorbing and moving. Anyone with a heart and a conscience would be affected—and that includes reporters.
For the most part, reporters, editors, and producers do not misrepresent and exaggerate risks because they calculate that this is the best way to boost revenues and please their corporate masters. They do it because information that grabs and holds readers grabs and holds reporters. They do it because they are human.
“Human beings have an innate desire to be told and to tell dramatic stories,” wrote Sean Collins, a senior producer with National Public Radio News, in a letter to the Western Journal of Medicine. Collins was responding to the study of television news in Los Angeles County, which included some tough criticism by David McArthur and his colleagues. “I am at a loss to name a single operatic work that treats coronary artery disease as its subject but I can name several where murder, incest, and assassination play a key part in the story. Check your own instinct for storytelling by asking yourself this: If, driving home from work, you passed a burning building, would you wait to tell your spouse about it until you first explained the number of people who died that day from some form of neoplastic disease?”
Pamphlets peddled on the streets of Elizabethan England were filled with tales of murder, witchcraft, and sexual misbehavior of the most appalling sort. By the early nineteenth century, recognizably modern newspapers were flourishing in London, and in 1820 came the first example of what a later age would call a media circus. The story that occasioned this momentous event was not a war, revolution, or scientific triumph. It was the unpopular King George IV’s attempt to divorce his wife by having her tried for adultery—which turned the queen’s sex life into a matter of public record and a source of endless fascination for every Englishman who could read or knew someone who could. In journalism schools today, students are told there is a list of qualities that make a story newsworthy, a list that varies from teacher to teacher, but that always includes novelty, conflict, impact, and that beguiling and amorphous stuff known as human interest. A royal sex scandal scores on all counts, then and now. “Journalism is not run by a scientific formula,” wrote Collins. “Decisions about a story being newsworthy come from the head, the heart and the gut.”
From this perspective, it makes perfect sense that stories about breast cancer routinely feature young women, even though most women with breast cancer are old. It’s a simple reflection of our feelings: It may be sad when an eighty-five-year-old woman loses her life to cancer, but it is tragic when the same happens to a young woman. Whether these contrasting valuations are philosophically defensible is irrelevant. This is how we feel, all of us. That includes the reporters, who find themselves moved by the mother of young children dying of breast cancer or the man consigned to a wheelchair by West Nile virus, and are convinced by what they feel that this is a great story that should be the focus of the report. The statistics may say these cases are wildly unrepresentative, but given a choice between a powerful personal story and some numbers on a chart, reporters will go with the story. They’re only human.
So much of what appears in the media—and what doesn’t—can be explained by the instinct for storytelling. Conflict draws reporters because it is essential to a good story; Othello wouldn’t be much of a play if Iago didn’t spread that nasty rumor. Novelty is also in demand—“three-quarters of news is ‘new,’ ” as an editor once instructed me. The attraction to both qualities—and the lack of interest in stories that fail to provide them—was evident in the results of a 2003 study by The King’s Fund, a British think tank, on the reporting of health issues. “In all the news outlets studied,” the researchers concluded, “there was a preponderance of stories in two categories. One was the National Health Service—mostly stories about crises besetting the service nationally or locally, such as growing waiting times or an increased incidence of negligence. The other was health ‘scares’—that is, risks to public health that were widely reported but which often involved little empirical impact on illness and premature death.” That second category includes so-called mad-cow disease, SARS, and avian flu—all of which offered an abundance of novelty. What was ignored? The slow, routine, and massive toll taken by smoking, alcohol, and obesity. By comparing the number of stories a cause of death garnered with the number of deaths it inflicted, the researchers produced a “death-per-news-story” ratio that “measures the number of people who have to die from a given condition to merit a story in the news. It shows, for example, that 8,571 people died from smoking for each story about smoking on the BBC news programs studied. By contrast, it took only 0.33 deaths from vCJD (mad cow disease) to merit a story on BBC news.”
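The ratio itself is nothing more than deaths divided by stories. Comparing the two figures quoted above is my own arithmetic, not the study’s, but it shows just how lopsided the coverage was.

```python
# The King's Fund "death-per-news-story" ratio: deaths from a cause divided by
# the number of BBC news stories about it.
smoking_deaths_per_story = 8_571  # deaths from smoking per story about smoking
vcjd_deaths_per_story = 0.33      # deaths from vCJD per story about vCJD

coverage_imbalance = smoking_deaths_per_story / vcjd_deaths_per_story
print(f"Per death, vCJD received roughly {coverage_imbalance:,.0f} times more coverage than smoking")
```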
An ongoing narrative is also highly valued because a story that fits an existing storyline is strengthened by that larger story. Celebrity news—to take the most extreme example—is pure narrative. Once the Anna Nicole Smith narrative was established, each wacky new story about Anna Nicole Smith was made more compelling by the larger storyline of Anna Nicole Smith’s wacky life, and so we got more and more stories about Anna Nicole Smith even after Anna Nicole Smith was no longer providing fresh material. Even the smallest story could be reported—I actually got a CNN news alert in my e-mail when a judge issued an injunction temporarily stopping the burial of the body—because it didn’t have to stand on its own strengths. It was part of the larger narrative. And if the big narrative is considered important or compelling, no story is too small to run. Conversely, if a story isn’t part of a larger narrative—or worse, if it contradicts the narrative—it is far less likely to see the light of day. This applies to matters considerably more important than celebrity news.
In the early 1990s, the AIDS epidemic in the developed world was showing the first signs of being more manageable than had been feared. But the storyline it had inspired—exotic new virus emerges from the fetid jungles of Africa and threatens the world—didn’t fade, thanks mainly to the release of Richard Preston’s The Hot Zone in 1994. Billed as “a terrifying true story,” The Hot Zone was about a shipment of monkeys sent to Virginia, where they were discovered to be infected with Ebola. There was no outbreak in Virginia, and if there had been it wouldn’t have amounted to much because the particular strain of the virus the monkeys had was not lethal to humans, but that didn’t stop The Hot Zone from becoming an international best-seller. The media started churning out endless stories about “emerging viral threats,” and the following year a Hollywood movie inspired by The Hot Zone—Outbreak—was released. More books were commissioned. Documentaries were filmed. And when Ebola actually did break out in Congo (then known as Zaire), reporters rushed to a part of the world that is generally ignored. The coverage was massive, but the 1995 Ebola outbreak didn’t lead to chaos and disaster. It just ran the usual sad course, killing about 255 people in all.
For the people of Congo and central Africa, however, chaos and disaster really were coming. In 1998, a coup led to civil war that sparked fighting across the whole region, and civil authority collapsed. It’s hard to know precisely how many lives were lost—whether to bullet, bomb, or disease—but many authorities suggest three million or more died over the first several years. The developed world scarcely noticed. The war fit no existing narrative, and without any obvious relevance to the rich world it couldn’t start one, so the media gave it a tiny fraction of the attention they lavished on the 1995 Ebola outbreak—even though the war killed roughly 11,700 people for every one lost to Ebola.
Even compelling stories that fit narratives can disappear if the narrative isn’t operational when they happen. In 2006, a Tennessee school district sent home 1,800 students following reports that radioactive cooling water was leaking at a nearby nuclear plant. It was the first nuclear-related evacuation in the United States since the Three Mile Island accident of 1979. If it had occurred at a time when the “nuclear accident” narrative had been in place—as it was for years after Three Mile Island and again after Chernobyl—it would have been major news. But in 2006, that narrative was gathering dust, and so the incident was treated as a minor local story and ignored.
Terrorism is obviously a major narrative today, as it has been for some time, but a decade ago it was quite different. The 1995 Oklahoma City bombing made terrorism the story of men like the bomber, Timothy McVeigh, a white, paranoid, antigovernment radical. Following that storyline, journalists churned out countless articles about tiny groups of cranky gun enthusiasts who grandly styled themselves “militias.” There wasn’t much evidence that the militias were a serious threat to public safety, but McVeigh had briefly belonged to one, so reporters flocked to cover their every word and deed. The September 11 attacks scrapped this storyline and replaced it with the story of Islamist terrorism that is still going strong today—which is why, when a suicide bomber detonated himself outside a packed stadium at the University of Oklahoma on October 1, 2005, the media scarcely reported the incident. The bomber, Joel Henry Hinrichs III, wasn’t Muslim. He was a disturbed white guy with a thing for explosives whose initial plan was apparently to detonate a bomb identical to that used by Timothy McVeigh. If he had carried out his attack at the University of Oklahoma in the late 1990s, it would have been major news around the world, but in 2005 it didn’t fit the narrative so it, too, was treated as a minor local story and ignored.
This happened again in April 2007, when six white men belonging to the “Alabama Free Militia” were arrested in Collinsville, Alabama. Police seized a machine gun, a rifle, a sawed-off shotgun, two silencers, 2,500 rounds of ammunition, and various homemade explosives, including 130 hand grenades and 70 improvised explosive devices (IEDs) similar to those used by Iraqi insurgents. The leader of the group was a wanted fugitive living under an alias who often expressed a deep hatred of the government and illegal immigrants. At a bail hearing, a federal agent testified that the group had been planning a machine-gun attack on Hispanics living in a small nearby town. The media weren’t interested and the story was essentially ignored. But one week later, when a group of six Muslims was arrested for conspiring to attack Fort Dix, it was major international news—even though these men were no more sophisticated or connected to terrorist networks than the “Alabama Free Militia” and had nothing like the arsenal of the militiamen.
Another element essential to good storytelling is vividness, in words or images, and good journalists constantly seek to inject it into their work. This has profound consequences for perceptions of risk.
“Mad cow disease” is the sort of short, vivid, punchy language that newspapers love, and not surprisingly the term was coined by a newspaperman. David Brown of the Daily Telegraph realized the scientific name—bovine spongiform encephalopathy (BSE)—is dry and abstract and, as he later recalled in an interview, he wanted people to pay attention and demand something be done about the problem. “The title of the disease summed it up. It actually did a service. I have no conscience about calling it mad cow disease.” The label was indeed potent. A 2005 paper examining how the BSE crisis played out in France found that beef consumption dropped sharply when the French media used the “mad cow” label rather than BSE. To bolster those results, Marwan Sinaceur, Chip Heath, and Steve Cole—the first two professors at Stanford University, the last at UCLA—conducted a lab study that asked people to imagine they had just eaten beef and heard a news item about the disease. They found that those who heard the disease described as mad cow disease expressed more worry and a greater inclination to cut back on beef than those who were asked about bovine spongiform encephalopathy. This is the Good-Bad Rule at work. “The Mad Cow label caused them to rely more on their emotional reactions than they did when scientific labels were used,” the researchers wrote. “The results are consistent with dual-system theories in that although scientific labels did not eliminate the effect of emotion, they caused people to think more deliberatively.” Gut jumped at the mention of mad cow disease, in other words, while bovine spongiform encephalopathy got Head to pay attention.
Even more than emotional language, the media adore bad news, so journalists often—contrary to the advice of the old song—accentuate the negative and eliminate the positive. In October 2007, Britain’s Independent ran a banner headline—“Not An Environment Scare Story”—above a grim article about the latest report from the United Nations Environment Program. The tone was justified, as the report contained documentation of worsening environmental trends. But as the UN’s own summary of the report noted in its first paragraph, the report also “salutes the real progress made in tackling some of the world’s most pressing environmental problems.” There wasn’t a word about progress in the Independent’s account.
The same newspaper was even more tendentious when it reported on a 2006 survey of illicit drug prices in the United Kingdom conducted by the DrugScope charity. DrugScope’s own report opens with this sentence: “Despite a wealth of dubious media stories about cocaine flooding playgrounds, crack and heroin being easier to buy than takeaway pizzas and an explosion of cannabis smoking sparked by reclassification, a snapshot of average illicit drug prices in 20 towns and cities undertaken in July and August reveals prices have remained relatively stable in the last year.” The lead sentence of the Independent’s report on the survey was somewhat different: “The cost of drugs in many parts of Britain has plummeted in the past year, an authoritative study on the country’s booming industry in illegal substances has revealed.” DrugScope also reported that “the forecasted crystal meth epidemic has failed to materialize and it was not considered a significant part of any of the 20 drug markets.” Predictably, this was not mentioned in the Independent article.
When the American Cancer Society released 2006 statistics showing overall cancer rates had declined in New York City and across the United States, the New York Post managed to turn this good news bad in a story headlined “Cancer Alarm.” “About 88,230 Big Apple residents were diagnosed with cancer this year,” read the first sentence, “and 35,600 died—many from preventable lung and prostate cancers, a new study shows.” Only in a single sentence of the third paragraph did the Post acknowledge, grudgingly, that the cancer rate—the statistic that really matters—had declined. It took similar creativity for the Toronto Star to find bad news in the Statistics Canada announcement that the life span of the average Canadian male had reached eighty years. After devoting a single sentence to this historic development, the reporter rushed on to the thrust of the rest of the article: “The bad news is these booming ranks of elderly Canadians could crash our health system.”
Scientists, particularly medical researchers, have long complained that the media favor studies that find a threat over those that don’t. Eager to test this observation empirically, doctors at the Hospital for Sick Children in Toronto noticed that the March 20, 1991, edition of the Journal of the American Medical Association had back-to-back studies on the question of childhood cancers caused by radiation. The first study was positive—it showed a hazard existed. The second study was negative—it found no danger. Since the media routinely report on studies in JAMA, this was a perfect test of bias. In all, the researchers found nineteen articles related to the studies in newspapers. Nine mentioned only the study that found there is a danger. None reported only the study that found there isn’t a threat. Ten articles reported both—but in these, significantly more attention was given to the study that said there is a danger than to the one that said there isn’t.
As unfortunate as this bias may be, it is just as understandable as the tendency to prefer emotional stories over accurate data. “We don’t like bad news,” observes a character in a Margaret Atwood short story. “But we need it. We need to know about it in case it’s coming our way. Herd of deer in the meadow, heads down, grazing peacefully. Then woof woof—wild dogs in the woods. Heads up, ears forward. Prepare to flee!” It’s a primitive instinct. Our ancestors didn’t jump up and scan the horizon when someone said there were no lions in the vicinity, but a shout of “Lion!” got everyone’s attention. It’s the way we are wired, reporter and reader alike. A study by psychologists Michael Siegrist and George Cvetkovich found that when students at the University of Zurich were given new research on a health risk (a food coloring, electromagnetic fields), they considered the research more credible when it indicated there is a hazard than when it found no danger. “People have more confidence in studies with negative outcomes than in studies showing no risks,” the researchers concluded.
For the reporter, the natural bias for bad news is compounded by the difficulty of relating good news in the form of personal stories. How do you tell the story of a woman who doesn’t get breast cancer? The ex-con who obeys the law? The plane that makes a smooth landing right on schedule? “Postal Worker Satisfied with Life” isn’t much of a headline—unlike “Postal Worker Kills Eight,” which is bound for the front page.
It can even be a challenge to turn statistically representative examples of bad news into stories. Stories about serial killers may be fascinating, but the average criminal is a seventeen-year-old shoplifter, and stories about seventeen-year-old shoplifters will never be as interesting as stories about serial killers. As for the statistically representative victim of West Nile virus—no symptoms, no consequences—the writer has not been born who could make this story interesting to anyone but a statistician.
And this is just to speak of the news media. The bias in favor of sensational storytelling is all the more true of the entertainment media, because in show business there is no ethic of accuracy pushing back. Novels, television, and movies are filled with risk-related stories that deploy the crowd-pleasing elements known to every storyteller from Homer to Quentin Tarantino—narrative, conflict, surprise, drama, tragedy, and lots of big emotions—and bear no resemblance to the real dangers in our lives. Evening television is a particularly freakish place. A recent episode of CSI featured the murder of a ruthless millionaire casino owner—a case solved when diaper rash on the body led investigators to discover the victim had a sexual fetish that involved being stripped down and treated like a baby. Meanwhile, on the medical drama Grey’s Anatomy, a beautiful young woman presents herself for a routine checkup, is told she has advanced cervical cancer, and is dead by the end of the show—just another day in a hospital where rare disorders like Rasmussen’s encephalitis turn up with amazing frequency, and no one ever gets diabetes or any of the boring diseases that kill more people than all the rare disorders combined.
It’s the information equivalent of junk food, and like junk food, consuming it in large quantities may have consequences. When we watch this stuff, Head knows it’s just a show—that cops don’t spend their time investigating the murders of millionaires in diapers and hospitals aren’t filled with beautiful young women dying of cancer. But Gut doesn’t know any of that. Gut knows only that it is seeing vivid incidents and feeling strong emotions and these things satisfy the Example Rule and the Good-Bad Rule. So while it’s undoubtedly true that the news media contribute to the fact that people often get risk wrong, it is likely that the entertainment media must share some of that blame.
An indication of how influential the media can be comes from the most unlikely place. Burkina Faso is a small country in West Africa. It was once a French colony, and French is the dominant language. The French media are widely available, and the local media echo the French media. But Burkina Faso is one of the poorest countries on earth, and threats to life and limb there are very different than in France. So when researchers Daboula Kone and Etienne Mullet got fifty-one residents of the capital city to rate the risk posed by ninety activities and technologies—on a scale from 0 to 100—it would be reasonable to expect the results would be very different than in similar French surveys. They weren’t. “Despite extreme differences in the real risk structure between Burkina Faso and France,” the researchers wrote, “the Burkina Faso inhabitants in this sample responded on the questionnaire in a way which illustrates approximately the same preoccupations as the French respondents and to the same degree.”
That said, people often exaggerate the influence the media have on society, in part because they see the media as something quite apart from society, as if it were an alien occupying force pumping out information from underground bunkers. But the reporters, editors, and producers who are “the media” have houses in the suburbs, kids in school, and a cubicle in an office building just like everybody else. And they, too, read newspapers, watch TV, and surf the Internet.
In the 1997 study that found the media paid “impressively disproportionate” attention to dramatic causes of death, cancer was found to be among the causes of death given coverage greater than the proportion of deaths it causes. The authors ignored that finding but it’s actually crucial. Cancer isn’t spectacular like a house fire or homicide, and it’s only dramatic in the sense that any potentially deadly disease is dramatic—including lots of deadly diseases that get very little media attention. What cancer does have, however, is a powerful presence in popular culture. The very word is black and frightening. It stirs the bleak feelings psychologists call negative affect, and reporters experience those feelings and their perceptions are shaped by them. So when the media give disproportionate coverage to cancer, it’s clear they are reflecting what society thinks, not directing it. But at the same time, the disproportionate attention to cancer in the media can lead people to exaggerate the risk—making cancer all the more frightening.
Back and forth it goes. The media reflect society’s fear, but in doing so, the media generate more fear, and that gets reflected back again. This process goes on all the time but sometimes—particularly when other cultural concerns are involved—it gathers force and produces the strange eruption sociologists call a moral panic.
In 1998, Time magazine declared, “It’s high noon on the country’s streets and highways. This is road recklessness, auto anarchy, an epidemic of wanton carmanship.” Road rage. In 1994, the term scarcely existed and the issue was nowhere to be seen. In 1995, the phrase started to multiply in the media, and by 1996 the issue had become a serious public concern. Americans were increasingly rude, nasty, and violent behind the wheel; berserk drivers were injuring and killing in growing numbers; it was an “epidemic.” Everyone knew that, and by 1997, everyone was talking about it. Then it stopped. Just like that. The term road rage still appears now and then in the media—it’s too catchy to let go—but the issue vanished about the time Monica Lewinsky became the most famous White House intern in history, and today it is as dated as references to Monica Lewinsky.
When panics pass, they are simply forgotten, and where they came from and why they disappeared are rarely discussed in the media that featured them so prominently. If the road-rage panic were to be subjected to such an examination, it might reasonably be suggested that its rise and fall simply reflected the reality on American roads. But the evidence doesn’t support that. “Headlines notwithstanding, there was not—there is not—the least statistical or other scientific evidence of more aggressive driving on our nation’s roads,” concluded journalist Michael Fumento in a detailed examination of the alleged epidemic published in The Atlantic Monthly in August 1998. “Indeed, accident, fatality and injury rates have been edging down. There is no evidence that ‘road rage’ or an aggressive-driving ‘epidemic’ is anything but a media invention, inspired primarily by something as simple as alliteration: road rage.”
Of course the media didn’t invent the road-rage panic in the same sense that marketers hope to generate new fads for their products. There was no master plan, no conspiracy. Nor was there fabrication. The incidents were all true. “On Virginia’s George Washington Parkway, a dispute over a lane change was settled with a high-speed duel that ended when both drivers lost control and crossed the center line, killing two innocent motorists,” reported U.S. News & World Report in 1997. That really happened. It was widely reported because it was dramatic, tragic, and frightening. And there were other, equally serious incidents that were reported. A new narrative of danger was established: Drivers are behaving worse on the roads, putting themselves and others at risk. That meant incidents didn’t have to be interesting or important enough to stand up as stories on their own. They could be part of the larger narrative, and so incidents that would not previously have been reported were. The same article also reported “the case in Salt Lake City where seventy-five-year-old J. C. King—peeved that forty-one-year-old Larry Remm Jr. honked at him for blocking traffic—followed Remm when he pulled off the road, hurled his prescription bottle at him, and then, in a display of geriatric resolve, smashed Remm’s knees with his ’92 Mercury. In tony Potomac, Maryland, Robin Ficker—an attorney and ex-state legislator—knocked the glasses off a pregnant woman after she had the temerity to ask him why he bumped her jeep with his.” Today, these minor incidents would never make it into national news, but they fit an established narrative at the time and so they were reported.
More reporting puts more examples and more emotions into more brains. Public concern rises, and reporters respond with more reporting. More reporting, more fear; more fear, more reporting. The feedback loop is established and fear steadily grows.
It takes more than the media and the public to create that loop, however. It also takes people and institutions with an interest in pumping up the fear, and there were plenty of those involved in the manufacture of the road-rage crisis, as Fumento amply documented. The term “road rage” and the alleged epidemic “were quickly popularized by lobbying groups, politicians, opportunistic therapists, publicity-seeking safety agencies and the U.S. Department of Transportation.” Others saw a good thing and tried to appropriate it—spawning “air rage,” “office rage,” and “black rage.” In the United Kingdom, therapists even promoted the term “trolley rage” to describe allegedly growing numbers of consumers who flew into a fury behind the handle of a shopping cart just as drivers lost it behind the wheel of a car.
With road rage established as something that “everyone knows” is real, the media applied little or no scrutiny to frightening numbers spouted by self-interested parties. “Temper Cited as Cause of 28,000 Road Deaths a Year,” read a headline in the New York Times after the head of the National Highway Traffic Safety Administration (NHTSA)—a political appointee whose profile grew in lockstep with the prominence of the issue—claimed that two-thirds of fatalities “can be attributed to behavior associated with aggressive driving.” This became the terrifying factoid that gave the imprimatur of statistics to all the scary anecdotes. But when Fumento asked a NHTSA spokesperson to explain the number, she said, “We don’t have hard numbers but aggressive driving is almost everything. It includes weaving in and out of traffic, driving too closely, flashing your headlights—all kinds of stuff. Drinking, speeding, almost everything you can think of, can be boiled down to aggressive driving behaviors.”
With such a tenuous link to reality, the road-rage scare was not likely to survive the arrival of a major new story, and a presidential sex scandal and impeachment was certainly that. Bill Clinton’s troubles distracted reporters and the public alike, so the feedback loop was broken and the road-rage crisis vanished. In 2004, a report commissioned by the NHTSA belatedly concluded, “It is reasonable to question the claims of dramatic increases in aggressive driving and road rage. . . . The crash data suggest that road rage is a relatively small traffic safety problem, despite the volume of news accounts and the general salience of the issue. It is important to consider the issues objectively because programmatic and enforcement efforts designed to reduce the incidence of road rage might detract attention and divert resources from other, objectively more serious traffic safety problems.” A wise note of caution, seven years too late.
In 2001, the same dynamic generated what the North American media famously dubbed the Summer of the Shark. On July 6, 2001, off the coast of Pensacola, Florida, an eight-year-old boy named Jessie Arbogast was splashing in shallow water when he was savaged by a bull shark. He lost an arm but survived, barely, and the bizarre and tragic story with a happy ending became headline news across the continent. It established a new narrative, and “suddenly, reports of shark attacks—or what people thought were shark attacks—came in from all around the U.S.,” noted the cover story of the July 30, 2001, edition of Time magazine. “On July 15, a surfer was apparently bitten on the leg a few miles from the site of Jessie’s attack. The next day, another surfer was attacked off San Diego. Then a lifeguard on Long Island, N.Y., was bitten by what some thought was a thresher shark. Last Wednesday, a 12-foot tiger shark chased spearfishers in Hawaii.” Of course, these reports didn’t just “come in.” Incidents like these happen all the time, but no one thinks they’re important enough to make national news. The narrative changed that, elevating trivia to news.
The Time article was careful to note that “for all the terror they stir, the numbers remain minuscule. Worldwide, there were 79 unprovoked attacks last year, compared with 58 in 1999 and 54 the year before. . . . You are 30 times as likely to be killed by lightning. Poorly wired Christmas trees claim more victims than sharks, according to Australian researchers.” But this nod to reason came in the midst of an article featuring graphic descriptions of shark attacks and color photos of sharks tearing apart raw meat. And this was the cover story of one of the most important news magazines in the world. The numbers may have said there was no reason for alarm, but to Gut, everything about the story shouted: Be afraid!
In early September, a shark killed a ten-year-old boy in Virginia. The day after, another took the life of a man swimming in the ocean off North Carolina. The evening newscasts of all three national networks made shark attacks the top item of the week. This is what the United States was talking about at the beginning of September 2001.
On the morning of Tuesday, September 11, predators of another kind boarded four planes and murdered almost 3,000 people. Instantly, the feedback loop was broken. Reports of sharks chasing spearfishers vanished from the news and the risk of shark attack reverted to what it had been all along—a tragedy for the very few touched by it, statistical trivia for everyone else. Today, the Summer of the Shark is a warning of how easily the public—media and audience together—can be distracted by dramatic stories of no real consequence.
Storytelling may be natural. It may also be enlightening. But there are many ways in which it is a lousy tool for understanding the world we live in and what really threatens us. Anecdotes aren’t data, as scientists say, no matter how moving they may be or how high they pile up.
Criticisms like this bother journalists. It is absurd to expect that the news “should parallel morbidity and mortality statistics,” wrote Sean Collins, the producer who took exception to public-health experts’ criticisms of media coverage. “Sometimes we have to tell stories that resonate some place other than the epidemiologists’ spreadsheet.”
He’s right, of course. The stories of a young woman with breast cancer, a man paralyzed by West Nile virus, and a boy killed by a shark should all be told. And it is wonderful that the short life of Shelby Gagne was remembered in a newspaper photograph of a toddler grinning madly. But these stories of lives threatened and lost to statistically rare causes are not what the media present “sometimes.” They are standard fare. It is stories in line with the epidemiologists’ spreadsheet that are told only sometimes—and that is a major reason Gut so often gives us terrible advice.