10
The Chemistry of Fear
"Our bodies have become repositories for dozens of toxic chemicals,” begins a report from Greenpeace. “It is thought that every person on Earth is now contaminated and our bodies may now contain up to 200 synthetic chemicals.”
"Toxic Chemicals Are Invading Our Bodies,” warns a headline on the Web site of the World Wildlife Fund. When the WWF analyzed the blood of thirteen families in Europe, it discovered seventy-three man-made chemicals, while testing in the United Kingdom “found evidence of DDT and PCBs, two dangerous chemicals banned years ago,” in almost everyone. “Hazardous chemicals are found in the tissue of nearly every person on earth and exposure to them has been linked to several cancers and to a range of reproductive problems, including birth defects,” WWF says in a Web site article illustrated with a bag of blood stamped CONTAMINATED.
Some see a connection between this pollution and trends in health. In a 2006 Canadian Broadcasting Corporation (CBC) documentary, journalist Wendy Mesley recounted how she was shocked to learn, after being diagnosed with breast cancer, that a baby born today has a one in two chance of getting cancer in its lifetime. “I set out to figure out what on earth is causing that.” Smoking and sun exposure obviously contribute to cancer, she reported, as does the aging of the population. But those factors don’t explain why “childhood cancers have increased over 25 percent in the last 30 years.” Mesley had her blood tested and discovered it was contaminated with forty-four chemicals and heavy metals, including PCBs. “I’m full of carcinogens and apparently that’s normal,” she commented. Mesley interviewed Sam Epstein, a University of Illinois scientist. Cancer is “epidemic,” Epstein declared.
Messages like these are reflected in popular opinion surveys that Paul Slovic and colleagues conducted in the United States, Canada, and France. In each country, the results were roughly the same: Three-quarters of those surveyed said they “try hard to avoid contact with chemicals and chemical products in my daily life”; the same proportion said that “if even a tiny amount of a cancer-producing substance were found in my tap water, I wouldn’t drink it”; seven in ten believed that “if a person is exposed to a chemical that can cause cancer, then that person will probably get cancer some day”; and six in ten agreed that “it can never be too expensive to reduce the risk from chemicals.”
We really don’t like chemicals. We don’t even like the word. In surveys of the American public, Slovic asked people to say what comes to mind when they hear the word chemical. The results were “dominated by negative imagery,” he says. “Death.” “Toxic.” “Dangerous.” In Canadian surveys carried out by Daniel Krewski, an epidemiologist at the University of Ottawa, people were asked what thought pops into their minds when they hear the word risk. One common answer was “chemical.”
Water is a chemical, and so is mother’s milk. But that’s not how people use the word today. Chemicals are invented in laboratories and manufactured in giant industrial plants. And they are inherently dangerous, something to be avoided whenever possible. It is this cultural redefinition of “chemical” that has transformed organic produce from a niche market into a booming multibillion-dollar industry, and why the word natural has become the preferred adjective of corporate marketers, no matter what they’re selling. “The tobacco in most cigarettes contains additives drawn from a list of 409 chemicals commonly used in tobacco products,” reads an ad that appeared in American magazines in 2006. “Natural American Spirit is the only brand that features both cigarettes made with 100 percent organic tobacco as well as cigarettes made with 100 percent additive-free natural tobacco.”
This is new. Prior to the 1960s, “chemical” was associated with the bounty of science. It meant progress and prosperity, an image the DuPont Corporation sought to capitalize on in 1935 with the help of a new slogan: “Better things for better living . . . through chemistry.” New products came to market with little or no testing and were used in massive quantities with scarcely a thought for safety. It was an era in which children caught in the mist of a crop duster had their faces washed by mothers who had no idea it would take more than a damp washcloth to make their children clean again.
The end of that era came in 1962, when Rachel Carson, a marine biologist with the U.S. Fish and Wildlife Service, published a book called Silent Spring. “For the first time in the history of the world,” Carson wrote, “every human being is now subjected to contact with dangerous chemicals, from the moment of conception until death.”
Carson’s primary concern in Silent Spring was the damage being inflicted on the natural world by the indiscriminate use of synthetic chemicals, particularly DDT, a pesticide she believed was annihilating bird populations and threatening to usher in a springtime made silent by the absence of bird song. But the book likely would not have come to much if it had stopped at that. Carson further argued that the chemical stew that was crippling the natural world was also doing terrible harm to Homo sapiens. In a chapter entitled “One in Every Four,” Carson noted that the proliferation of synthetic chemicals that started in the late nineteenth century was paralleled by a rise in cancer. In the United States, Carson wrote, cancer “accounted for 15 percent of the deaths in 1958 compared with only four percent in 1900.” The lifetime risk of getting cancer would soon be a terrifying one in four and “the situation with respect to children is even more deeply disturbing. A quarter century ago, cancer in children was considered a medical rarity. Today, more American school children die of cancer than from any other disease. . . . Twelve percent of all deaths in children between the ages of one and 14 are caused by cancer.” (Emphasis in the original.)
It would be difficult to exaggerate the impact of Silent Spring. The book influenced a whole generation of policy-makers and thoughtful citizens, including Supreme Court justice William O. Douglas and President John F. Kennedy. The chemical industry launched a campaign of nasty attacks on Carson—that “hysterical woman”—but that only raised the book’s profile and damaged the industry’s image. Commissions were launched to investigate Carson’s claims and citizens’ groups formed to press for a ban on DDT and other chemicals. It was the beginning of the modern environmental movement.
In 1970, the first Earth Day was celebrated. In 1972, DDT was banned in the United States. In 1982, DuPont dropped the “through chemistry” part of its famous slogan. At the end of the century, Silent Spring routinely appeared on lists of the most influential books of all time and Time named Carson one of the “100 People of the Century.”
Carson didn’t live to see her words change the world. She died in 1964—killed by breast cancer.
Cancer is the key to understanding why Silent Spring set off the explosion it did. It wasn’t just any disease Carson warned of. The very word cancer is “unclean,” wrote a survivor in a 1959 memoir. “It is a crab-like scavenger reaching its greedy tentacles into the life of the soul as well as the body. It destroys the will as it gnaws away the flesh.” Cancer has a unique image in modern culture. It is not merely a disease—it’s a creeping killer and we fear it like no other. Paul Slovic’s surveys show cancer is the only major disease whose death toll is actually overestimated by the public. It also has a presence in the media even bigger than its substantial toll.
And yet, despite its enormous presence in our culture, cancer wasn’t always the stuff of nightmares. “In 1896,” writes Joanna Bourke in Fear: A Cultural History, “the American Journal of Psychology reported that when people were asked which diseases they feared, only five percent named cancer, while between a quarter and a third drew attention to the scary nature of each of the following ailments: smallpox, lockjaw, consumption and hydrophobia [rabies]. In the fear-stakes, being crushed in a rail accident or during an earthquake, drowning, being burned alive, hit by lightning, or contracting diphtheria, leprosy, or pneumonia all ranked higher than cancer.”
That changed after the Second World War. By 1957, cancer was such a terror that an oncologist quoted by Bourke complained that the disease had been transformed into “a devil” and the fear of cancer—“cancerophobia,” as he called it—had become a plague in its own right. “It is possible that today cancerophobia causes more suffering than cancer itself,” he wrote. With Silent Spring, Carson told people that this new specter wasn’t just in their nightmares. It was all around them, in the air they breathe, the soil they walk on, and the food they eat. It was even in their blood. Small wonder people paid attention.
Carson’s numbers suggested fear of cancer was rising rapidly simply because cancer was rising rapidly. But her numbers were misleading.
Carson’s statement that cancer “accounted for 15 percent of the deaths in 1958 compared with only four percent in 1900” makes the common mistake of simply assuming that the disease’s larger share of the total is the result of rising rates of the disease. But according to U.S. Census Bureau data, cancer was the number-seven killer in the period 1900 to 1904. Number one was tuberculosis. Number four was diarrhea and enteritis. Number ten was typhoid fever, followed by diphtheria at number eleven. Scarlet fever, whooping cough, and measles ranked lower but still took a significant toll. By the time Carson was writing in the late 1950s, vaccines, antibiotics, and public sanitation had dramatically reduced or even eliminated every one of these causes of death. (In 1958, tuberculosis had fallen from the number one spot to number fifteen. Enteritis was number nineteen. Deaths due to diphtheria, scarlet fever, and the rest had all but vanished.) With the toll of other causes dropping rapidly, cancer’s share of all deaths would have grown greatly even if the rate of cancer deaths hadn’t changed in the slightest.
The same facts take the sting out of the statement Carson thought was so important she put it in italics: “Today, more American school children die of cancer than from any other disease.” By 1962, traditional child killers such as diphtheria had been wiped out. More children were dying of cancer than any other disease not because huge numbers of children were dying of cancer but because huge numbers of children were not dying of other diseases.
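The base-rate arithmetic behind these two paragraphs can be sketched in a few lines of Python. The death rates below are invented round numbers, chosen only so the shares land near the 4 and 15 percent Carson cited; they are not actual vital statistics.

```python
# Illustrative sketch: cancer's *share* of deaths can rise sharply even when
# the cancer death *rate* is unchanged, simply because other causes decline.
# All figures are hypothetical, not real Census Bureau data.

def cancer_share(cancer_deaths_per_100k, other_deaths_per_100k):
    """Cancer deaths as a percentage of all deaths."""
    total = cancer_deaths_per_100k + other_deaths_per_100k
    return 100.0 * cancer_deaths_per_100k / total

# Era 1: tuberculosis, enteritis, diphtheria, etc. still take a huge toll.
share_early = cancer_share(cancer_deaths_per_100k=64, other_deaths_per_100k=1536)

# Era 2: same cancer rate, but vaccines, antibiotics, and sanitation
# have slashed the other causes of death.
share_late = cancer_share(cancer_deaths_per_100k=64, other_deaths_per_100k=363)

print(f"{share_early:.0f}% of deaths")  # 4% of deaths
print(f"{share_late:.0f}% of deaths")   # 15% of deaths
```

The cancer rate is identical in both eras, yet its share of all deaths nearly quadruples, which is the trap in reading Carson's percentages as evidence of a cancer surge.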
As for the title of Carson’s chapter on cancer—“One in Every Four”—it comes from a 1955 report by the American Cancer Society (ACS) predicting that the then-current estimate of cancer striking one person in five would rise to one in four. But as age is the primary risk factor for cancer, the fact that far more people were surviving childhood and living to old age would inevitably mean more people would get cancer—mostly when they were old—and so the “lifetime risk” would rise. However, the ACS noted that wasn’t the whole story. Data on cancer were still sketchy in that era, but in the previous two decades there was an apparent 200 percent rise in the incidence of cancer among women and a 600 percent rise among men, which was mostly the result of a rise in only one type of cancer. Lung cancer “is the only form of cancer which shows so definite a tendency,” the report noted.
Lung cancer started to soar in the 1920s, twenty years after the habit of smoking cigarettes took off among men in the United States and other Western countries. Women didn’t start smoking in large numbers until the 1920s and 1930s—and twenty years after that, cancer among women took off as well. When smoking rates started to decline in the 1960s and 1970s—again, first among men—so did many cancers twenty years later. This pattern mainly involves lung cancer, but other forms of cancer are also promoted by smoking: cancers of the larynx, pancreas, kidney, cervix, bladder, mouth, and esophagus.
But Carson didn’t write a word about smoking in Silent Spring. In fact, the only mention of tobacco is a reference (again italicized for emphasis) to arsenic-bearing insecticides sprayed on tobacco crops: “The arsenic content of cigarettes made from American-grown tobacco increased more than 300 percent between the years 1932 and 1952.” Carson was nodding toward a popular theory of the day. It isn’t inhaling tobacco smoke that kills. Tobacco is natural and safe. It’s the chemicals added to tobacco that kill. This theory was advocated by Wilhelm Hueper of the National Cancer Institute, who was a major influence on Carson’s views and is repeatedly quoted in Silent Spring.
At the time, that hypothesis was not unreasonable. The research linking smoking to cancer was fairly new, and very little was known about synthetic chemicals and human health. And while the rise in cancer may not have been as enormous as Carson made it out to be, it was real, and the possibility that all these new wonder chemicals were the source was truly scary.
Adding to the reasons to worry was a study conducted by a scientist named John Higginson—who later founded the World Health Organization’s agency for research on cancer—that compared cancerous tumors among Africans with those of African-Americans. Higginson discovered there was far more cancer in the second group. This indicated heredity was not among the bigger factors driving cancer. Based on this study and others, Higginson estimated that about two-thirds of all cancers had what he called an environmental cause. He didn’t mean environmental in the way that word came to be understood after Silent Spring, however. To Higginson, environmental simply meant anything that isn’t genetic. Even smoking was included. “Environment is what surrounds people and impinges on them,” he said in an interview with Science in 1979. “The air you breathe, the culture you live in, the agricultural habits of your community, the social cultural habits, the social pressures, the physical chemicals with which you come in contact, the diet, and so on.” As the science advanced, Higginson’s theory was vindicated and it became routine for cancer specialists to say that most cancers have environmental causes, but that only deepened the misunderstanding. “A lot of confusion has arisen in later days because most people have not gone back to the early literature, but have used the word ‘environment’ to mean chemicals,” Higginson said.
That mistaken belief is still widespread among environmental activists. “Cancer has been identified as an environmental disease—that is to say, it is unleashed in our cells by the absorption of toxic chemicals from our air, our water, our food,” wrote Bob Hunter, the cofounder of Greenpeace, in 2002. Hunter was battling prostate cancer at the time and would be killed by it three years later.
Higginson cited several reasons that his theory was misunderstood. One was the chemical industry’s blithe indifference to safety in the era prior to Silent Spring, which made it easy to see it as the villain of the cancer drama. And “Rachel Carson’s book was a watershed, as suddenly we became aware of the vast quantities of new chemicals, pollutants, pesticides, fibers and so forth in the environment.” Higginson also said environmentalists “found the extreme view convenient because of the fear of cancer. If they could possibly make people believe that cancer was going to result from pollution, this would enable them to facilitate the clean-up of water, of the air, or whatever it was. Now I’m all for cleaning up the air, and all for cleaning up trout streams, and all for preventing Love Canals, but I don’t think we should use the wrong argument for doing it. To make cancer the whipping boy for every environmental evil may prevent effective action when it does matter, as with cigarettes.”
Higginson was careful not to accuse environmentalists of deliberate dishonesty. It was more a case of excessive zeal. “People would love to be able to prove that cancer is due to pollution or the general environment. It would be so easy to say ‘let us regulate everything to zero exposure and we have no more cancer.’ The concept is so beautiful that it will overwhelm a mass of facts to the contrary.” That “mass of facts” included the observation that “there are few differences in cancer patterns between the polluted cities and the clean cities,” Higginson said. “You can’t explain why Geneva, a non-industrial city, has more cancer than Birmingham in the polluted central valleys of England.”
That was 1979. Since then, the “mass of facts” has grown steadily and today there is a consensus among leading cancer researchers that traces of synthetic chemicals in the environment—the stuff that turns up in blood tests of ordinary people—are not a major cause of cancer. “Exposure to pollutants in occupational, community, and other settings is thought to account for a relatively small percentage of cancer deaths,” says the American Cancer Society in Cancer Facts & Figures 2006. Of those, occupational exposures—workers in aluminum smelters, miners who dug asbestos under the unsafe conditions of the past—are by far the biggest category, responsible for perhaps 4 percent of all cancer. The ACS estimates that only 2 percent of all cancers are the result of exposure to “man-made and naturally occurring” environmental pollutants—a massive category that includes everything from naturally occurring radon gas to industrial emissions to car exhaust.
It’s critical to understand that not all carcinogenic chemicals in the environment are man-made. Far from it. To take just one example, countless plants produce carcinogenic chemicals as defenses against insects and other predators, so our food is positively riddled with natural carcinogens. They are in coffee, carrots, celery, nuts, and a long, long list of other produce. Bruce Ames, a leading cancer scientist at the University of California at Berkeley, estimates that “of all dietary pesticides people eat, 99.99 percent are natural” and half of all chemicals tested—synthetic and natural—cause cancer in high-dose lab animal experiments. So it’s highly likely that synthetic chemicals are responsible for only a small fraction of the 2 percent of cancers believed to be caused by environmental pollution; Ames believes the precise figure is much less than 1 percent.
Major health organizations agree that traces of synthetic chemicals in the environment are not a large risk factor. What is hugely important is lifestyle. Smoking, drinking, diet, obesity, and exercise: These things make an enormous difference—by most estimates, accounting for roughly 65 percent of all cancers. As early as the 1930s, researchers found that cancer rates were higher in the rich world than the poor, a division that continues to this day thanks to differences in lifestyle. “The total cancer burden is highest in affluent societies, mainly due to a high incidence of tumors associated with smoking and Western lifestyle,” the World Health Organization noted in its World Cancer Report. There’s an obvious paradox here. Those of us living in wealthy societies are enormously lucky, but it is precisely that wealth which supports a lifestyle that, in many ways, promotes cancer.
None of this has persuaded the legion of environmentalists, activists, and concerned citizens campaigning against chemicals they believe are a major cause—some would say the major cause—of cancer. The interesting question is why. When there is such widespread scientific agreement, why do people persist in believing the opposite? There are several answers, but the most profound was hinted at by John Higginson in that 1979 interview. “I think that many people had a gut feeling that pollution ought to cause cancer,” he said.
Paul Slovic’s surveys revealed how true that is. Large majorities in the United States, Canada, and France said they avoided chemicals as much as possible, they wouldn’t drink tap water that had even a tiny amount of a cancer-causing substance, and they believed that someone exposed to a chemical that can cause cancer “will probably get cancer some day.” For these people, it would seem obvious that having carcinogenic chemicals floating in our bodies is a major threat.
But that’s not how toxicologists see it. “All substances are poisons; there is none that is not a poison,” wrote Paracelsus in the sixteenth century. “The right dose differentiates a poison from a remedy.” That is the first principle of toxicology. Drink enough water and the body’s sodium and potassium levels can be thrown out of balance, inducing seizures, coma, and even death. Consume even very lethal substances in a sufficiently tiny portion and no harm will come of it—like the trillions of radioactive uranium atoms that are present in our bodies as a result of eating plants and drinking water that absorb naturally occurring uranium from the soil. What matters isn’t whether a substance is in us or not. It’s how much is in us. “It’s important for people to know that the amounts to which they’re exposed is the first thing they should think about,” says Lois Swirsky Gold, senior scientist at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory and director of the Carcinogenic Potency Project at the University of California at Berkeley.
That perspective changes everything. When Paul Slovic surveyed toxicologists in the United States, the United Kingdom, and Canada, he found that large majorities said they do not try to avoid chemicals in their daily lives, they are not bothered by the presence of trace contaminants, and they do not agree that any exposure to a carcinogen means that the person exposed is likely to get cancer. The quantities of synthetic chemicals that turn up in blood analysis are almost always incredibly tiny. They are measured in parts per billion, sometimes even parts per trillion. To most toxicologists, they are simply too tiny to worry about.
But that doesn’t make intuitive sense. Humans instinctively recoil from contamination without the slightest regard for the amounts involved. Paul Slovic calls this “intuitive toxicology.” It can be traced back to our ancient ancestors. Every time they found drinking water, they had to decide if the water was safe. Every time they picked a berry off a bush or cut up a carcass, they had to judge whether they could eat what was in their hands. Every time someone was taken with fever, they had to think about how to help without becoming sick themselves, and when death came, they had to safely dispose of the corpse and the unlucky person’s possessions. We’ve been dealing with dangerous substances for a very long time.
Consider one of the most threatening contaminants our ancestors faced—human waste. Disease loves it. The entire life cycle of cholera, for example, hinges on feces: Someone who drinks water infected with the bacterium will expel huge quantities of watery diarrhea that can spread the disease to any water sources it touches. And so throughout history, avoiding all contact with feces, or anything that had contact with feces, has been absolutely essential for survival. There could be no exceptions. Any contact with any quantity was dangerous and must be avoided: Those who followed this rule tended to survive better than those who didn’t, and so it became hardwired instinct.
We can understand the toxicological principle that “the poison is in the dose” rationally. But Gut doesn’t get it: it doesn’t make intuitive sense, and that can lead to some very odd conclusions. In The Varieties of Scientific Experience, the late astronomer Carl Sagan tells how, when it appeared that the Earth would pass through the long tail of Halley’s comet in 1910, “there were national panics in Japan, in Russia, in much of the southern and midwestern United States.” An astronomer had found that comet tails include, among other ingredients, cyanide. Cyanide is a deadly poison. So people concluded that if the Earth were to pass through a comet tail, everyone would be poisoned. “Astronomers tried to reassure people,” Sagan recounted. “They said it wasn’t clear that the Earth would pass through the tail, and even if the Earth did pass through the tail, the density of [cyanide] molecules was so low that it would be perfectly all right. But nobody believed the astronomers. . . . A hundred thousand people in their pajamas emerged onto the roofs of Constantinople. The pope issued a statement condemning the hoarding of cylinders of oxygen in Rome. And there were people all over the world who committed suicide.”
Our ancestors could analyze the world with only their eyes, nose, tongue, and fingers, and intuitive toxicology makes sense for humans limited to such tools. But science revealed what was in a comet’s tail. It also discovered contamination of earth, water, and air in quantities smaller than the senses can detect. In fact, today, we have technology that can dissect the components of drinking water to the level of one part per billion—equivalent to a grain of sugar in an Olympic-size swimming pool—while even finer tests can drill down to the level of parts per trillion. Gut hasn’t a clue what numbers like that mean. They’re even a stretch for a fully numerate Head—which is why, to even begin to understand them, we have to use images like a grain of sugar in a swimming pool.
The synthetic chemicals in our bodies that disturb people are typically found only in these almost indescribably minute quantities. They are traces, mere whispers, like the radioactive uranium we consume all our lives in blissful ignorance of its benign presence. It’s true that many of those chemicals can cause cancer and other horrible effects, but the science on which those conclusions are based almost never involves these sorts of traces. Quite the contrary.
The first step in testing for a carcinogenic effect is to stuff rats and mice with so much of the suspect substance that they die. This tells researchers that the given quantity is above what is called the maximum tolerated dose (MTD). So it’s reduced a little and injected into some more animals. If they live, the researchers then know the MTD. In the next step, fifty mice are injected with the MTD of the chemical. Another batch of fifty is injected with one-tenth or one-half the MTD. Finally, fifty very lucky mice are put in a third group that isn’t injected with anything. This routine is followed day after day for the entire natural life of the animals—usually about two years. Then scientists cut all the mice open and look for cancerous tumors or other damage. In parallel with this project, the whole procedure is done with other groups of mice and at least one other species, usually rats.
These tests find lots of cancer. Almost a third of rodents develop tumors, even if they aren’t injected with anything. So to identify the chemical as a carcinogen, the animals injected with it must have cancer at even higher rates. And they very often do. “Half of everything tested is a carcinogen in high-dose tests,” says Lois Swirsky Gold. But the relevance of these findings to trace contamination is doubtful because the quantities involved are so spectacularly different. “With pesticide residues, the amounts [found in the body] are a hundred thousand or a million or more times below the doses they gave to rodents in the cancer tests,” says Swirsky Gold. There’s also the question of whether bodies of rats and mice react the same way to the presence of a substance as the body of a human. Lab tests showed, for example, that gasoline causes cancer in male rats, but when scientists did further research to figure out exactly how gasoline was causing cancer, they discovered that the key mechanism involved the binding of a chemical in the gasoline to a protein found in the kidneys of male rats—a protein that doesn’t exist in humans. Unfortunately, rigorous analysis to determine precisely how a chemical causes cancer in lab animals hasn’t been done for most chemicals deemed carcinogens, so while there’s a very long list of chemicals that have been shown to cause cancer in mice and rats, it’s not clear how many of the chemicals on the list actually cause cancer in humans.
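The dose gap Swirsky Gold describes is simple to put in arithmetic terms. Both figures in this sketch are hypothetical round numbers, not measured doses for any real chemical:

```python
# Sketch of the gap between a rodent bioassay dose and trace human exposure.
# Both figures are invented round numbers, for illustration only.

rodent_dose_mg_per_kg_day = 100.0   # a daily dose near a maximum tolerated dose (MTD)
human_trace_mg_per_kg_day = 0.0001  # a residue-level daily intake

gap = rodent_dose_mg_per_kg_day / human_trace_mg_per_kg_day
print(f"test dose is {gap:,.0f} times the trace exposure")  # 1,000,000 times
```

At ratios like this, a tumor found at the test dose says very little by itself about risk at the trace dose, which is why the extrapolation is contested.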
A second tool scientists use in deciding if a chemical is carcinogenic is epidemiology—studies of broad populations that ask whether people exposed to the chemical are more likely to get cancer. The field has made huge contributions to health over the last century and a half. Unfortunately, when epidemiology finds that one thing is associated with another, it is often not because the first thing causes the second. Criminals and tattoos, for example, are highly correlated, but tattoos do not cause crime. And so when epidemiologists showed shipyard workers who handled asbestos had higher rates of cancer, it was a strong clue that asbestos causes cancer, but it was not final proof. That came with later work. “Epidemiology is fantastically difficult,” Bruce Ames says. “You’re talking about studying humans, and there are a million confounders. A study will say this and another will say the opposite.” The steady proliferation of studies saying that one thing is “linked” to another—that there is a correlation, but nothing more—invites abuse. “You can easily hype it up. Our local paper has a scare story every couple of weeks. They like those scare stories. But I don’t believe any of them.” Ames cites the example of a controversy in California’s Contra Costa County. “There are a lot of refineries and there is more lung cancer. Ah, the refineries are causing the lung cancer. But who lives around refineries? Poor people. And who smokes more? Poor people. And when you correct for smoking, there’s no extra risk in the county.”
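Ames's refinery example can be turned into a toy simulation. Every rate below is invented: in this imaginary county, smoking alone causes cancer, but smoking is also more common in the neighborhoods near the refineries, so a naive comparison blames proximity.

```python
# Toy simulation of a confounder: proximity to refineries *correlates* with
# cancer only because smoking (the real cause) is more common nearby.
# All probabilities are invented for illustration.
import random

random.seed(0)

N = 200_000
counts = {}  # (near_refinery, smokes) -> (cancer_cases, people)
for _ in range(N):
    near = random.random() < 0.3
    smokes = random.random() < (0.5 if near else 0.2)      # the confounder
    cancer = random.random() < (0.15 if smokes else 0.03)  # smoking is the only cause
    cases, total = counts.get((near, smokes), (0, 0))
    counts[(near, smokes)] = (cases + cancer, total + 1)

def rate(near=None, smokes=None):
    """Cancer rate in the subgroup matching the given filters."""
    cases = total = 0
    for (n, s), (c, t) in counts.items():
        if (near is None or n == near) and (smokes is None or s == smokes):
            cases += c
            total += t
    return cases / total

print(f"crude: near {rate(near=True):.3f} vs far {rate(near=False):.3f}")
print(f"smokers: near {rate(True, True):.3f} vs far {rate(False, True):.3f}")
print(f"nonsmokers: near {rate(True, False):.3f} vs far {rate(False, False):.3f}")
```

Under these assumptions the crude comparison shows markedly more cancer near the refineries (roughly 9 percent versus 5), but once you compare smokers with smokers and nonsmokers with nonsmokers, the gap vanishes—Ames's "correct for smoking" in miniature.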
The limitations of animal testing, epidemiology, and other forms of evidence are why regulatory agencies have classification systems for carcinogens. The terms vary, but they typically label a substance a “potential,” “likely,” or “known” carcinogen. These levels indicate degrees of certainty, based on what the evidence looks like when it is all assembled. Scientists today place much less weight on any one piece of evidence, and high-dose tests on lab animals have particularly fallen out of favor. They’re still treated as valid evidence, but they don’t carry nearly as much weight as laypeople think they do.
Aside from Gut’s ancient aversion to contamination, culture also drives the perception that exposure to traces of synthetic chemicals is dangerous. Corporate marketers love to label products “natural” because they understand that natural is equated with wholesome and nurturing—and safe. “People have this impression that if it’s natural, it can’t be harmful, and that’s a bit naive,” says Bruce Ames. “The other night in Berkeley, I noticed they were selling a bag of charcoal briquettes and it said ‘no artificial additives, all natural.’ And it’s pure carcinogen!”
In Daniel Krewski’s 2004 survey, “natural health products” were deemed by far the safest of the thirty presented—safer even than X-rays and tap water. Prescription drugs were seen to be riskier, while pesticides were judged to be more dangerous than street crime and nuclear power plants. It’s not hard to guess at the thinking behind this, or to see how dominated it is by Gut. “Natural” and “healthy” are very good things so natural health products must be safe. Prescription drugs save lives, so while they may not be as safe as “natural health products”—everyone knows prescription drugs can have adverse effects—they are still good and therefore relatively safe. But “pesticides” are “man-made” and “chemical” and therefore dangerous. The irony here is that few of the “natural health products” that millions of people happily pop in their mouths and swallow have been rigorously tested to see if they work, and the safety regulations they have to satisfy are generally quite weak—unlike the laws and regulations governing prescription drugs and pesticides.
Many companies—and whole industries, in the case of organic foods—actively promote the idea that chemicals are dangerous. “What you don’t know about chlorine could hurt you,” warns the Web site of a Florida company selling water purification systems that remove by-products of chlorine treatment that may raise the risk of cancer by some infinitesimal degree. On occasion, politicians also find it convenient to hype the risk of chemical contamination. “Arsenic is a killer,” declared U.S. congressman Henry Waxman. “If there is one thing we all seem to agree on it is that we do not want arsenic in our drinking water.” Arsenic happens to be common in the natural environment, and tiny levels of it are often found in drinking water—but in the spring of 2001, Waxman suddenly found that fact intolerable after the Bush administration suspended the existing regulation on permissible levels of arsenic in water and ordered more study. This was a dispute over how much arsenic content was safe, not whether there should be any at all, but the congressman and his Democratic colleagues saw obvious advantage in framing it as Bush-wants-to-put-poison-in-the-water.
The media, in pursuit of the dramatic story, are another contributor to prevailing fears about chemicals. Robert Lichter and Stanley Rothman scoured stories about cancer appearing in the American media between 1972 and 1992 and found that tobacco was only the second-most mentioned cause of cancer—and it was a distant second. Man-made chemicals came first. Third was food additives. Number 6 was pollution; 7, radiation; 9, pesticides; and 12 was dietary choices. Natural chemicals came sixteenth. Dead last on the list of twenty-five—mentioned in only nine stories—was the most important factor: aging. Lichter and Rothman also found that of the stories that expressed a view on whether the United States was facing a cancer epidemic, 85 percent said it was.
This has a predictable effect on public opinion. In November 2007, the American Institute of Cancer Research (AICR) released the results of a survey in which Americans were asked about the causes of cancer. The institute noted with regret that only 49 percent of Americans identified a diet low in fruits and vegetables as a cause of cancer; 46 percent said the same of obesity; 37 percent, alcohol; and 36 percent, diets high in red meat. But 71 percent said pesticide residues on food cause cancer. “There’s a disconnect between public fears and scientific fact,” said an AICR spokesperson.
Lichter and Rothman argue that the media’s picture of cancer is the result of paying too little attention to cancer researchers and far too much to environmentalists. As John Higginson noted almost thirty years ago, the idea that synthetic chemicals cause cancer is “convenient” for activists opposed to chemical pollution. If DDT had threatened only birds, Rachel Carson would probably never have created the stir she did with Silent Spring. It’s the connection between pollution and human health that makes the environment a personal concern, and connecting synthetic chemicals to health is easy because the chemicals are everywhere, and Gut tells us they must be dangerous no matter how tiny the amounts may be. Add the explosive word “cancer” and you have a very effective way to generate support for environmental action.
However laudable the ultimate goal, many experts are not pleased with what environmentalists have been telling the public about chemicals and health. “This is irresponsible, hysterical scaremongering,” said Alan Boobis, a toxicologist with the faculty of medicine of Imperial College, London, in 2005. Boobis and other leading British scientists were furious that several environmental organizations, particularly the World Wildlife Fund, were massively publicizing the presence of “hazardous chemicals” in blood, food, and even babies’ umbilical cords. “Most chemicals were found at a fraction of a part per billion. There is no evidence such concentrations pose any threat to people’s health,” Boobis told The Guardian. “The message they are putting across is misleading, and deliberately so,” David Coggon, a specialist in occupational and environmental causes of cancer and other diseases at the University of Southampton, told the BBC. “By and large, I think people shouldn’t be worried. Most chemicals will not do any great harm at these very low levels,” added Richard Sharpe, an expert on endocrine disrupters at the Medical Research Council’s Human Reproductive Unit in Edinburgh. “You have to put this in perspective.”
Campaigns like the WWF’s are common in many countries. “Pollutants Contaminate Blood of Federal Politicians” read the headline of a 2007 press release from Canada’s Environmental Defence announcing that testing had found dozens of “harmful pollutants” in the blood and urine of several leading Canadian politicians. Enormous media coverage followed, most of it focused heavily on the scary nature of the chemicals and giving little mention, if any, to the quantities involved. It was another success for a campaign Environmental Defence calls “Toxic Nation,” whose slogan is “Pollution. It’s In You.” There is extensive information on the “Toxic Nation” Web site about the abundance of synthetic chemicals in our bodies, but there’s almost no mention that the presence in the body of tiny quantities of “dangerous” chemicals may not actually be dangerous. Only in a glossary of technical terms, under the definition of toxic, does the Web site let slip that while chemicals may cause harm “the quantities and exposures necessary to cause these effects can vary widely.”
Such sins of omission are common. “Many pesticides that have been shown to cause cancer in laboratory animals are still being used,” says the Web site of the Natural Resources Defense Council, a leading American organization. There is, of course, no concern expressed about the roughly one-half of natural chemicals that have also been shown to cause cancer in laboratory animals. Similarly, the Environmental Working Group (EWG), a Washington, D.C., organization, says on its Web site that “there is growing consensus in the scientific community that small doses of pesticides and chemicals can adversely affect people, especially during vulnerable periods of fetal development and childhood when exposures can have long-lasting effects.” It’s true that scientists agree these chemicals can do harm, and there is no definition of “small doses” in this statement, so in a very narrow sense this statement isn’t wrong. But EWG has widely publicized—always with a tone of alarm—the fact that traces of these chemicals exist in our bodies, and so it is making it easy for people to conclude that there is “a growing consensus in the scientific community” that the tiny amounts of these chemicals in our bodies “can adversely affect people.” And that’s false.
There’s also an essay on the Web site of the Worldwatch Institute that warns readers that “the 450 million kilograms of pesticides that U.S. farmers use every year have now contaminated almost all of the nation’s streams and rivers, and the fish living in them, with chemicals that cause cancer and birth defects.” Left unmentioned is the fact that the level of contamination in most places is believed by most scientists to be far too low to actually cause these harms to humans, which would explain why the United States is not experiencing massive increases in cancers and birth defects despite this massive contamination.
But then, the existence of an “epidemic of cancer” is often taken by environmentalists to be such an obvious fact that its existence hardly needs to be demonstrated. In a 2005 newspaper column, Canada’s David Suzuki—a biologist and renowned environmentalist—blamed chemical contamination for the “epidemic of cancer afflicting us.” His proof consisted of a story about catching a flounder that had cancerous tumors and the fact that “this year, for the first time, cancer has surpassed heart disease as our number-one killer.” But it is not true, as Suzuki seems to assume, that cancer’s rise to leading killer means cancer is killing more people. It is possible that heart disease is killing fewer people. And that turns out to be the correct explanation. Statistics Canada reported that the death rates of both cardiovascular disease and cancer are falling, but “much more so for cardiovascular disease.”
The Cancer Prevention Coalition (CPC), an activist group headed by Sam Epstein, made a more determined effort in a 2007 press release. “The incidence of cancer has escalated to epidemic proportions, striking most American families,” the release says. “Cancer now impacts about 1.3 million Americans annually and kills 550,000; 44 percent of men and 39 percent of women develop cancer in their lifetimes. While smoking-related cancers have decreased in men, there have been major increases in non-smoking cancers in adults as well as childhood cancers.” Elsewhere, the CPC puts it a little more colorfully: “Cancer strikes nearly one in every two men and more than one in every three women.”
What’s left out here is the simple fact that cancer is primarily a disease of aging, a fact that has a profound effect on cancer statistics. The rate of cancer deaths in Florida, for example, is almost three times higher than in Alaska, which looks extremely important until you factor in Florida’s much older population. “When the cancer death rates for Florida and Alaska are age-adjusted,” notes a report from the American Cancer Society, “they are almost identical.”
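The arithmetic behind age adjustment is worth making concrete. Here is a minimal sketch in Python, using invented numbers rather than the actual Florida and Alaska figures: two populations with identical age-specific death rates but different age structures yield very different crude rates, while weighting both against a shared standard population makes the gap disappear.

```python
# A toy sketch of direct age standardization. All numbers are invented
# for illustration; they are not the real Florida or Alaska figures.

# Hypothetical age-specific cancer death rates per 100,000, assumed
# identical in both places:
age_specific = {"0-44": 20.0, "45-64": 300.0, "65+": 1200.0}

# Hypothetical population age structures: an "old" state and a "young" one.
old_state = {"0-44": 0.45, "45-64": 0.30, "65+": 0.25}
young_state = {"0-44": 0.70, "45-64": 0.24, "65+": 0.06}

# A shared standard population used as the reference for adjustment.
standard = {"0-44": 0.60, "45-64": 0.25, "65+": 0.15}

def weighted_rate(weights, rates):
    """Deaths per 100,000, weighted by the given population shares."""
    return sum(weights[age] * rates[age] for age in rates)

# Crude rates differ wildly because of age structure alone...
print(f"crude, older state:   {weighted_rate(old_state, age_specific):.0f} per 100,000")    # 399
print(f"crude, younger state: {weighted_rate(young_state, age_specific):.0f} per 100,000")  # 158

# ...but the age-adjusted rate is the same for both states, because
# their underlying age-specific rates are identical.
print(f"age-adjusted, both:   {weighted_rate(standard, age_specific):.0f} per 100,000")     # 267
```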
Lifetime risk figures—like “one in every two men” or Rachel Carson’s famous “one in every four”—ignore the role of aging and don’t take into account our steadily lengthening life spans. To see how deceptive that is, consider that if every person lived to one hundred, the lifetime risk of cancer would likely rise to almost 100 percent. Would we say, in shocked tones, that “cancer will strike nearly every person”? Probably not. I suspect we’d consider it cause for celebration. Conversely, if some new plague ensured that we all died at thirty-five, the lifetime risk of getting cancer would fall spectacularly, but no one would be dancing in the streets.
In the 1990s, as worries about breast cancer rose, activists often said that “one in eight” American women would get breast cancer in their lifetimes. That was true, in a sense. But what wasn’t mentioned was that to face that full one-in-eight risk, a woman has to live to be ninety-five. The numbers look very different at younger ages: The chance of getting breast cancer by age seventy is 1 in 14 (or 7 percent); by age fifty, it is 1 in 50 (2 percent); by age forty, 1 in 217 (0.4 percent); by age thirty, 1 in 2,525 (0.03 percent). “To emphasize only the highest risk is a tactic meant to scare rather than inform,” Russell Harris, a cancer researcher at the University of North Carolina, told U.S. News & World Report.
Aging shouldn’t affect data on childhood cancers, however, and those who claim chemical contamination is a serious threat say childhood cancers are soaring. They are up “25 percent in the last 30 years,” journalist Wendy Mesley said in her CBC documentary. That statistic is true, to a degree, but it is also a classic example of how badly presented information about risk can mislead.
Mesley is right that the rate of cancer among Canadian children is roughly 25 percent higher now than it was thirty years ago. But what she didn’t say is that the increase occurred between 1970 and 1985 and then stopped. “The overall incidence of childhood cancer has remained relatively stable since 1985,” says the 2004 Progress Report on Cancer Control from the Public Health Agency of Canada.
It is also misleading to note the relative increase in risk—25 percent— but not the actual size of the risk. In 1970, there were a little more than 13 cases of cancer for every 100,000 children. It then rose to a peak of 16.8 cases per 100,000 children—meaning the annual risk of a child getting cancer was 0.0168 percent. Now, put all this information together and it sounds like this: In 1970, the risk of childhood cancer was very small. It increased until 1985 but has remained stable ever since. Despite the increase, the risk continues to be very small. “Cancer in children is rare, accounting for only about one per cent of cases,” notes the Progress Report. That is hardly the description of an epidemic. In addition, the rate of childhood cancer deaths has fallen steadily over the last three decades. In 1970, about 7 children per 100,000 were killed by cancer; thirty years later, the rate had dropped to 3.
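The gap between the relative and absolute framings of those figures comes down to a few lines of arithmetic. In this sketch, the 1970 baseline of 13.4 per 100,000 is an illustrative assumption standing in for “a little more than 13 cases.”

```python
# Childhood cancer rates from the text: "a little more than 13" cases
# per 100,000 children in 1970 (13.4 here is an illustrative assumption),
# rising to a peak of 16.8 per 100,000.
rate_1970 = 13.4 / 100_000
rate_peak = 16.8 / 100_000

relative_increase = (rate_peak - rate_1970) / rate_1970
absolute_increase = rate_peak - rate_1970

# The same change, framed two ways:
print(f"relative increase: {relative_increase:.0%}")                              # 25%
print(f"absolute increase: {absolute_increase * 100_000:.1f} cases per 100,000")  # 3.4
print(f"annual risk at the peak: {rate_peak:.4%}")                                # 0.0168%
```

A 25 percent relative increase sounds dramatic; an extra 3.4 cases per 100,000 children, against an annual risk of 0.0168 percent, does not.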
American figures are almost identical. According to a 1999 publication of the U.S. National Cancer Institute, childhood cancers rose from 1975 until 1991 and then declined slightly. In 1975, there were about 13 cases per 100,000 children. In 1990, that had risen to 16. The death rate dropped steadily, from 5 per 100,000 children in 1975 to 2.6 two decades later.
British statistics show the same trend. From 1962 until 1971, the rate of childhood cancer cases was flat. It then rose steadily until 1995 but appears to have stabilized. In 1971, there were 10.1 cases per 100,000 children. In 1995, there were 13.6. The death rate fell steadily throughout this period, from 7.8 per 100,000 children to 3.2. Medicine is steadily breaking cancer’s grip on children.
Still, even if the risk of children getting cancer has stopped rising, it did rise. What could have caused that rise, if not chemical contamination? Here, it’s very important to remember that numbers are only as good as the methods used to calculate them. All statistics have strengths and weaknesses. Cancer data are a perfect demonstration of that universal truth.
There are two ways to measure how much cancer there is in society. One is simply to count deaths caused by cancer. Most such deaths are unmistakable and they’re carefully recorded, which makes death stats a reliable way to track the disease’s toll. Or at least they were in the past. Treatments have improved dramatically in recent decades and so, increasingly, victims survive who would not have in the past. As a result, cancer death rates may decline even if the actual cancer rate doesn’t, and so death statistics tend to underestimate the reality.
The other way of tracking cancer is to use what are called incidence rates. These are based simply on the number of people diagnosed with cancer, and they would seem to more accurately reflect the real level of cancer in society. But incidence rates can be tricky, too. If physicians get better at diagnosing a cancer, the number of diagnosed cases will rise even if the actual prevalence of cancer doesn’t. Even changes in how bureaucrats file paperwork and collect numbers can artificially push up incidence numbers. What really throws the numbers off, however, are screening programs. Many organs—including the breast, prostate, thyroid, and skin—are known to produce cancers that just sit there. They don’t do any damage, don’t progress, and don’t cause any symptoms. Those who have them may live out their lives without ever knowing of their existence. But screening programs—such as blood tests for prostate cancer and mammograms for breast cancer—can detect both the aggressive and the irrelevant cancers, and so when such programs are introduced, or improved, incidence rates soar. When that happens, it doesn’t mean more people are getting cancer, only that more cancer is being uncovered.
So to get a sense of what’s really happening, experts look at both death and incidence statistics. If they rise together, cancer is probably on the rise. They’re also reliable indicators if they go down together. But they often point in opposite directions—as they did with childhood cancers in the 1970s and 1980s. To sort things out when that happens, experts have to investigate from every angle and consider the relative weight of the many factors that may be pushing the numbers one way or another. And even then there are likely to be some uncertainties because that’s the best science can do.
That kind of careful weighing seems to have produced the answer to the rise in childhood cancers that ended in the mid-1980s. “Improvements in the efficiency of systems for the diagnosis and registration of cancer may have contributed to the increase in registration rates,” noted Cancer Research UK. “It has also probably become easier to track and record the diagnosis of new patients as treatment has become more centralized. The amount of real change, if any, in the underlying incidence rates is not clear.”
Variations between sexes, populations, and countries make it difficult to generalize about cancer in adults. Even more important, cancer isn’t really one disease; it is many. But still, the broad outlines are clear.
The age-adjusted rates of deaths caused by most types of cancer have been falling for many years in developed countries. (An important exception is smoking-related cancers among those groups in which smoking rates have yet to fall.) As for incidence rates, they rose in the 1970s, rose more rapidly in the 1980s, but then leveled off in the last ten or fifteen years. It is not a coincidence that the period that saw the most rapid increases—the 1980s—also saw the introduction of major screening programs. There is a consensus among researchers that a big chunk of the rise in incidence rates over the last three decades was the result of better screening, diagnosis, and collection of statistics.
And, in any event, the rise in incidence rates has generally stopped. In the United States, the numbers for the last several years—both incidence and death rates—have been remarkably encouraging. Even the total number of deaths caused by cancer has fallen, which is pretty amazing considering that the American population is growing and aging. Summing up the trends, Bruce Ames says, “If you take away the cancer due to smoking and the extra cancer due to the fact that we’re living longer, then there’s no increase in cancer.”
Advocates of the chemicals-are-killing-us claim respond with one final sally. We don’t know, they say. “The worrying reality is that no one really knows what effects these chemicals have on humans,” reads a Greenpeace report. “We are all unwittingly taking part in an unregulated global experiment that needs to be stopped,” says the World Wildlife Fund.
There has been an enormous amount of scientific study of chemicals over the last half century, but it’s still true that a great many synthetic chemicals have not been rigorously analyzed, either separately or in their combined effects with other chemicals. There really is lots we don’t know. That’s particularly true in the case of the raging controversy over the endocrine disruptor hypothesis—the idea that trace amounts of synthetic chemicals such as bisphenol A can throw the body’s hormones off balance, lowering sperm counts, causing cancer, and maybe much more. The hypothesis first got widespread attention in the mid-1990s, and scores of scientists have been studying the issue for more than a decade, but still the science remains contradictory and unsettled. Regulatory agencies in Europe, the United States, and Japan have reviewed the evidence on bisphenol A and decided there is no reason to ban the chemical, but the investigation goes on. Slow and complicated: That’s science at work.
Fine, many people say. But until more is known, the sensible thing to do is err on the side of caution by banning or restricting suspected chemicals. Better safe than sorry, after all.
This attitude has been enshrined in various laws and regulations as the precautionary principle. There are many definitions of that principle, but one of the most influential comes from Principle 15 of the Rio Declaration on Environment and Development: “Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.” Like many international resolutions, this is full of ambiguity. What qualifies as “serious” damage? What are “cost-effective measures”? And while it may be clear that we don’t need “full scientific certainty,” how much evidence should we have before we act? And this is only one of more than twenty definitions of the precautionary principle floating about in regulations and laws. Many are quite different and some are contradictory on certain points. As a result, there is a vast and growing academic literature grappling with what exactly “precaution” means and how it should be implemented. Politicians and activists like to talk about the precautionary principle as if it were a simple and sensible direction to err on the side of caution. But there’s nothing simple about it.
Nor is it all that sensible. As law professor Cass Sunstein argues in Laws of Fear, the precautionary principle is more a feel-good sentiment than a principle that offers real guidance about regulating risks. Risks are everywhere, he notes, so we often face a risk in acting and a risk in not acting—and in these situations, the precautionary principle is no help.
Consider chlorine. Treat drinking water with it and it creates by-products that have been shown to cause cancer in lab animals in high doses and may increase the cancer risk of people who drink the water. There’s even some epidemiological evidence that suggests the risk is more than hypothetical. So the precautionary principle would suggest we stop putting chlorine in drinking water. But what happens if we do that? “If you take the chlorine out of the drinking water, as was done in South America, you end up with an epidemic of 2,000 cases of cholera,” says Daniel Krewski. And cholera is far from the only threat. There are many water-borne diseases, including typhoid fever, a common killer until the addition of chlorine to drinking water all but wiped it out in the developed world early in the twentieth century. So, presumably, the precautionary principle would say we must treat drinking water with chlorine. “Because risks are on all sides, the Precautionary Principle forbids action, inaction, and everything in between,” writes Sunstein. It is “paralyzing; it forbids the very steps that it requires.”
So should we ban or restrict synthetic chemicals until we have a full understanding of their effects? This attractively simple idea is a lot more complicated than it appears. If pesticides were banned, agricultural yields would decline. Fruits and vegetables would get more expensive and people would buy and eat fewer of them. But cancer scientists believe that fruits and vegetables can reduce the risk of cancer if we eat enough of them, which most people do not do even now. And so banning pesticides in order to reduce exposure to carcinogens could potentially result in more people getting cancer.
Consider also that scientists are at least as ignorant of natural chemicals as they are of the man-made variety. And since there is no reason to assume—contrary to what our culture tells us—that natural is safe and man-made dangerous, that suggests we should worry as much about natural chemicals, or perhaps even more because natural chemicals vastly outnumber their man-made cousins. “The number of naturally occurring chemicals present in the food supply—or generated during the processes of growing, harvesting, storage and preparation—is enormous, probably exceeding one million different chemicals,” notes a 1996 report by the U.S. National Academy of Sciences. Everyone who digs into a delicious meal of all-natural, organically grown produce is swallowing thousands of chemicals whose effects on the human body aren’t fully understood and whose interaction with other chemicals is mysterious. And remember that of the natural chemicals that have been tested, half have been shown to cause cancer in lab animals. If we were to strictly apply the banned-until-proven-safe approach to chemicals, there would be little left to eat.
Partisans—both enviros and industry—prefer to ignore dilemmas like these and cast issues in much simpler terms. In an article entitled “Lessons of History,” the World Wildlife Fund tells readers that when the pesticide DDT “was discovered by Swiss chemist Paul Muller in 1939, it was hailed as a miracle. It could kill a wide range of insect pests but seemed to be harmless to mammals. Crop yields increased and it was also used to control malaria by killing mosquitoes. Muller was awarded the Nobel Prize in 1948. However, in 1962, scientist Rachel Carson noticed that insect and worm-eating birds were dying in areas where DDT had been sprayed. In her book Silent Spring, she issued grave warnings about pesticides and predicted massive destruction of the planet’s ecosystems unless the ‘rain of chemicals’ was halted.” This hardly looks like a dilemma. On the one hand, DDT was used to increase crop yields and control malaria, which is nice but hardly dramatic stuff. On the other hand, it threatened “massive destruction.” It’s not hard to see what the right response is.
Unfortunately, there’s quite a bit wrong with the WWF’s tale (the least of which is saying DDT was discovered in 1939, when it was first synthesized in 1874 and its value as an insecticide revealed in 1939). It doesn’t mention, for example, that the first large-scale use of DDT occurred in October 1943, when typhus—a disease spread by infected mites, fleas, and lice—broke out in newly liberated Naples. Traditional health measures didn’t work, so 1.3 million people were sprayed with the pesticide. At a stroke, the epidemic was wiped out—the first time in history that a typhus outbreak had been stopped in winter. At the end of the war, DDT was widely used to prevent typhus epidemics among haggard prisoners, refugees, and concentration-camp inmates. It’s rather sobering to think that countless Holocaust survivors owe their lives to an insecticide that is reviled today.
As for malaria, DDT did more than “control” it. “DDT was the main product used in global efforts, supported by the [World Health Organization], to eradicate malaria in the 1950s and 1960s,” says a 2005 WHO report. “This campaign resulted in a significant reduction in malaria transmission in many parts of the world, and was probably instrumental in eradicating the disease from Europe and North America.” Estimates vary as to how many lives DDT helped save, but it’s certainly in the millions and probably in the tens of millions.
In recent years, however, anti-environmentalists have constructed an elaborate mythology around the chemical: DDT is perfectly harmless and absolutely effective; DDT single-handedly wiped out malaria in Europe and North America; DDT could do the same in Africa if only eco-imperialists would let Africans use the chemical to save their children. For the most part, this mythology improperly belittles the proven harms DDT inflicted on nonhuman species, particularly birds, and it ignores the abundant evidence—which started to appear as early as 1951—that mosquitoes rapidly develop resistance to the insecticide. In fact, the indiscriminate spraying of DDT on farm fields during the 1950s contributed to the development of mosquitoes’ resistance, so the banning of the pesticide for agricultural use actually helped preserve its value as a malaria-fighting tool.
The truth about DDT is that the questions about how to deal with it were, and are, complex. So what does the precautionary principle tell us about this most reviled of chemicals? Well, once typhus and malaria have been removed from the equation, it would probably come down on the side of a ban. But what does “precaution” mean when insect-borne disease is still very much present? WHO estimates that malaria kills one million people a year, and contributes to another two million deaths. Most of the dead are children, and most of those children are African. If DDT is used to fight malaria in Africa, it carries certain risks. And there are risks to not using it. So how do we decide? The precautionary principle is no help.
“Why, then, is the Precautionary Principle widely thought to give guidance?” asks Cass Sunstein. The answer is simple: We pay close attention to some risks while ignoring others, which very often causes the dilemma of choosing between risks to vanish. If we ignore malaria, it seems only prudent to ban DDT. Ignore the potential risks of natural chemicals, or the economic costs, and it becomes much easier to demand bans on synthetic chemicals. Ignore the threat of fire and it seems obvious that the flame-retardant chemicals polluting our blood must be eliminated. And if we don’t know anything about typhoid or cholera, it’s easy to conclude that we should stop treating water with a chemical that produces a known carcinogen. “Many people who are described as risk averse are, in reality, no such thing. They are averse to particular risks, not risks in general,” Sunstein writes. And it’s not just individuals who have blind spots. “Human beings, cultures and nations often single out one or a few social risks as ‘salient,’ and ignore the others.”
But how do people choose which risks to worry about and which to ignore? Our friends, neighbors, and coworkers constantly supply us with judgments that are a major influence. The media provide us with examples—or not—that Gut feeds into the Example Rule to estimate the likelihood of a bad thing happening. Experience and culture color hazards with emotions that Gut runs through the Good-Bad Rule. The mechanism known as habituation causes us to play down the risks of familiar things and play up the novel and unknown. If we connect with others who share our views about risks, group polarization can be expected—causing our views to become more entrenched and extreme.
And of course, for risks involving chemicals and contamination, there is “intuitive toxicology.” We are hardwired to avoid contamination, no matter how small the amounts involved. With the culture having defined chemical to mean man-made chemical, and man-made chemical as dangerous, it is all but inevitable that our worries about chemical pollution will be out of all proportion to the real risks involved. Confirmation bias is also at work. Once we have the feeling that chemical contamination is a serious threat, we will tend to latch onto information that confirms that hunch—while dismissing or ignoring anything that suggests otherwise. This is where the complexity of science comes into play. For controversial chemicals, relevant studies may number in the dozens or the hundreds or the thousands, and they will contradict each other. For anyone with a bias—whether a corporate spokesman, an environmentalist, or simply a layperson with a hunch—there will almost always be evidence to support that bias.
The first step in correcting our mistakes of intuition has to be a healthy respect for the scientific process. Scientists have their biases, too, but the whole point of science is that as evidence accumulates, scientists argue among themselves based on the whole body of evidence, not just bits and pieces. Eventually, the majority tentatively decides in one direction or the other. It’s not a perfect process, by any means; it’s frustratingly slow and it can make mistakes. But it’s vastly better than any other method humans have used to understand reality.
The next step in dealing with risk rationally is to accept that risk is inevitable. In Daniel Krewski’s surveys, he found that about half the Canadian public agreed that a risk-free world is possible. “A majority of the population expects the government or other regulatory agencies to protect them completely from all risk in their daily lives,” he says with more than a hint of amazement in his voice. “Many of us who work in risk management have been trying to get the message out that you cannot guarantee zero risk. It’s an impossible goal.” We often describe something as “unsafe” and we say we want it to be made “safe.” Most often, it’s fine to use that language as shorthand, but bear in mind that it’s not fully accurate. In the risk business, there are only degrees of safety. It is often possible to make something safer, but safe is usually out of the question.
We must also accept that regulating risk is a complicated business. It almost always involves trade-offs—swapping typhoid for carcinogenic traces in our drinking water, for example. And it requires careful consideration of risks and costs that may be less obvious than the things we worry about—like more expensive fruits and vegetables leading to an increase in cancer. It also requires evidence. We may not want to wait for conclusive scientific proof—as the precautionary principle suggests—but we must demand much more than speculation.
Rational risk regulation is a slow, careful, and thoughtful examination of the dangers and costs in particular cases. If banning certain synthetic pesticides can be shown to reduce a risk materially at no more cost than a modest proliferation of dandelions, say, it probably makes sense. If there are inexpensive techniques to reduce the amount of chlorine required to treat drinking water effectively, that may be a change that’s called for. Admittedly, this is not exciting stuff. There’s not a lot of passion and drama in it. And while there are always questions of justice and fairness involved—Who bears the risk? Who will shoulder the cost of reducing the risk?—there is not a lot of room for ideology and inflammatory rhetoric.
Unfortunately, there are lots of activists, politicians, and corporations who are not nearly as interested in pursuing rational risk regulation as they are in scaring people. After all, there are donations, votes, and sales to be had. Even more unfortunately, Gut will often side with the alarmists. That’s particularly true in the case of chemicals, thanks to a combination of Gut’s intuitive toxicology and the negative reputation chemicals have in the culture. Lois Swirsky Gold says, “It’s almost an immutable perception. I hear it from people all the time. ‘Yes, I understand that 50 percent of the natural chemicals tested are positive, half the chemicals that are in [the Carcinogenic Potency Project] database are positive, 70 percent of the chemicals that are naturally occurring in coffee are carcinogens in rodent tests. Yes, I understand all that but still I’m not going to eat that stuff if I don’t have to.’ ”
All this talk of tiny risks adds up to one big distraction, says Bruce Ames. “There are really important things to worry about, and it gets lost in the noise of this constant scare about unimportant things.” By most estimates, more than half of all cancers in the developed world could be prevented with nothing more than lifestyle changes ranging from exercise to weight control and, of course, not smoking. Whatever the precise risk of cancer posed by synthetic chemicals in the environment, it is a housefly next to that elephant.
But lifestyle is a much harder message to get across, says Swirsky Gold. “You tell people you need lifestyle change, you need to exercise more, you need to eat more fruits and vegetables and consume fewer calories, they just look at you and walk into McDonald’s.” The problem is that only part of the mind hears and understands the message about lifestyle and health. Head gets it. But Gut doesn’t understand statistics. Gut only knows that lying on the couch watching television is a lot more enjoyable than sweating on a treadmill, that the cigarettes you smoke make you feel good and have never done you any harm, and that the Golden Arches call up happy memories of childhood and that clown with the red hair. Nothing to worry about in any of that, Gut concludes. Relax and watch some more TV.
And so we do, until a story on the news reports that a carcinogenic chemical has been detected in the blood of ordinary people. We are contaminated. Now that’s frightening, Gut says. Sit up and pay close attention.