PREFACE
WHY DO THINGS THAT ARE UNLIKELY TO HARM US GET THE MOST ATTENTION?
The modern world, the advanced technological world in which we live, is a dangerous place. Or, at least, that is the message that, with metronomic regularity, seems to jump out at us at every turn. The news media bombard us with reports of the latest threat to our health lurking in our food, air, water, and the environment, and these messages are often reinforced by regulatory agencies, activist groups, and scientists themselves. In recent years we have been encouraged to worry about deadly toxins in baby bottles, food, and cosmetics; carcinogenic radiation from power lines and cell phones; and harm from vaccines and genetically modified foods, to name just a few of the more prominent scares.
When looked at even the least bit critically, many of the scares that get high-profile attention turn out to be based on weak or erroneous findings that were hardly ready for prime time. Consider two recent reports that came out a few days apart. One proclaimed that ingesting the chemical BPA in the minute quantities normally encountered in daily life may increase fat deposition in the body.1 The second suggested that babies born to mothers living in proximity to sites where hydraulic fracturing, or “fracking,” is being used to extract natural gas from rock formations may have reduced birth weight.2 Reports like these have a visceral impact. They inform us that a new and hitherto unsuspected threat has taken up residence in our immediate environment, in our body, or in the bodies of people like us. The impact is similar to coming home and sensing that there is a malevolent intruder in your home.
In the two instances cited above, a quick look at the original studies on which these news items were based would have revealed the crucial point: there are a large number of substantial leaps—over many intervening steps or linkages—between the putative cause and the putative effect. At each point in the logical chain of causation there is the opportunity for unwarranted assumptions, poor measurement, ignoring crucial factors, and other methodological problems to creep in. Any erroneous link would invalidate the overall connection that the article is positing and that the news reports trumpet. But, by a mysterious cognitive process, we tend to block out these considerations and accept the validity of what is a tenuous connection that would need extensive buttressing to be worthy of concern. The process of questioning how seriously such results should be taken is an effortful, rational process that cannot compete with the visceral impact of the alert telling us that we are under threat. Even those who are in a position to know better can be unsettled by reports like these.
Our response to such reports is often influenced by another cognitive process that we are usually unaware of. Independent of how solid the underlying science is, the new result may sound true to our ears because it appears to fit in with a broader theme or narrative, which is beyond dispute. Thus any report alleging effects of exposure to environmental pollution may gain plausibility from the incontestable fact that we humans are having a profound and unprecedented impact on the global environment. But, in spite of what seems true, the results of any study need to be evaluated critically, and in the light of other evidence, to see if they stand up. One cannot judge a scientific finding based on whether it conforms to our expectations.
The visceral impact of these scares helps explain how, in different instances, the scientific and regulatory communities, various activist groups, self-appointed health gurus, and the media could all get involved and make their contribution to giving these and similar questionable findings currency.
Although news reports of these threats always make reference to the latest scientific study or measurement, the scares that erupt into the public consciousness often have only a tenuous connection to hard scientific evidence or logic. Many people sense this intuitively, since a report pointing to a hazard is often followed closely by another finding no evidence of a hazard, or even finding a benefit from the supposed nemesis. Furthermore, they sense that people aren’t dropping like flies from the numerous dangers alleged to permeate modern life. Certainly the periodic reports raising the terrifying possibility that using a cell phone could cause brain cancer have done nothing to slow the unparalleled spread of this technology. And yet this omnipresent noise and the continual procession of new threats to our health take their toll and have real consequences, although these get little attention from those who so vigorously promote the existence of a hazard.
* * *
Information about what factors truly have an important impact on health is a vital commodity that has the potential to affect lives, but the succession of health scares creates a fog that confuses people about what they should pay attention to. People paralyzed, or merely distracted, by the latest imaginary threat may become desensitized to health messages and be less likely to pay attention to things that matter and that are actually within their control—like stopping smoking, controlling their weight, having their children vaccinated, and going for effective screening. Concerning the cell phone scare, in 2008 Otis Brawley, chief medical officer for the American Cancer Society, commented, “I am afraid that if we pull the fire alarm, scaring people unnecessarily, and actually diverting their attention from things that they should be doing, then when we do pull the fire alarm for a public health emergency, we won’t have the credibility for them to listen to us.”3
In addition, the exaggeration and distortion of health risks can lead to the formulation of well-intended but wrongheaded policies that can actually do harm. Perhaps the best example of this is the overzealous focus on the presumed benefits of a low-fat diet in the 1990s. Both the federal government and the public health community embraced this doctrine, and the food industry complied by reducing the fat content of a wide range of processed foods. However, something needed to be substituted for the missing fat, and sugar filled this role. This large-scale and dramatic change—sometimes referred to as the “SnackWell phenomenon”—has been credited with making a substantial contribution to increasing rates of obesity.4
There is also a cost in missed opportunities. We need to recalibrate our judgment about what constitutes a real problem: if resources are spent remediating a trivial or nonexistent hazard, fewer resources remain for more promising work that may turn out to have major benefits. This is especially critical because, as the outbreaks of SARS, avian flu, Ebola, and now Zika virus make clear, new and serious threats to public health will continue to arise.
Finally, the confusion caused by conflicting scientific findings, polarizing controversies, and wrongheaded policies erodes the public’s trust in science and in institutions mandated to promote research and apply its results to improving public health. In fact, in spite of the unprecedented progress in many fields of science over the past sixty years, the public’s trust in science has declined since the decades immediately following the Second World War.5
* * *
Although we are dependent on science and medicine as never before, there is widespread confusion among nonscientists about how to make sense of the flood of information that is being produced at an ever-increasing rate regarding factors that influence health. A recent survey by the American Institute for Cancer Research (AICR) found that “awareness of key cancer risk factors was alarmingly low, while more Americans than ever cling to unproven links.”6 The survey results showed that fewer than half of Americans know the real risks, whereas high percentages of respondents worry about risks for which there is little persuasive support. The latter include pesticide residues on produce, food additives, genetically modified foods, stress, and hormones in beef.
If the AICR report is correct, it is worth asking how such a situation arose in the first place and what factors perpetuate it. Scientists who are in a position to know, including epidemiologists who have devoted their careers to evaluating health risks, have expressed their frustration—at times verging on despair—at this state of affairs. And those who have given thought to the problem acknowledge that their work makes no small contribution to the confusion.7
More generally, it is widely recognized that there is a crisis in the field of biomedicine, characterized by a “culture of hyper-competitiveness.” In this environment, scientists may feel the need to overstate the importance of their work in order to attract attention and obtain funding. Other symptoms of this climate are a “lack of transparent reporting of results” and an increasing frequency of published results that cannot be replicated.8
So how is it possible for a nonscientist to distinguish between what deserves serious attention and what is questionable in the torrent of conflicting scientific findings and health recommendations? What is needed above all is to develop an understanding of what solid and important findings look like and how they are established, as well as a healthy skepticism toward results that may be tenuous but get amplified because they speak to our deepest fears.
Sorting out what is known on questions relating to health and interpreting the evidence critically is a challenging task, since different groups of scientists can interpret the same results differently and can emphasize different findings. When the evidence is weak or conflicting, as it often is, subjective judgment assumes a more important role, and scientists, being human, are not immune to their own biases.
When it comes to communicating research results to the public, there is an enormous gulf separating the scientific community from the general public. The scientific literature presupposes a familiarity with the subject matter, concepts, terminology, and methods, knowledge that is acquired only through a long apprenticeship. Even the most basic terms, such as risk, hazard, association, exposure, environment, and bias, mean one thing to the specialist and often have a very different meaning in general usage. The very way of thinking about a particular question can differ radically between the specialist and the public. In addition to the challenge of communicating inherently technical results, findings about factors that may affect our health have a strong emotional resonance that does not pertain to other scientific questions, such as the nature of “dark matter,” the origins of life, or the nature of consciousness.
If knowledge about what affects our health is an invaluable commodity, dispelling the mystery and confusion surrounding the science in this area could not be a more urgent task. A number of recent books have sought to explain the power of belief and the increasing prevalence of “denialism,” that is, the holding of beliefs that conflict with well-established science. From a variety of perspectives—journalistic, psychological, sociological, and political—their authors have attempted to shed light on the processes that shape and reinforce erroneous beliefs.9 Other books have done an excellent job of explaining how epidemiology and clinical medicine enable the discovery of new and important knowledge.10 However, little attention has been devoted to the challenges confronting research in the area of health risks and the ways in which biases and agendas endemic to scientific research, as well as tendencies operating in the wider society, can affect how findings are communicated to the public. Only by examining the interactions between scientists and the different groups and institutions that make use of research findings can we begin to make sense of the successes and failures of the science that addresses health risks.
In an earlier book I examined a number of alleged health hazards that received an enormous amount of attention and generated widespread anxiety.11 As an epidemiologist doing primary research on some of these questions, I could see that the public perception of these issues was badly skewed and distorted. When examined in a dispassionate way, these high-profile risks turned out to be much less important than was claimed. But the studies that got reported in the media and acted on by scientific and regulatory panels were “scientific” studies. So I wanted to explore how this could happen, and what factors contributed to the inflation of these health risks. Where did the process go wrong?
The short answer is that when scientific research focuses on a potential hazard that may affect the population at large, researchers themselves, regulatory agencies, advocacy groups, and journalists reporting on the story tend to emphasize what appear to be positive findings, even when the results are inconsistent, the risks may be small in magnitude and uncertain, and other, more important factors may be ignored.
In examining these inflated risks, I was struck by a paradox. In contrast to questions that provoke needless alarm but which can persist for a long time without any resolution or progress, we hear little about other stories that represent extraordinary triumphs of science at its best.
The present book asks the question, what does successful scientific research in the area of health and health risks look like, and how does it differ from the research that draws our attention to sensational but poorly supported or ambiguous findings that never seem to get confirmed but have great potential to inspire fear? By examining examples of these contrasting outcomes of scientific research, I hope to show how the scientific enterprise, at its best, can succeed in elucidating difficult questions, while other issues that attract a great deal of attention may yield little in the way of important new knowledge.
* * *
During work on this book, I have benefited from discussions with a number of colleagues and friends. Several colleagues answered my questions—often repeated waves of questions and follow-up questions—in interviews conducted in person or via e-mail. Some of these colleagues and friends read chapters of the manuscript and offered corrections, suggestions, and encouragement. I especially want to thank Robert Tarone, David Parmacek, Daniel Doerge, Anders Ahlbom, Robert Burk, Mark Schiffman, Richard Sharpe, Arthur Grollman, Robert Adair, Lawrence Silbart, Kamal Chaouachi, David Savitz, Gio Gori, Daniel Kabat, Steven Stellman, John Moulder, Allen Wilcox, and John Ioannidis. From the beginning, my editor at Columbia University Press, Patrick Fitzgerald, has been enthusiastic and excited about the project. Bridget Flannery-McCoy of the Press gave me valuable comments on an early draft, and Ryan Groendyk, Lisa Hamm, and Anita O’Brien did an expert job of shepherding the manuscript through the publication process. As always, my wife, Roberta Kabat, has been a consistent source of clear-eyed judgment, critical intelligence, and unflagging moral support.