Science, in its pure form at least, is about asking questions, designing experiments, and scrutinizing the evidence to find an answer. This book has mostly focused on product defense scientists, whose full-time and lucrative employment involves manufacturing uncertainty. But only a small portion of the studies examining harm from exposure to products or pollutants are done by product defense scientists. Many are produced by academics with government, corporate, or foundation funding, or no outside funding at all. Many other studies are undertaken by scientists in the laboratories of government agencies or corporations.
Not surprisingly, no matter who performs the research, studies paid for by a private sponsor tend to deliver the results the sponsor wants. This was seen in the tobacco literature, when the tobacco industry was still holding firm to the idea that secondhand smoke did not increase lung cancer risk.1 Within the field it’s called the “funding effect,” or, maybe more cynically, the Golden Rule: those who have the gold make the rules. In the broadest terms, it’s the main subject of this book. There have been so many studies documenting the funding effect in evaluating the risks associated with tobacco, food products, chemicals, and pollutants that it is almost surprising when manufacturers of a product sponsor a study that does not find the results they desire.
We know the impact of the funding effect because, for many studies, the authors acknowledge who paid for their work. This is more common now that journals are insisting on it, but there are still, as discussed below, plenty of studies with incomplete or misleading disclosures.
Disclosure of conflicts of interest is important, but is it as important as the conflict itself? No way. The disclosure figures into the assessment of the scientific research as published, but it is the actual conflict that shapes the course of the research itself. It’s a huge difference, and one that’s easily forgotten.
I have been asked why we should care about financial conflicts of interest. Doesn’t the work stand or fall on its own, and shouldn’t it be evaluated on its own, without regard to who paid for it? Well, no. Some scientists will say pretty much whatever someone pays them to say. But the broader “conflict” issue is much more nuanced than that. Theoretically, a scientist conducting an experiment and following certain accepted methods will find the same results as anyone else who does the same experiment the same way. That’s the theory. In most laboratory experiments, however, and even more so in field studies involving humans, the investigator must make many decisions along the way that can shape the outcome. And we look at the world through our own prior beliefs (a perhaps kinder way of saying prejudices), theories, and experiences, and these can influence all of our decisions while conducting the research. A relatively new label for this dynamic is “motivated reasoning.” And, to get to the point of this book, the funding source for any research—who’s footing the bill—is a powerful motivator of anyone’s reasoning. Any of us would look at the same data differently than someone with a different set of financial relationships.
But why is it that scientists produce the results their sponsors want? In some cases, the studies are designed, essentially rigged, to find certain results.2 But in other cases, scientists look at the same data and see different things. The impact of motivated reasoning on the interpretation of data has been powerfully demonstrated in an experiment involving very respected scientists, some with financial conflicts and others without, who provided differing interpretations of a data set before the truth was known. At the center of this experiment was Vioxx (the brand name for rofecoxib), the non-steroidal anti-inflammatory drug (NSAID) that Merck & Co., Inc. introduced to the market in 1999. NSAIDs are painkillers, and Vioxx promised to be especially important for alleviating osteoarthritis pain, because many of those sufferers can’t take other NSAIDs (aspirin, for example), which cause gastrointestinal problems. Merck marketed Vioxx as an effective, safe alternative: a painkiller that doesn’t cause GI problems. That story follows, in extreme brief.
Most pharmaceutical companies don’t want to test a proposed new drug head-to-head against an existing one. The new drug, even if it’s effective in its own right, might still be found less effective than the old one. Not good, because it forecloses any claim that “our drug is better than theirs.” Trials against placebos are better for pharmaceutical companies, but in a market with already proven performers, including aspirin, the FDA requires a head-to-head test against at least one of the competitors. The choice of the opposition in such trials is complicated and, clearly, very important. In the case of Vioxx, one obvious factor was aspirin’s widely known cardiovascular benefits; Vioxx would be unlikely to beat it in that regard. Many other factors were also considered, and in the end, Merck chose naproxen (sold over the counter as Aleve) as the competition. The drug company set up a large randomized trial comprising 8,000 participants.
The early Vioxx results available in 2000 were not clear-cut; conflicting interpretations were possible. From one perspective, test subjects taking Vioxx had 2.4 times the risk of a cardiovascular event compared with those taking Aleve. This was the interpretation of three scientists not associated with Merck who published a review of the Vioxx trial in the August 2001 Journal of the American Medical Association.3 The stunning conclusion was not welcome in Merck’s headquarters and laboratories.
But wait a minute. Couldn’t that interpretation of the data be challenged? Merck could and did argue that the trial demonstrated not that Vioxx was bad for hearts relative to Aleve, but that Aleve was extraordinarily good for hearts—on par even with the famously heart-friendly aspirin. Merck-affiliated scientists wrote, “Differences observed between [Vioxx] and [Aleve] are likely the result of the antiplatelet effects of the latter agent.”4 In effect, Merck and its consulting scientists chose the interpretation that improbably credited the comparison drug with preventing disease over the interpretation that much more plausibly indicted its own drug for increasing the risk of cardiac problems. The Merck scientists (the lead author was actually an academic physician consulting for Merck) wrote, “We believe that the analysis of [the independent scientists] provides no substantive support for their conclusions.”5
The truth dramatically and tragically appeared not long afterward. There was and is reason to believe that Vioxx also prevents colon polyps, which are precursors to colon cancer. A new clinical trial was initiated, and since no agent is known to prevent these polyps, this trial used placebos as the comparator. The experimenters halted the trial halfway through, well before it was scheduled to end: participants who took Vioxx for more than eighteen months had suffered twice as many heart attacks and strokes as those who took a placebo—seven excess heart attacks per thousand users per year. The correct interpretation of the original study was now beyond question: Vioxx causes heart attacks. These results were front-page news around the world. The drug was removed from the market, but too late. FDA scientists estimated that Vioxx caused between 88,000 and 139,000 heart attacks, probably 30 to 40 percent of them fatal, in the four years the drug was on the market.6
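Those two figures hang together arithmetically. As a rough back-of-envelope check (my reconstruction for illustration, not the FDA's actual derivation), an excess rate of seven heart attacks per thousand users per year implies that the FDA's totals correspond to something on the order of 13 to 20 million user-years of Vioxx exposure:

```latex
% Back-of-envelope only; illustrative, not the FDA's published method.
\[
  \text{excess events} \;\approx\; \frac{7}{1000}\ \text{per user-year} \times \text{user-years}
\]
\[
  \Longrightarrow\quad
  \text{user-years} \;\approx\; \frac{88{,}000}{0.007} \approx 12.6\ \text{million}
  \quad\text{to}\quad
  \frac{139{,}000}{0.007} \approx 19.9\ \text{million}.
\]
```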
Indisputably, Merck did play fast and loose with the data to make its drug look far safer than it was. If its scientists had truly believed that Aleve reduced the risk of heart attack by 60 percent (there were numerous reasons to be suspicious of that finding, and it did turn out to be groundless), Merck should have lobbied the government to pour the drug directly into the water supply. Instead, the company launched a concerted attack on the independent scientists who first identified the problem with Vioxx, while manipulating the data to obscure the risks when it published flawed and incomplete results of the trials. It turned out that Merck ghostwrote papers published under the names of academic scientists, and several of the studies contained such serious mistakes that two respected journals were forced to issue corrections.7 The New England Journal of Medicine issued two “expressions of concern” in which the editor criticized Merck because it “did not accurately represent the safety data available to the authors when the article was being reviewed for publication.”8 Merck pled guilty to criminal charges over the marketing and sales of Vioxx and paid a $950 million fine. It also paid almost $5 billion to settle lawsuits brought by people who took the drug, or by their family members.
All that said, it is not unreasonable to give the benefit of the doubt to the humanity of the academic scientists who were consultants to Merck. I think, or would like to think, they would never have kept pushing a drug that they knew doubled the risk of heart attacks. They did not consciously lie (I hope). They did convince themselves the drug was safe. They looked at what is in retrospect powerfully clear data and simply didn’t see the obvious.
I don’t mind giving product defense scientists the same benefit of the doubt. They may not be intentionally misinterpreting the data or issuing misleading conclusions. And in some cases, maybe their interpretations will someday turn out to be accurate. But, as the philosopher Cyndi Lauper explained, “Money changes everything.” Given what we know about how conflict of interest shapes how one looks at data, and given who is paying their generally substantial fees, the conclusions of these scientists about the toxicity of a product produced by their client simply cannot and should not be treated as valid.
But sometimes, I believe, scientists do intentionally misinterpret data or issue misleading conclusions. It is not an exaggeration to say that in the product defense model, the investigator starts with an answer, then figures out the best way to support it. As often as not, the product defense investigator starts with someone else’s answer, then reviews the evidence or subjects an important study to a post hoc “re-analysis” that magically produces the sponsor’s preferred conclusions—that the risk is not that high, the harm not that bad, and/or the data fatally flawed. These are the studies that are flogged to regulatory agencies or in litigation.
What follows here is a modest overview of some of these firms’ specific methods (and the conventional identifiers of those methods) for repackaging science as something more malleable. While this is unquestionably some “inside baseball,” my hope here is to provide a field guide to the behaviors of mercenary scientists and product defense firms in the wild. Recognizing these behaviors can be tremendously valuable in trying to frame public discourse in today’s toxic political environment. It can also be kind of fun.
What follows is the product defense disinformation playbook.
One popular tactic—maybe the most popular—is some version of “reviewing the literature.” The basic idea is valid; we do need to consider the scientific studies to date to attempt to answer important questions. The questions that come up in regulation and litigation are complex; they go way beyond simply asking, “Does this chemical cause cancer or lower sperm count or cause developmental damage?” With public health issues, the important and tricky part is determining at what level an exposure can contribute to the undesired effect, and after how much time and exposure. Is there a safe level of exposure, below which a chemical cannot cause disease (or has not, in the case of litigation)? No single study answers such questions, so reviews are warranted. Sometimes these literature reviews are labeled “weight-of-the-evidence” analyses, with the authors deciding how much importance to give each study. But if your business model—your whole enterprise—is based on being paid by the manufacturers of the product in question for those reviews, your judgment is suspect by definition. More specifically, if a review was undertaken by conflicted scientists in business to provide conclusions needed by a commercial sponsor to delay regulation or defeat litigation, the findings are tainted and should be discarded. How can we know whether the weight they’ve assigned each study has been shaped, intentionally or unconsciously, by the fact that their sponsors want a certain result?
I have seen product defense firms use “weight of the evidence” analyses to create uncertainty about virtually every common toxic exposure, no matter how strong the evidence. Take, for example, ozone, the invisible atmospheric gas. Ozone can inflame and damage airways and aggravate existing lung diseases like asthma, emphysema, and chronic bronchitis. Countless studies have confirmed that emergency room visits and hospitalizations for asthma spike with increases in ozone level in the atmosphere. But the leaders of the Texas Commission on Environmental Quality (TCEQ), a government agency that acts as if it is a wholly owned subsidiary of the oil and gas industry (and perhaps, as a branch of the Texas state government, it is), have long held that the dangers from ozone, as well as other pollutants from fuel combustion, are greatly exaggerated. The commission’s top toxicologist, Michael Honeycutt, who was later appointed by President Trump to chair the EPA’s Science Advisory Board, is on record saying that lowering ozone levels will increase the risk of lung disease.9 This is exactly the opposite of what accepted wisdom, and the evidence at hand in Texas and elsewhere, has to say.
In need of scientific backup to give oil and gas companies relief, the TCEQ hired the product defense firm Gradient to question this otherwise widely accepted relationship between ozone concentrations and asthma severity. But even Gradient (which received more than $2.2 million from the TCEQ10) couldn’t make this evidence completely disappear. The best they could do, in light of overwhelming evidence, was to take the tobacco road and stress the uncertainty: the evidence was “not sufficient to conclude that a causal relationship exists. The substantial uncertainty in the body of evidence should be taken into consideration when this evidence is used for policymaking.”11
Gradient seems to have a particular expertise in weighing the evidence on air pollutants and concluding that the studies finding harm are weak and riddled with flaws and uncertainty. Working on behalf of the trade association representing electrical power companies (they burn fossil fuels and contribute to ozone exposure), Gradient scientists published a study titled “Critical Review of Long-Term Ozone Exposure and Asthma Development.” The conclusion: the studies are inconsistent and more research is needed to address “key uncertainties.”12
Of course, the true purpose of these literature reviews is to help stop government agencies from strengthening public health protections, so corporations that produce and burn fossil fuels will not be forced to change their economic model. Gradient scientists are confident that “available evidence does not indicate that proposed lower ozone standards would be more health protective than the current one[s]”13 and they weigh in with the EPA on behalf of the American Petroleum Institute and the TCEQ each time the EPA considers a strengthened ozone standard.14
Gradient has also long worked for corporations that pollute the environment with lead, an exposure that can impair children’s neurological development even at low levels. The Gradient clients with a financial interest in showing that lead exposure is not as dangerous as most research concludes include the Battery Council International (consisting of firms that manufacture, sell, or recycle lead batteries)15 and smelters that have been cited for environmental lead pollution.16 When the EPA was tightening its environmental lead standard during the Clinton administration, Gradient experts went to the White House on behalf of the Association of Battery Recyclers. Their presentation highlighted “unresolved potential math errors” and the uncertainties in the scientific literature used by the EPA to justify the stronger standard.17
Some studies have also suggested that lead exposure is associated with autism spectrum disorder (ASD);18 Gradient’s team has developed a different viewpoint. In a presentation entitled “A Weight-of-Evidence Evaluation of the Association Between Lead Exposure and Autism Spectrum Disorders,” delivered at a scientific meeting, the team concluded: “Lead exposure is not associated with the development or severity of ASD.” (The client for the study was not identified. Instead of that disclosure we have this: “The underlying work for this poster was supported by a private client, but the opinions presented here are solely those of the authors. Dr. [Barbara] Beck was named as an expert witness in a matter for which she relied, in part, on results from this analysis.”19) It could very well be true that lead plays no role in the development of autism but, given the provenance of Gradient’s work, and its history of minimizing the effects of this neurotoxic metal on behalf of the lead industry, how can anyone accept this literature evaluation as unbiased?
Weight-of-the-evidence reviews generally include both human and animal studies, and the attribution of weight to any given study is generally a subjective, qualitative decision. A more quantitative approach to reviewing the literature entails risk assessment, which in its earnest form attempts to provide estimates of the likelihood of effects at different exposure levels. Importantly, risk assessments attempt to estimate the levels below which exposure to a given substance will cause no harm. But as William Ruckelshaus, the first head of the EPA, famously said, “A risk assessment is like a captured spy: Torture it enough and it will tell you anything.”
This much is true: there is tremendous variation in the results of many risk assessments. There are also individual scientists and firms who can be counted on to produce risk assessments that, conveniently for their sponsors, find significant risk only at levels far above the levels where most exposures are occurring. And if these risk assessments are accepted by regulatory agencies or jurors, the sponsors will be required to spend far less money cleaning up their pollution or compensating victims.
One purveyor of mercenary literature reviews and risk assessments who merits discussion is Michael Dourson, a toxicologist and the founder and principal of the nonprofit Toxicology Excellence for Risk Assessment (TERA). Dourson has made a career of manufacturing doubt to defend toxic chemicals. Over and over again, chemical manufacturers have paid him and TERA to help make the case for weak, inadequately protective public health standards.
Dourson’s modus operandi is ingenious. With funding in hand from a firm or an industry producing a product under scrutiny, his TERA team reviews the studies and produces a risk assessment that almost without exception minimizes the risk, pinpointing a level for “safe” exposure that is many times higher than the level determined by academic or government scientists. Sometimes TERA’s risk assessment is provided by a panel of “independent” experts, many of whom are not actually independent. The work is presented as legitimate science and often published in one of the industry-controlled journals discussed in Chapter 2.
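Where does that latitude come from? A generic textbook sketch (not TERA's specific calculations) shows how much room for judgment a standard noncancer risk assessment contains. A reference dose is typically derived by dividing the highest tested dose showing no adverse effect by a stack of uncertainty factors, each of which is a matter of expert judgment:

```latex
% Generic reference-dose formula; the uncertainty factors (UFs) shown are
% conventional defaults, not values from any particular TERA assessment.
\[
  \mathrm{RfD} \;=\; \frac{\mathrm{NOAEL}}
    {\mathrm{UF}_{\text{animal-to-human}} \times \mathrm{UF}_{\text{human variability}}}
\]
```

With the conventional default of 10 for each factor, the “safe” level is the no-effect dose divided by 100; argue each factor down to 3, and the combined divisor drops to about 9, yielding a standard roughly ten times higher. A few defensible-sounding choices, compounded, can account for gaps between a sponsor's numbers and the agencies' that run into the hundreds.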
An example: Dourson and his colleagues published a paper about “managing risks of noncancer endpoints at hazardous waste sites,” focused on the widespread and very dangerous solvent trichloroethylene. In the paper, they proposed a range of safety standards up to 15 times weaker than EPA’s for the same exposure. Cleaning hazardous waste sites to meet these weaker standards would be advantageous to the members of the American Chemistry Council. How was the paper funded? By “a gift from the American Chemistry Council.”20
For numerous toxic chemicals, TERA has provided risk assessments that set a “safe” level less protective, often hundreds of times less protective, than the levels determined by public health agencies. The firm has provided this service for Dow AgroSciences (for the pesticide chlorpyrifos, which EPA staff had recommended banning until they were overruled by both of President Trump’s EPA administrators); Koch Industries (petroleum coke); ACC’s North American Flame Retardant Alliance (the flame retardant tetrabromobisphenol A); Cargill, Coca-Cola, ConAgra Foods, Frito-Lay North America, General Mills, J. M. Smucker Co., Land O’Lakes, Procter & Gamble, and Unilever (diacetyl, which causes bronchiolitis obliterans, or popcorn lung, in exposed workers); and, of course, DuPont (for the PFAS used in manufacturing Teflon) that I discuss in Chapter 3. The list goes on and on.21
Dourson became industry’s go-to hired gun for this kind of product defense “science,” always ready to minimize the risks of exposure to its toxic chemicals and to weaken standards that should be protecting the health and safety of Americans. Having provided so much help to the chemical industry, Dourson was a logical selection to be President Trump’s nominee for the EPA’s assistant administrator for chemical safety and pollution prevention. Activists around the country organized against his nomination, bringing to Washington people who had been hurt by the very chemicals whose risks he had minimized. This was too much for even some Republican lawmakers, and, as discussed in the following chapter, his nomination failed.
By its nature, epidemiology is a sitting duck for the product defense industry’s uncertainty campaigns. Epidemiologic studies are complicated and require complex statistical analyses. Judgment is called for all along the way, so good intentions are paramount. Both epidemiologic principles and ethics require that the methods of analysis be selected before the data are actually analyzed. One tactic used by some of the product defense firms is the re-analysis, in which the raw data from a completed study are analyzed again under different assumptions, often in the most mercenary of ways. The joke about “lies, damned lies, and statistics” pertains.
The battle for the integrity of science is rooted in these sorts of issues around methodology. If a scientist with a certain skill set knows the original outcome and how the data are distributed within the study, it’s easy enough to design an alternative analysis that will make positive results disappear. This is especially true with findings that link a toxic exposure to disease later on—which also happen to be among the most important results for public health agencies. In contrast, if there is no effect from exposure, post hoc analysis to turn a negative study positive is generally difficult and often not possible, since the effect of interest is equally distributed across all parts of the study population.
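A toy simulation illustrates the point (a minimal sketch with invented numbers, not any firm's actual method). Construct a cohort in which disease risk genuinely rises with exposure, then compare a focused contrast of the most- and least-exposed subjects against a post hoc analysis that lumps everyone into a single crude exposed/unexposed split. The real effect shrinks toward the null simply because the reference group is now contaminated with exposed subjects:

```python
# Toy demonstration of how a post hoc analytic choice can dilute a real
# dose-response effect. All numbers are invented for illustration only.
import random

random.seed(1)

def simulate_cohort(n=200_000):
    """Each subject gets an exposure level (0-10, arbitrary units) and a
    disease outcome whose probability rises with exposure by construction."""
    cohort = []
    for _ in range(n):
        exposure = random.uniform(0, 10)
        p_disease = 0.01 + 0.002 * exposure   # genuine dose-response
        cohort.append((exposure, random.random() < p_disease))
    return cohort

def risk(subjects):
    """Fraction of a group that developed disease."""
    return sum(diseased for _, diseased in subjects) / len(subjects)

cohort = simulate_cohort()

# Pre-specified analysis: contrast the top exposure decile with the bottom.
top    = [s for s in cohort if s[0] >= 9.0]
bottom = [s for s in cohort if s[0] <= 1.0]
print(f"RR, top vs bottom decile: {risk(top) / risk(bottom):.2f}")   # ~2.6

# Post hoc re-analysis: one crude cut at the median, so the "unexposed"
# reference group includes plenty of moderately exposed subjects.
exposed   = [s for s in cohort if s[0] >= 5.0]
unexposed = [s for s in cohort if s[0] < 5.0]
print(f"RR, crude median split:   {risk(exposed) / risk(unexposed):.2f}")  # ~1.7
```

With realistic sample sizes and the attendant confidence intervals, the diluted estimate becomes far easier to wave away as statistically uncertain, which is the whole point of the exercise.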
As with most things about product defense, the re-analysis strategy dates back to tobacco, whose strategists recognized that they needed a means to counter the early findings on lung cancer risk among nonsmoking spouses of smokers, in order to evade responsibility and regulation. From a public health perspective, the roughly 25 percent increase in cancer risk found in those studies is a big deal. To industry, making it disappear would be a huge deal, and that’s why they brought in the re-analysts. The tobacco strategists also realized that they couldn’t mount their own studies, which would take years and millions of dollars, so they figured they could get the raw data from the incriminating studies, change some of the basic assumptions, change the parameters, tinker with this and that, and make the results go away. Tobacco’s approach is now commonplace; “re-analysis” is its own cottage industry within product defense.
An early, non-tobacco example followed a 1987 study by NIOSH’s Robert Rinsky and colleagues, published in the New England Journal of Medicine. This report showed that OSHA’s workplace benzene-exposure standard, then 10 parts per million (ppm), was inadequate; it estimated that benzene exposure increased the risk of leukemia even at lifetime exposures below 1 ppm. OSHA then leveraged this study in setting a new standard of 1 ppm, which carried massive financial implications for the oil industry. Since then the petroleum producers have spent millions of dollars re-analyzing these results in order to convince regulators and courts that low levels of exposure to benzene simply aren’t so dangerous. The American Petroleum Institute (here they are again) or another branch of the oil industry has commissioned the same product defense firms that appear in virtually every chapter in this book, including Exponent, ChemRisk, Ramboll, and Gradient, to pull apart Rinsky’s study. They have produced at least nine papers in the scientific literature as part of this campaign to make Rinsky’s results go away.22 But they are all post hoc analyses, and none of them is very convincing. Since the scientific community (outside of the oil industry and its consultants) recognizes the causal link between low levels of benzene exposure and leukemia, there is little additional epidemiologic research under way in this area. Actual studies of benzene-exposed workers (rather than these mercenary re-analyses) published subsequently by researchers not paid by the oil industry continue to find benzene effects at very low levels of exposure.23 Based on these studies, the European Union has announced plans to cut the allowable maximum workplace exposure level by 95 percent, to 0.05 ppm.24
In many mercenary re-analyses of epidemiologic studies that find increased risk of disease associated with low levels of exposure to a given chemical, the product defense scientists decide that the actual exposures in the study were in fact far higher than those estimated by the scientists who did the original study. This is farcical, of course, but highly useful: a retrospective adjustment to the exposure level is guaranteed to change the results of the study, making the exposure look safer because now only those higher levels cause disease. And, of course, the conflicted scientists doing the reanalysis know very well this is exactly what will happen if they juice up the exposure estimates.
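Schematically (my notation, not any particular re-analysis): if the original data show disease only above some exposure level $x_0$, and the re-analysts assert that true exposures were $k$ times higher than originally estimated, every subject's exposure slides upward and the apparent no-effect level inflates by the same factor:

```latex
% Schematic only: inflating estimated exposures by k > 1 inflates the
% apparent "safe" threshold by the same factor.
\[
  x \;\mapsto\; kx \quad (k > 1)
  \qquad\Longrightarrow\qquad
  x_0^{\,\text{apparent}} \;=\; k\,x_0
\]
```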
But when there is no longer any debate about whether exposure to a substance causes disease at a given level, a firm whose product is under attack may want to show that exposures in the past were lower, not higher. These types of studies, usually conducted by attempting to recreate historical exposure levels in a laboratory, are generally done only for high-stakes court cases, since there is little if any scientific interest in revisiting settled science around old exposure levels. The basic model sometimes involves finding the original product—often one that is no longer manufactured or in use—and then simulating the exposure that a plaintiff in a court case would have experienced decades earlier. Pretty much the only reason these studies get published in a scientific journal is so the expert can testify that their study was peer-reviewed.
As a rule, litigation-generated studies like these receive little response in the scientific literature and have no significance in setting regulations. Critics of these efforts, like Brown University physician David Egilman, have shown how the studies can be manipulated to underestimate exposure and thus produce the outcomes desired by whoever is paying the bill. Egilman demonstrated this in a study of asbestos exposures associated with working with Bakelite, a synthetic plastic made by Union Carbide. The exposure was simulated by the product defense firms Exponent and ChemRisk.25 Egilman documented that the experimenters had access to, but ignored, actual asbestos measurements taken in the 1970s in order to claim that the exposure levels found in their simulation were low and therefore presumably safe. Union Carbide paid about a million dollars for this paper, but it was worth it. In fact, Dennis Paustenbach, a leading purveyor of these exposure recreations and a coauthor of the Bakelite paper, boasted that as long as a recreation indicated that historical exposures were minimal, “I’m not aware of a single case that has been lost in litigation in the U.S. when a high-quality simulation study was done.”26
Many papers produced by the product defense firms contain the disclosure that individual scientists may be testifying for the corporations being sued, but that the research itself was done independently of the corporations. This sleight of hand creates a fiction of independence, which in turn lends a veneer of objectivity. But the research was almost certainly paid for by the product defense firm out of the fees it was paid by the corporation. It’s a charade, but also standard practice.
An example is tucked within a letter written to a Ford Motor Company attorney by ChemRisk’s Paustenbach. As part of a request for more money for his firm, Paustenbach touts the value these sorts of studies deliver to clients. In his words: “Over the past 5 years, I have personally spent (in hard or soft dollars) a little more than $3M in profits (which would have been distributed to me or my staff) in asbestos-related research which has been enormously illuminating to the courts and juries. I did this because I believe that the courts deserve to have all the scientific information that can be brought to the table when reaching conclusions. In my view, these papers have changed the scientific playing field in the courtroom. You know better than anyone as you have seen the number of plaintiff verdicts decrease and the cost of settlement go down.” He then goes on to discuss a paper that had been used in 30 or so cases in the 90 days after it was published, and that cost ChemRisk $300,000 in effort.27
Similarly, manufacturer Georgia-Pacific (GP) launched a secretive $6 million program to produce papers that would increase its ability to win suits in connection with asbestos exposure from a joint compound it marketed in the 1960s and 1970s. Using Exponent, ENVIRON (now Ramboll), and other consultants, the manufacturer developed a comprehensive research agenda to conduct a suite of 13 studies, all aimed at showing that the risk from its product was low.28 ENVIRON was tasked with attempting to recreate the levels of exposure that workers might have had when they were working with the joint compound. David Bernstein, a toxicologist hired to study the effects on lab animals, was lead author on several papers published in the journal Inhalation Toxicology for which the disclosure statement read in its entirety: “This work was supported by a grant from Georgia-Pacific, LLC.” A coauthor on these papers was Stewart E. Holm, who headed this research initiative for GP but didn’t exactly make a scientific contribution: documents that later came out revealed that his work was “directed solely by GP’s in-house counsel.” Eventually, Holm sent a correction to the journal acknowledging that he worked for GP (although not saying he was part of the legal team), and the journal then published an apology for not including the relevant information.29
The truth about Holm’s role and the fact that all the studies were commissioned as part of a legal effort came out in a legal dispute in New York, where Georgia-Pacific tried to claim that many of the documents underlying the studies were subject to the attorney-work-product privilege. This is one of the older tricks in the playbook, again perfected by the tobacco industry: hiding suspect scientific work by claiming attorney-client privilege. The New York court did not see it that way:
GP should not be allowed to use its experts’ conclusions as a sword by seeding the scientific literature with GP-funded studies, while at the same time using the privilege as a shield by withholding the underlying raw data that might be prone to scrutiny by the opposing party and that may affect the veracity of its experts’ conclusions.30
Occasionally authors of industry studies leave off the conflict-of-interest disclosure entirely. One stunning episode involved a paper written by two well-known Italian epidemiologists who have been deeply involved in product defense, Carlo La Vecchia and Paolo Boffetta. Boffetta (now associated with Cardno ChemRisk) has published literature reviews paid for by firms or trade groups eager to dispute independent studies that found cancer risk from exposure to, among other toxic chemicals, beryllium,31 diesel exhaust,32 formaldehyde,33 styrene,34 and the nonstick PFAS chemicals used to make Teflon and Scotchgard.35
In one instance, the two epidemiologists analyzed the patterns of death in several populations of asbestos-exposed workers in Italy, concluding that “for workers exposed in the distant past, the risk of mesothelioma is not appreciably modified by subsequent exposures, and that stopping exposure does not materially modify the subsequent risk of mesothelioma.”36 In short: once you’re exposed to asbestos, might as well be exposed a bunch more later on; it’s all the same. Putting aside the public health implications of such an argument, this paper was conspicuously useful to corporations (and corporate executives) responsible for exposing workers to asbestos in the last few decades: the cancers among these workers could be attributed to earlier exposures, before the period covered by statutes of limitations.
In La Vecchia and Boffetta’s conflict-of-interest disclosure, they stated, “There are no conflicts of interest.” The authors did note that the “work was conducted with the contribution of the Italian Association for Cancer Research (AIRC), project No. 10068.” This sounds high-minded enough, but both authors had also been witnesses for the defense in a high-profile trial of an asbestos company executive in Italy, so their work defending an asbestos producer facing criminal charges was no secret within the scientific community. The study itself came under extensive criticism from Benedetto Terracini and other prominent Italian epidemiologists,37 and, in addition, the unacknowledged conflict of interest brought much censure down on the authors and the journal, which (two years later) published an extensive correction. The correction acknowledged the authors’ testimony for the criminal defense and withdrew the statement that AIRC had funded the study, since it had not.38
A different kind of conflict of interest, and a different kind of disclosure trickery, is the use of front groups by many industries to advance their interests while hiding their involvement. These fronts are generally incorporated as not-for-profits, with academic scientists in leadership and innocuous-sounding names, but they are bought and paid for by their various corporate sponsors; many of these fronts sponsor “research” to be used in regulatory proceedings or in court. In addition, there are the all-corporate-purpose think tanks devoted to “free enterprise” and “free markets” and “deregulation.” Dozens of them work on behalf of just about every significant industry in this country. Each year these entities collect millions of dollars from regulated companies to promote campaigns that weaken public health and environmental protections.
Always, the idea is to portray front groups as serious, independent purveyors of scientific research. And some do produce legitimate science, while at the same time producing deeply questionable science that their sponsoring organizations rely on to promote their unhealthy products. It’s a delicate balancing act. Perhaps the most successful of these mixed-purpose outfits is the International Life Sciences Institute, “a nonprofit, worldwide organization whose mission is to provide science that improves human health and well-being and safeguards the environment.” ILSI was founded in 1978 by Alex Malaspina, a senior vice president at Coca-Cola, and it has long had significant funding from the beverage giant.39 It is also notably open about its funding sources and sponsors, which include hundreds of food and pesticide producers, among them ConAgra, Kellogg, Kraft, McDonald’s, Nestlé, PepsiCo, and Unilever, along with the pesticide and seed companies Bayer (which purchased Monsanto in 2018), Syngenta, and Dow.40 CropLife International, the global trade association of pesticide manufacturers, is a major contributor.41 ILSI’s membership confirms that the organization has met Malaspina’s objective “to unite the food industry” by doing work that no single manufacturer, not even Coke, could have accomplished on its own.
With support from these and other corporations and associations, ILSI funds genuine academic research. It also convenes conferences and brings together experts to produce research and reports. There is no question that some of the studies have value. But under this guise of addressing important scientific issues, ILSI also promotes positions that greatly benefit its sponsors. True to its roots, ILSI has always been a faithful advocate for the sugar industry, but it doesn’t fund scientists to defend something outrageous, like the proposition that unlimited sugar consumption is safe. Instead, it comes at the issue indirectly, surreptitiously, by questioning the quality of the science underlying the dietary guidelines that recommend limiting sugar intake.42 That particular ILSI literature review was paid for by its technical committee on dietary carbohydrates. Sitting at that table are Coca-Cola, Dr. Pepper, PepsiCo, Archer Daniels Midland, Campbell Soup Company, General Mills, Hershey, Kellogg Company—in combination, a major source of carbohydrates in the diets of most Americans.43 This is one more example of the funding effect. The organization’s critics have succinctly captured what is actually occurring here: ILSI’s claim that there is inadequate evidence to justify placing limits on junk food is based on junk science.44