8
The Best Science Money Can Buy
Science has a face, a house, and a price; it is important to ask who is doing science, in what institutional context, and at what cost. Understanding such things can give us insight into why scientific tools are sharp for certain types of problems and dull for others.
—Robert Proctor, Cancer Wars[1]
According to historian Stephen Mason, science has its historical roots in two primary sources: “Firstly, the technical tradition, in which practical experiences and skills were handed on and developed from one generation to another; and secondly, the spiritual tradition, in which human aspirations and ideas were passed on and augmented.” The technical tradition is the basis for the claim that science provides useful ways of manipulating the material world. The spiritual tradition is the basis for the claim that science can explain the world in “objective,” unbiased terms. Sometimes, however, these two traditions are at odds.
Modern science considers itself “scientific” because it adheres to a certain methodology. It uses quantitative methods and measurable phenomena; its data is empirically derived and verifiable by others through experiments that can be reproduced; and, finally, its practitioners are impartial. Whereas ideological thinkers promulgate dogmas and defend them in the face of evidence to the contrary, scientists work with “hypotheses” that they modify whenever the evidence dictates.
The standard description of the scientific method makes it sound like an almost machinelike process for sifting and separating truth from error. The method is typically described as involving the following steps:
1. Observe and describe some phenomenon.
2. Form a hypothesis to explain the phenomenon and its relationship to other known facts, usually through some kind of mathematical formula.
3. Use the hypothesis to make predictions.
4. Test those predictions by experiments or further observations to see if they are correct.
5. If not, reject or revise the hypothesis.
“Recognizing that personal and cultural beliefs influence both our perceptions and our interpretations of natural phenomena, we aim through the use of standard procedures and criteria to minimize those influences when developing a theory,” explains University of Rochester physics professor Frank Wolfs. “The scientific method attempts to minimize the influence of bias or prejudice in the experimenter when testing a hypothesis or a theory.” One way to minimize the influence of bias is to have several independent experimenters test the hypothesis. If it survives the hurdle of multiple experiments, it may rise to the level of an accepted theory, but the scientific method requires that the hypothesis be ruled out or modified if its predictions are incompatible with experimental tests. In science, Wolfs says, “experiment is supreme.”[2]
Experience shows, however, that this commonly accepted description of the scientific method is often a myth. Not only is it a myth, it is a fairly recent myth, first elaborated in the late 1800s by statistician Karl Pearson.[3] Copernicus did not use the scientific method described above, nor did Sir Isaac Newton or Charles Darwin. The French philosopher and mathematician René Descartes is often credited with ushering in the age of scientific inquiry with his “Discourse on the Method of Rightly Conducting the Reason and Seeking the Truth in the Sciences,” but the method of Descartes bears little relation to the steps described above. The molecular structure of benzene was first hypothesized not in a laboratory but in a dream. Many theories do not originate through some laborious process of formulating and modifying a hypothesis, but through sudden moments of inspiration. The actual thought processes of scientists are richer, more complex, and less machinelike in their inevitability than the standard model suggests. Science is a human endeavor, and real-world scientists approach their work with a combination of imagination, creativity, speculation, prior knowledge, library research, perseverance, and, in some cases, blind luck—the same combination of intellectual resources, in short, that scientists and nonscientists alike use in trying to solve problems.
The myth of a universal scientific method glosses over many far-from-pristine realities about the way scientists work in the real world. There is no mention, for example, of the time that a modern researcher spends writing grant proposals; coddling department heads, corporate donors, and government bureaucrats; or engaging in any of the other activities that are necessary to obtain research funding. Although the scientific method acknowledges the possibility of bias on the part of an individual scientist, it does not provide a way of countering the effects of systemwide bias. “In a field where there is active experimentation and open communication among members of the scientific community, the biases of individuals or groups may cancel out, because experimental tests are repeated by different scientists who may have different biases,” Wolfs states. But what if different scientists share a common bias? Rather than canceling it out, they may actually reinforce it.
The standard description of the scientific method also tends to idealize the degree to which scientists are even capable of accurately observing and measuring the phenomena they study. “Anyone who has done much research knows only too well that he never seems to be able himself to reproduce the beautiful curves and straight lines that appear in published texts and papers,” admits British biologist Gordon D. Hunter. “In fact, scientists who would be most insulted if I accused them of cheating usually select their best results only, not the typical ones, for publication; and some slightly less rigorous in their approach will find reasons for rejecting an inconvenient result. I well remember when my colleague David Vaird and I were working with a famous Nobel Prize winner (Sir Hans Krebs himself) on bovine ketosis. The results from four cows were perfect, but the fifth wretched cow behaved quite differently. Sir Hans shocked David by stating that there were clearly additional factors of which we were ignorant affecting the fifth cow, and it should be removed from the analysis. . . . Such subterfuges rarely do much harm, but it is an easy step to rejecting whole experiments or parts of experiments by convincing oneself that there were reasons that we can identify or guess at for it giving ‘the wrong result.’ ”
4
The idea that all scientific experiments are replicated to keep the process honest is also something of a myth. In reality, the number of findings from one scientist that get checked by others is quite small. Most scientists are too busy, research funds are too limited, and the pressure to produce new work is too great for this type of review to occur very often. What occurs instead is a system of “peer review,” in which panels of experts are convened to pass judgment on the work of other researchers. Peer review is used mainly in two situations: during the grant approval process to decide which research should get funding, and after the research has been completed to determine whether the results should be accepted for publication in a scientific journal.
Like the myth of the scientific method, peer review is also a fairly new phenomenon. It began as an occasional, ad hoc practice during the middle of the nineteenth century but did not really become established until World War I, when the federal government began supporting scientists through the National Research Council. As government support for science increased, it became necessary to develop a formal system for deciding which projects should receive funding.
In some ways, the system of peer review functions like the antithesis of the scientific method described above. Whereas the scientific method assumes that “experiment is supreme” and purports to eliminate bias, peer review deliberately imposes the bias of peer reviewers on the scientific process, both before and after experiments are conducted. This does not necessarily mean that peer review is a bad thing. In some ways, it is a necessary response to the empiricist limitations of the scientific method as it is commonly defined. However, peer review can also institutionalize conflicts of interest and a certain amount of dogmatism. In 1994, the General Accounting Office of the U.S. Congress studied the use of peer review in government scientific grants and found that reviewers often know applicants and tend to give preferential treatment to the ones they know.[5]
Women and minorities have charged that the system constitutes an “old boys’ network” in science. The system also stacks the deck in favor of older, established scientists and against younger, more independent researchers. The process itself creates multiple opportunities for conflict of interest. Peer reviewers are often anonymous, which means that they do not have to face the researchers whose work they judge. Moreover, the realities of science in today’s specialized world mean that peer reviewers are often either colleagues or competitors of the scientist whose work they review. In fact, observes science historian Horace Freeland Judson, “the persons most qualified to judge the worth of a scientist’s grant proposal or the merit of a submitted research paper are precisely those who are the scientist’s closest competitors.”[6]
“The problem with peer review is that we have good evidence on its deficiencies and poor evidence on its benefits,” the British Medical Journal observed in 1997. “We know that it is expensive, slow, prone to bias, open to abuse, possibly anti-innovatory, and unable to detect fraud. We also know that the published papers that emerge from the process are often grossly deficient.”[7]
In theory, the process of peer review offers protection against scientific errors and bias. In reality, it has proven incapable of filtering out the influence of government and corporate funders, whose biases often affect research outcomes.
Publication Bias
If you want to know just how craven some scientists can be, the archives of the tobacco industry offer a treasure trove of examples. Thanks to whistle-blowers and lawsuits, millions of pages of once-secret industry documents have become public and are freely available over the Internet. In 1998, for example, documents came to light regarding an industry-sponsored campaign in the early 1990s to plant sympathetic letters and articles in influential medical journals. Tobacco companies had secretly paid 13 scientists a total of $156,000 simply to write a few letters to these journals. One biostatistician, Nathan Mantel of American University in Washington, received $10,000 for writing a single, eight-paragraph letter that was published in the Journal of the American Medical Association. Cancer researcher Gio Batta Gori received $20,137 for writing four letters and an opinion piece to the Lancet, the Journal of the National Cancer Institute, and the Wall Street Journal—nice work if you can get it, especially since the scientists didn’t even have to write the letters themselves. Two tobacco-industry law firms were available to do the actual drafting and editing. All the scientists really had to do was sign their names at the bottom. “It’s a systematic effort to pollute the scientific literature. It’s not a legitimate scientific debate,” observed Dr. Stanton Glantz, a professor of medicine at the University of California-San Francisco and longtime tobacco industry critic. “Basically, the drill is that they hired people to write these letters, then they cited the letters as if they were independent, disinterested scientists writing.”[8]
In some cases, scientists were paid to write not just letters but entire scientific articles. In at least one case, the going rate for this service was $25,000, which was paid to one scientist for writing an article for the publication Risk Analysis. The same fee went to former EPA official John Todhunter and tobacco consultant W. Gary Flamm for an article titled “EPA Process, Risk Assessment-Risk Management Issues,” which they published in the Journal of Regulatory Toxicology and Pharmacology, where Flamm served as a member of the journal’s editorial board. Todhunter and Flamm never disclosed that their article had been commissioned by the tobacco industry, and journal editor C. Jelleff Carr says he “never asked that question, ‘Were you paid to write that?’ I think it would be almost improper for me to do it.”[9]
The tobacco industry is hardly alone in attempting to influence the scientific publishing process. A similar example of industry influence came to light in 1999 regarding the diet-drug combo fen-phen (a combination of fenfluramine, dexfenfluramine, and phentermine), developed by Wyeth-Ayerst Laboratories. Wyeth-Ayerst had commissioned ghostwriters to write ten articles promoting fen-phen as a treatment for obesity. Two of the ten articles were actually published in peer-reviewed medical journals before studies linked fen-phen to heart valve damage and an often-fatal lung disease, forcing the company to pull the drugs from the market in September 1997. In lawsuits filed by injured fen-phen users, internal company documents were subpoenaed showing that Wyeth-Ayerst had also edited the draft articles to play down and occasionally delete descriptions of side effects associated with the drugs. The final articles were published under the names of prominent researchers, one of whom claimed later that he had no idea that Wyeth had commissioned the article on which his name appeared. “It’s really deceptive,” said Dr. Albert J. Stunkard of the University of Pennsylvania, whose article was published in the American Journal of Medicine in February 1996. “It sort of makes you uneasy.”[10]
How did Stunkard’s name end up on an article without his knowing who sponsored it? The process involved an intermediary hired by Wyeth-Ayerst called Excerpta Medica, Inc., which received $20,000 for each article. Excerpta’s ghostwriters produced first-draft versions of the articles and then lined up well-known university researchers like Stunkard and paid them honoraria of $1,000 to $1,500 to edit the drafts and lend their names to the final work. Stunkard says Excerpta did not tell him that the honorarium originally came from Wyeth. One of the name-brand researchers even sent a letter back praising Excerpta’s ghostwriting skills. “Let me congratulate you and your writer on an excellent and thorough review of the literature, clearly written,” wrote Dr. Richard L. Atkinson, professor of medicine and nutritional science at the University of Wisconsin Medical School. “Perhaps I can get you to write all my papers for me! My only general comment is that this piece may make dexfenfluramine sound better than it really is.”[11]
“The whole process strikes me as egregious,” said Jerome P. Kassirer, then-editor of the New England Journal of Medicine—“the fact that Wyeth commissioned someone to write pieces that are favorable to them, the fact that they paid people to put their names on these things, the fact that people were willing to put their names on it, the fact that the journals published them without asking questions.” Yet it would be a mistake to imagine that these failures of the scientific publishing system reflect greed or laziness on the part of the individuals involved. Naïveté might be a better word to describe the mind-set of the researchers who participate in this sort of arrangement. In any case, the Wyeth-Ayerst practice is not an isolated incident. “This is a common practice in the industry. It’s not particular to us,” said Wyeth spokesman Doug Petkus.
Medical editor Jenny Speicher agrees that the Wyeth-Ayerst case is not an aberration. “I used to work at Medical Tribune, a news publication for physicians,” she said. “We had all these pharmaceutical and PR companies calling, asking what are the writing guidelines for articles, because they wanted to have their flack doctors write articles, or assign a freelance writer to write under a doctor’s name. I’ve even been offered these writing jobs myself. We always told them that all of our articles had to have comments from independent researchers, so of course they weren’t interested. But they kept on trying.”
“Pharmaceutical companies hire PR firms to promote drugs,” agrees science writer Norman Bauman. “Those promotions include hiring freelance writers to write articles for peer-reviewed journals, under the byline of doctors whom they also hire. This has been discussed extensively in the medical journals and also in the Wall Street Journal, and I personally know people who write these journal articles. The pay is OK—about $3,000 for a six- to ten-page journal article.”
Even the New England Journal of Medicine—often described as the world’s most prestigious medical journal—has been involved in controversies regarding hidden economic interests that shape its content and conclusions. In 1986, for example, NEJM published one study and rejected another that reached opposite conclusions about the antibiotic amoxicillin, even though both studies were based on the same data. Scientists involved with the first, favorable study had received $1.6 million in grants from the drug manufacturer, while the author of the critical study had refused corporate funding. NEJM proclaimed the pro-amoxicillin study the “authorized” version, and the author of the critical study underwent years of discipline and demotions from the academic bureaucracy at his university, which also took the side of the industry-funded scientist. Five years later, the dissenting scientist’s critical study finally found publication in the Journal of the American Medical Association, and other large-scale testing of children showed that those who took amoxicillin actually experienced lower recovery rates than children who took no medicine at all.[12] In 1989, NEJM came under fire again when it published an article downplaying the dangers of exposure to asbestos while failing to disclose that the author had ties to the asbestos industry.[13] In 1996, a similar controversy emerged when the journal ran an editorial touting the benefits of diet drugs, again failing to note that the editorial’s authors were paid consultants for companies that sell the drugs.[14]
In November 1997, questions of conflict of interest arose again when the NEJM published a scathing review of Sandra Steingraber’s book Living Downstream: An Ecologist Looks at Cancer and the Environment. Authored by Jerry H. Berke, the review described Steingraber as “obsessed . . . with environmental pollution as the cause of cancer” and accused her of “oversights and simplifications . . . biased work . . . notoriously poor scholarship. . . . The focus on environmental pollution and agricultural chemicals to explain human cancer has simply not been fruitful nor given rise to useful preventive strategies. . . . Living Downstream frightens, at times misinforms, and then scorns genuine efforts at cancer prevention through lifestyle change. The objective of Living Downstream appears ultimately to be controversy.”[15]
Berke was identified alongside the review as “Jerry H. Berke, MD, MPH.” The NEJM failed to disclose, however, that Berke was director of toxicology for W. R. Grace, one of the world’s largest chemical manufacturers and a notorious polluter. A leading manufacturer of asbestos-containing building products, W. R. Grace has been a defendant in several thousand asbestos-related cancer lawsuits and has paid millions of dollars in related court judgments. It is probably best known as the company that polluted the drinking water of the town of Woburn, Massachusetts, and later paid an $8 million out-of-court settlement to the families of seven Woburn children and one adult who contracted leukemia after drinking contaminated water. During the Woburn investigation, Grace was caught in two felony lies to the U.S. Environmental Protection Agency.
When questioned about its failure to identify Berke’s affiliation, the New England Journal of Medicine offered contradictory and implausible explanations. First it attributed the omission to an “administrative oversight” and claimed that it didn’t know about Berke’s affiliation with W. R. Grace. Later, a journal representative admitted that they did know but said they thought Grace was a “hospital or research institute.” If so, this ignorance would itself be remarkable, since the NEJM is located in Boston, and Grace had been the subject of more than a hundred news stories in the Boston Globe between 1994 and 1997. Moreover, NEJM editor Marcia Angell lives in Cambridge, Massachusetts, the world headquarters of W. R. Grace. Her home is only eight miles away from Woburn, whose leukemia lawsuit is also the central subject of A Civil Action, Jonathan Harr’s best-selling book that was made into a movie starring John Travolta. During the months immediately preceding the publication of Berke’s review, in fact, the film crew for A Civil Action was working in the Boston area and was itself the subject of numerous prominent news stories.[16]
In response to criticism of these lapses, NEJM editor Jerome P. Kassirer insisted that his journal’s conflict-of-interest policy was “the tightest in the business.”[17] The sad fact is that this boast is probably correct. In 1996, Sheldon Krimsky of Tufts University did a study of journal disclosures that dug into the industry connections of the authors of 789 scientific papers published by 1,105 researchers in 14 leading life science and biomedical journals. In 34 percent of the papers, at least one of the chief authors had an identifiable financial interest connected to the research, and Krimsky observed that the estimate of 34 percent was probably lower than the true level of financial conflict of interest, since he was unable to check whether the researchers owned stock or had received consulting fees from the companies involved in commercial applications of their research. None of these financial interests were disclosed in the journals, where readers could have seen them.[18] In 1999, a larger study by Krimsky examined 62,000 articles published in 210 different scientific journals and found that only one-half of one percent of the articles included information about the authors’ research-related financial ties. Although all of the journals had a formal requirement for disclosure of conflicts of interest, 142 of the journals had not published a single disclosure during 1997, the year under study.[19]
Corporate-sponsored scientific symposiums provide another means for manipulating the content of medical journals. In 1992, the New England Journal of Medicine itself published a survey of 625 such symposiums which found that 42 percent of them were sponsored by a single pharmaceutical company. There was a correlation, moreover, between single-company sponsorship and practices that commercialize or corrupt the scientific review process, including symposiums with misleading titles designed to promote a specific brand-name product. “Industry-sponsored symposia are promotional in nature and . . . journals often abandon the peer-review process when they publish symposiums,” the survey concluded.[20] Drummond Rennie, a deputy editor of the Journal of the American Medical Association, describes how the process works in plainer language:
I’m the advertising guy for the drug. I tell a journal I will give them $100,000 to have a special issue on that drug. Plus I’ll give the journal so much per reprint, and I’ll order a lot of reprints. I’ll select the editor and all the authors. I phone everyone who has written good things about that drug. I say, “I’ll fly you and your wife first class to New Orleans for a symposium. I’ll put your paper in the special issue of the journal, and you’ll have an extra publication for your c.v.” Then I’ll put a reprint of that symposium on some doctor’s desk and say, “Look at this marvelous drug.”[21]
Does Money Matter?
As these examples illustrate, many of the factors that bias scientific results are considerably more subtle than outright bribery or fraud. “There is distortion that causes publication bias in little ways, and scientists just don’t understand that they have been influenced,” Rennie says. “There’s influence everywhere, on people who would steadfastly deny it.”[22] Scientists can be naive about politics and other external factors shaping their work and become indignant at the suggestion that their results are shaped by their funding. But science does not occur in a vacuum. In studying animal populations, biologists use the term “selection pressure” to describe the influence that environmental conditions exert upon the survival of certain genetic traits over others. Within the population of scientists, a similar type of selection pressure occurs as industry and government support, combined with the vicissitudes of political fashion, determine which careers flourish and which languish. As David Ozonoff of the Boston University School of Medicine has observed, “One can think of an idea almost as one thinks of a living organism. It has to be continually nourished with the resources that permit it to grow and reproduce. In a hostile environment that denies it the material necessities, scientific ideas tend to languish and die.”[23]
Like other human institutions, the scientific enterprise has seen both advances and reversals and is exquisitely sensitive to the larger social environment in which it exists. Germany, for example, was a world leader in science in the nineteenth and early twentieth centuries but went into scientific decline with the rise of fascism. Under the Nazis, scientists were seen as too “cosmopolitan,” and the idea of a culturally rooted “German science” transformed applied scientists into “folk practitioners,” elevated astrology at the expense of astronomy, and impoverished the country’s previously renowned institutions for the study of theoretical physics. Something similar happened in Soviet Russia when previously accepted theories in astronomy, chemistry, medicine, psychology, and anthropology were criticized on the grounds that they conflicted with the principles of Marxist materialism. The most notorious example in the Soviet case was the rise of Lysenkoism, which rejected the theories of Mendelian genetics with catastrophic results for Russian agriculture. In the United States, political and social movements have also given rise to a number of dubious scientific trends, including the “creation science” of Christian fundamentalists as well as such movements as parapsychology and Scientology.
The most dramatic trend influencing the direction of science during the past century, however, has been its increasing dependence on funding from government and industry. Unlike the “gentleman scientists” of the nineteenth century who enjoyed financial independence that allowed them to explore their personal scientific interests with considerable freedom, today’s mainstream scientists are engaged in expensive research that requires the support of wealthy funders. A number of factors have contributed to this reality, from the rise of big government to the militarization of scientific research to the emergence of transnational corporations as important patrons of research.
The Second World War marked a watershed in the development of these trends, with the demands of wartime production, military intelligence, and political mobilization serving as precursors to the “military-industrial complex” that emerged during the Cold War in the 1950s. World War II also inaugurated the era of what has become known as “big science.” Previously, scientists for the most part had been people who worked alone or with a handful of assistants, pursuing the inquiries that fit their interests and curiosity. It was a less rigorous approach to science than we expect today, but it also allowed more creativity and independence. Physicist Percy Bridgman, whose major work was done before the advent of “big science,” recalled that in those days he “felt free to pursue other lines of interest, whether experiment, or theory, or fundamental criticism. . . . Another great advantage of working on a small scale is that one gives no hostage to one’s own past. If I wake up in the morning with a new idea, the utilization of which involves scrapping elaborate preparations already made, I am free to scrap what I have done and start off on the new and better line. This would not be possible without crippling loss of morale if one were working on a large scale with a complex organization.” When World War II made large-scale, applied research a priority, Bridgman said, “the older men, who had previously worked on their own problems in their own laboratories, put up with this as a patriotic necessity, to be tolerated only while they must, and to be escaped from as soon as decent. But the younger men . . . had never experienced independent work and did not know what it was like.”[24]
The Manhattan Project took “big science” to unprecedented new levels. In the process it also radically transformed the assumptions and social practices of science itself, as military considerations forced scientists to work under conditions of strict censorship. “The Manhattan Project was secret,” observe Stephen Hilgartner, Richard Bell, and Rory O’Conner in Nukespeak, their study of atomic-age thinking and rhetoric. “Its cities were built in secret, its research was done in secret, its scientists traveled under assumed names, its funds were concealed from Congress, and its existence was systematically kept out of the media. . . . Compartmentalization, or the restriction of knowledge about various aspects of the project to the ‘compartments’ in which the knowledge was being developed, was central to this strategy. . . . Press censorship complemented compartmentalization.”[25] President Truman described the development of the atom bomb as “the greatest achievement of organized science in history.” It was also the greatest regimentation of science in history, and spawned the need for further regimentation and more secrecy.
Prior to the development of the atomic bomb, the scientific community believed with few exceptions that its work was beneficial to humanity. “Earlier uses of science for the development of new and deadlier weapons had, upon occasion, brought forth critical comments by individual scientists; here and there, uncommonly reflective scientists had raised some doubts about the generalized philosophy of progress shared by most of the scientific community, but it was only in the aftermath of Hiroshima that large numbers of scientists were moved to reflect in sustained ways on the moral issues raised by their own activities,” notes historian Lewis Coser.[26]
Even before the bombing of Japan, a group of atomic scientists had tried unsuccessfully to persuade the U.S. government not to use the bomb. In its aftermath, they began to publish the Bulletin of the Atomic Scientists, which campaigned for civilian control of atomic energy. Some of its members called for scientists to abstain from military work altogether. In the 1950s, however, the Red Scare and McCarthyism were brought to bear against scientists who raised these sorts of questions. “Furthermore, as more and more scientific research began to be sponsored by the government, many scientists considered it ‘dangerous’ to take stands on public issues,” Coser notes. By 1961, some 80 percent of all U.S. funds for research and development were being provided directly or indirectly by the military or by two U.S. agencies with strong military connections, the Atomic Energy Commission and the National Aeronautics and Space Administration.[27]
The terrifying potential of the new weaponry became a pretext for permanently institutionalizing the policy of secrecy and “need-to-know” classification of scientific information that had begun with the Manhattan Project. In 1947, the Atomic Energy Commission expanded its policy of secrecy beyond matters of direct military significance by imposing secrecy in regard to public relations or “embarrassment” issues as well as issues of legal liability. When a deputy medical director at the Manhattan Project tried to declassify reports describing World War II experiments that involved injecting plutonium into human beings, AEC officials turned down the request, noting that “the coldly scientific manner in which the results are tabulated and discussed would have a very poor effect on the public.”[28]
Alvin Weinberg, director of the Oak Ridge National Laboratory from 1955 to 1973, bluntly laid out the assumptions of atomic-age science. In order to avert catastrophe, he argued, society needed “a military priesthood which guards against inadvertent use of nuclear weapons, which maintains what a priori seems to be a precarious balance between readiness to go to war and vigilance against human errors that would precipitate war.”[29] He did not mean the word “priesthood” lightly or loosely. “No government has lasted continuously for 1,000 years: only the Catholic Church has survived more or less continuously for 2,000 years or so,” he said. “Our commitment to nuclear energy is assumed to last in perpetuity—can we think of a national entity that possesses the resiliency to remain alive for even a single half-life of plutonium-239? A permanent cadre of experts that will retain its continuity over immensely long times hardly seems feasible if the cadre is a national body. . . . The Catholic Church is the best example of what I have in mind: a central authority that proclaims and to a degree enforces doctrine, maintains its own long-term social stability, and has connections to every country’s own Catholic Church.”[30]
The idea of a “central authority” that “proclaims and enforces doctrine” runs contrary, of course, to the spirit of intellectual freedom and scientific inquiry that led Galileo to defy the Catholic Church in his defense of Copernican astronomy. Weinberg’s comments show how much the practice and philosophy of science had changed under the pressures of government bureaucracy and military secrecy. Instead of a process for asking questions, it had become a dogma, a set of answers imposed by what was becoming a de facto state religion.
Nuts About Nukes
Just as Edward Bernays had used the theories of Sigmund Freud to develop a theory of public relations based on the belief that the public was irrational and pliable, the Atomic Energy Commission also turned to mental health experts in an effort to consign the public to the psychiatric couch. In 1948, AEC commissioner Sumner T. Pike appealed to the American Psychiatric Association to “cool off anyone who seems hysterical about atomic energy.”[31] In 1957, the World Health Organization convened a Study Group on Mental Health Aspects of the Peaceful Uses of Atomic Energy, in the hope that “the behavioural sciences can make a valuable and concrete contribution to the adaptation of mankind to the advent of atomic power” by using expert knowledge of “personality dynamics” to build “positive morale.”[32] The study group, composed of psychiatrists, professors, and representatives of the AEC and the European nuclear industry, began from the premise that the public’s “irrational fears, irrational hopes, or irrational tendencies” were an “abnormal emotional response to atomic energy” which was “quite unjustified. . . . Even if all the objective evidence were interpreted in the most pessimistic way possible, the weight of evidence would not justify anxiety in the present, and only vaguely and remotely in the future. Yet anxiety exists and persists to a quite extraordinary degree. This can only be accounted for by looking into the psychological nature of man himself.”[33]
What was it about our human nature that made us so irrational about nuclear power? The study group concluded that its very power made adults “regress to more infantile forms of behavior,” so that they acted like “the very young child first experiencing the world.” The split atom, they said, somehow evoked primal fears related to such “everyday childhood situations . . . as feeding and excretion.” Thus, “of all the fears rising from radiation, whether it be from atomic bomb fall-out or from nuclear plant mishap, it is the danger to food which is generally the most disquieting.” The same principle also applied to nuclear waste: “As with feeding, so with excretion. Public concern with atomic waste disposal is quite out of proportion to its importance, from which there must be a strong inference that some of the fear of ‘fall-out’ derives from a symbolic association between atomic waste and body waste.”[34]
“This explanation is the most ludicrous kind of dime-store Freudianism; it trivializes people’s concern about fallout and nuclear war,” observe Hilgartner et al. “But the study group was deadly serious about the richness of insight which this crude, narrow-minded analysis provided.”[35] Indeed, after an accidental radiation release at the Windscale nuclear reactor in England, the government was forced to confiscate and dump milk contaminated with radioiodine. A psychiatrist on the study group explained the negative newspaper headlines that accompanied the dumping by commenting, “Obviously all the editors were breast fed.” It was, to him, a perfect example of “regression.”[36]
These analyses share a retreat from the empiricist notion that experts should begin first with evidence and reason from it to their conclusions. For the experts in charge of nuclear planning, the political goals came first, the evidence second. Anyone who thought otherwise could simply be diagnosed as neurotic.
From Military Secrets to Trade Secrets
“The expansion of university research in the 1950s was largely the result of support from the military,” wrote Dorothy Nelkin in her 1984 book Science as Intellectual Property. “In the context of the times, most university scientists supported collaboration with military objectives, a collaboration they deemed crucial to the development of the nation’s scientific abilities. However, even during this period, university-military relations were a source of nagging concern. Doubts turned to disenchantment during the Vietnam War.”[37]
At the Massachusetts Institute of Technology, Professor George Rathjens observed that “a very large fraction” of the school’s students were destined to find careers dependent on the military:
They don’t know, when they enter as freshmen, what they will be doing when they graduate, of course. But, on a probabilistic basis, it is reasonable for them to assume they are very likely to be working in defense programs. And they surely can’t foresee how that work will affect mankind’s welfare, when they can’t possibly predict whether they will be working on a particular kind of bomber, against whom it might be used, or whether in an unjust or a just war. But, they have to make decisions about whether or not they want to get into a particular profession when they are about 18 years old, and for many, those decisions will be virtually irrevocable. It is hard to get out. I have talked to many people, scientists and engineers, who were working around Route 128, the high technology community around Boston, who were desperate during the Vietnamese War to get out of the defense business. They had no options. They really had nowhere else to go. They could go out and sell vacuum cleaners, perhaps, but if they wanted to use the skills that they spent a lifetime acquiring, they didn’t have much choice. I have a friend, who was one of the principal weapon designers at Los Alamos for many years. At age 50 or so, he decided he really didn’t want to make bombs anymore. He had had enough. What does a person like that do? There just aren’t very many options for a man like that, at that age.[38]
By the 1960s, military programs had come to employ nearly a third of the scientists and engineers in the United States. The militarization of science had become, and remains, a central organizing condition of U.S. government-funded science research. Even in 1998, nearly a decade after the end of the Cold War, military research and development represented 53 percent of the U.S. federal R & D budget.
Even outside the scope of military programs, a top-down, command-driven rhetoric of science has seeped into many aspects of national life. Billion-dollar foundations and massive government research contracts became commonplace. University professors mastered the intricate rules of grantsmanship and learned to walk the narrow path between consultation and conflict of interest. As federal tax policies underwrote and shaped private giving, the distinction between public and private grants began to blur. Lyndon Johnson brought the concept of policy-oriented social science to new levels in the pursuit of two wars—the Vietnam War and the War on Poverty. Presidential chronicler Theodore White described the Johnson years as the “Golden age of the action intellectual,” as experts were brought in “to shape our defenses, guide our foreign policy, redesign our cities, eliminate poverty, reorganize our schools.”[39] A few years later, President Nixon would invoke the military metaphor again when he declared “war on cancer” in his 1971 State of the Union speech. Each of these wars came with its concomitant experts, whose job was to reassure the public with confident promises that inevitable victory was near at hand, that there was “light at the end of the tunnel.”
The last quarter of the twentieth century saw the commercialization of big science, as the rise of the so-called “knowledge-based” industries—computers, telecommunications, and biotechnology—prompted a wide variety of corporate research initiatives. In 1970, federal government funding for research and development totaled $14.9 billion, compared to $10.4 billion from industry. By 1997, government expenditures were $62.7 billion, compared to $133.3 billion from industry. After adjusting for inflation, government spending had barely risen, while business spending more than tripled.[40] Much of this increase, moreover, took place through corporate partnerships with universities and other academic institutions, blurring the traditional line between private and public research. In 1980, industrial funding made up only 3.8 percent of the total research budget for U.S. universities. “Seldom controversial, it provided contacts and financial benefits usually only to individual faculty members, and on the whole it did not divert them from university responsibilities,” Nelkin noted. However, declining public funding in many areas of research “left many faculty and university administrators receptive to, indeed, eager for industrial support, and inevitably less critical of the implications for the ownership and control of research.”
First reluctantly and then eagerly, universities began to collaborate with industry in fields such as biotechnology, agriculture, chemistry, mining, energy, and computer science. “It is now accepted practice for scientists and institutions to profit directly from the results of academic research through various types of commercial ventures,” Nelkin observed in her 1984 book,[41] and what was a noteworthy trend back then has since become a defining characteristic of university research. Between 1981 and 1995, the proportion of U.S. industry-produced articles that were coauthored with at least one academic researcher roughly doubled, from 21.6 percent to 40.8 percent. The increase was even more dramatic in the field of biomedical research, where the number of coauthored articles quadrupled.[42] According to the Association of American Medical Colleges, corporate sponsorship of university medical research has grown from about 5 percent in the early 1980s to as much as 25 percent in some places today.[43]
In 1999, the Department of Plant and Microbial Biology at the University of California-Berkeley signed an unprecedented five-year, $25 million agreement with the Novartis biotech firm of Switzerland. In exchange for the funding, the university promised that Novartis would have first bid on a third of the research discoveries developed by the department. “The Berkeley agreement has inspired other major American research universities to seek similar agreements with industry,” noted the National Center for Public Policy and Higher Education.[44] But although the deal was popular with the department that received the money, it drew a different reaction from many of the professors in other departments. A survey conducted by the chairman of the university’s College of Natural Resources showed that two-thirds of the faculty in that college disagreed with the terms of the contract.
“We fear that in our public university, a professor’s ability to attract private investment will be more important than academic qualifications, taking away the incentives for scientists to be socially responsible,” stated professors Miguel Altieri and Andrew Paul Gutierrez in a letter to the university’s alumni magazine. Altieri’s academic career has been devoted to the study of “biological control”—the discipline of controlling agricultural pests through means other than pesticides. He noted bitterly that while money from Novartis was pouring in, university funding for biological control research had been eliminated. “For more than 40 years we trained leaders in the world about biological control . . . A whole theory was established here, because pesticides cause major environmental problems,” Altieri said.[45]
Another researcher, UC-Berkeley anthropologist Laura Nader, said the Novartis contract “sent a chill especially over younger, untenured faculty. Word gets around early . . . over the proper relationship between researchers and industry in a university setting. A siege mentality sets in, reminiscent of the McCarthy period and the so-called Red Scare, except then it was government which could be called to account and was, and now this is as yet unaccountable large companies.”[46]
Just as military funding for research carried with it a set of obligations that had nothing to do with the pursuit of knowledge, corporate funding has transformed scientific and engineering knowledge into commodities in the new “information economy,” giving rise to an elaborate web of interlocking directorates between corporate and academic boardrooms. By the end of the 1990s, the ivory tower of academia had become “Enterprise U,” as schools sought to cash in with licensing and merchandising of school logos and an endless variety of university-industry partnerships and “technology transfers,” from business-funded research parks to fee-for-service work such as drug trials carried out on university campuses. Professors, particularly in high-tech fields, were not only allowed but encouraged to moonlight as entrepreneurs in start-up businesses that attempted to convert their laboratory discoveries into commercial products. Just as science had earlier become a handmaiden to the military, now it was becoming a servant of Wall Street.
“We’re adopting a business instead of an economic model,” said chemist Brian M. Tissue of Virginia Polytechnic Institute and State University. “The rationale is collaborations are good because they bring in money. People say we can have better facilities and more students, and it’s a win-win situation, but it’s not. There can be benefits, but you’re not training students anymore; you’re bringing them in to work a contract. The emphasis shifts from what’s good for the student to the bottom line.”[47]
“More and more we see the career trajectories of scholars, especially of scientists, rise and fall not in relation to their intellectually-judged peer standing, but rather in relation to their skill at selling themselves to those, especially in the biomedical field, who have large sums of money to spend on a well-marketed promise of commercial viability,” observed Martin Michaelson, an attorney who has represented Harvard University and a variety of other leading institutions of higher education. “It is a kind of gold rush,” Michaelson said at a 1999 symposium sponsored by the American Association for the Advancement of Science. “More and more we see incentives to hoard, not disseminate, new knowledge; to suppress, not publish, research results; to titillate prospective buyers, rather than to make full disclosure to academic colleagues. And we see today, more than ever before, new science first—generally, very carefully, and thinly—described in the fine print of initial public offerings and SEC filings, rather than in the traditional, fuller loci of academic communication.”[48]
Industry-academic entanglements can take many forms, some of which are not directly related to funding for specific research. Increasingly, scientists are being asked to sit on the boards of directors of for-profit companies, a service that requires relatively little time but can pay very well—often in excess of $50,000 per year. Other private-sector perks may include gifts to researchers of lab equipment or cash, or generous payment for speeches, travel, and consulting.
Corporate funding creates a culture of secrecy that can be as chilling to free academic inquiry as funding from the military. Instead of government censorship, we hear the language of commerce: nondisclosure agreements, patent rights, intellectual property rights, intellectual capital. Businesses frequently require scientists to keep “proprietary information” under wraps so that competitors can’t horn in on their trade secrets. “If we could not maintain secrecy, research would be of little value,” argued the late Arthur Bueche, vice president for research at General Electric. “Research properly leads to patents that protect ideas, but were it not for secrecy, it would be difficult to create a favorable patent position.”[49]
In 1994 and 1995, researchers led by David Blumenthal at the Massachusetts General Hospital surveyed more than 3,000 academic researchers involved in the life sciences and found that 64 percent of their respondents reported having some sort of financial relationship with industry. They also found that scientists with industry relationships were more likely to delay or withhold publication of their data. Their study, published by the Journal of the American Medical Association, found that during the three years prior to the survey, 20 percent of researchers reported delaying publication of their research results for more than six months. The reasons cited for delaying publication included the desire to file patent applications based on their discoveries and a desire by some researchers to “slow the dissemination of undesired results.” The practice of withholding publication or refusing to share data with other scientists was particularly common among biotechnology researchers.[50]
“It used to be that if you published you could ask about results, reagents—now you have these confidentiality agreements,” said Nobel Prize-winning biochemist Paul Berg, a professor of biochemistry at Stanford University. “Sometimes if you accept a grant from a company, you have to include a proviso that you won’t distribute anything except with its okay. It has a negative impact on science.”
In 1996, Steven Rosenberg, chief of surgery at the U.S. National Cancer Institute, observed that secrecy in research “is underappreciated, and it’s holding back medical cancer research—it’s holding back my research.”
First, Do No Harmful Publicity
The problem of secrecy in science is particularly troubling when it involves conflicts of interest between a company’s marketing objectives and the public’s right to know. When research results are not to a sponsor’s liking, the company may use heavy-handed tactics to suppress them—even if doing so comes at the expense of public health and the common good.
One such case came to light in 1997 regarding the work of Betty Dong, a researcher at the University of California. In the late 1980s, the Boots Pharmaceutical company took an interest in Dong’s work after she published a limited study which suggested that Synthroid, a thyroid medication manufactured by Boots, was superior to drugs produced by the company’s competitors. Boots offered $250,000 to finance a large-scale study that would confirm these preliminary findings. To the company’s dismay, however, the larger study, which Dong completed in 1990, contradicted her earlier findings and showed that Synthroid was no more effective than the cheaper drugs made by Boots’s competitors. What followed was a seven-year battle to discredit Dong and prevent publication of her work. The contract Dong and her university had signed with the company gave it exclusive access to the prepublished results of the study as well as final approval over whether it would ever be published. The study sat on the shelf for five years while Boots bombarded the chancellor and other university officials with allegations of unethical conduct and quibbles over the study’s method, even though the company itself had previously approved the method. In 1994, Dong submitted a paper based on her work to the Journal of the American Medical Association. It was accepted for publication and already set in type when the company invoked its veto right, forcing her to withdraw it.[51]
In 1995, Boots was purchased by Knoll Pharmaceutical, which continued to suppress Dong’s conclusions. While she remained unable to publish her own results, Knoll published a reinterpretation of her data under the authorship of Gilbert Mayor, a doctor employed by the company. Mayor published his reanalysis of Dong’s data without acknowledging her or her research associates, a practice that the Journal of the American Medical Association would later characterize as publishing “results hijacked from those who did the work.”[52] After further legal battles and an exposé of Knoll’s heavy-handed tactics in the Wall Street Journal, Dong was finally allowed to publish her own version of the study in JAMA in 1997—nearly seven years after its completion. During those seven years, Boots/Knoll had used Synthroid’s claims of superiority to dominate the $600-million-per-year synthetic thyroid market. The publication of her work in JAMA prompted a class-action lawsuit on the part of Synthroid users who had been effectively duped into paying an estimated $365 million per year more than they needed for their medication. Knoll settled the lawsuit out of court for $98 million—a fraction of the extra profits it had made during the years it spent suppressing Dong’s study.[53]
Another attempt to suppress research occurred in 1995, when liver specialist Nancy Olivieri at the University of Toronto wanted to warn patients about the toxic side effects of a drug she was testing. The Canadian drug giant Apotex, which was sponsoring the study in hopes of marketing the drug, told her to keep quiet, citing a nondisclosure agreement that she had signed. When Olivieri alerted her patients anyway and published her concerns in the New England Journal of Medicine, Apotex threatened her with legal action and she was fired from her hospital, a recipient of hundreds of thousands of dollars each year in research funding from Apotex.
In 1997, David Kern, an occupational health expert at Brown University, discovered eight cases of a new, deadly lung disease among workers at Microfibres, Inc., a manufacturer of finely cut nylon flock based in Pawtucket, Rhode Island. Microfibres tried to suppress Kern’s finding, citing a confidentiality agreement that he had signed at the time of an educational visit to the company more than a year before the start of his research. When Kern spoke out anyway, administrators at the hospital and university where he worked (a recipient of charitable contributions from Microfibres) insisted that he withdraw a previously submitted scientific communiqué about the disease outbreak and that he cease providing medical care to his patients who worked at the company. Kern’s program—the state’s only occupational health center—was subsequently closed, and his job was eliminated.[54] Even more disturbing was the response of many of his research colleagues. “There were courageous folks who stood up for me, but most looked the other way,” he said. “I’m mightily discouraged by the failure of the community to do more.”[55]
In 1999, JAMA editor Drummond Rennie complained that the influence of private funding on medical research has created “a race to the ethical bottom.” Known cases of suppression may be only the tip of the iceberg. “The behavior of universities and scientists is sad, shocking, and frightening,” Rennie said. “They are seduced by industry funding, and frightened that if they don’t go along with these gag orders, the money will go to less rigorous institutions.”[56]
Beyond the problem of outright fraud and suppression, moreover, there is a larger and more pervasive problem: the systemwide bias that industry funding creates among researchers in commercially profitable fields. “Virtually every academic in biotechnology is involved in exploiting it commercially,” says Orville Chapman of the University of California at Los Angeles. “We’ve lost our credentials as unbiased on such subjects as cloning or the modification of living things, and we seem singularly reluctant to think it through.”57
Predetermined Outcomes
A host of techniques exist for manipulating research protocols to produce studies whose conclusions fit their sponsor’s predetermined interests. These techniques include adjusting the duration of a study (so that toxic effects do not have time to emerge), subtle manipulations of target and control groups or dosage levels, and subjective interpretations of complex data. Often such methods stop short of outright fraud, but lead to predictable results. “Usually associations that sponsor research have a fairly good idea what the outcome will be, or they won’t fund it,” says Joseph Hotchkiss of Cornell University. In Tainted Truth: The Manipulation of Fact in America, author Cynthia Crossen noted the striking correspondence between the results obtained through published research and the financial interests of its sponsors:
The consistency of research support for the sponsor’s desired outcome intrigued Richard Davidson, a general internist and associate professor of medicine at the University of Florida. “It struck me that every time I read an article about a drug company study, it never found the company’s drug inferior to what it was being compared to,” Davidson says. He decided to test that impression by reviewing 107 published studies comparing a new drug against a traditional therapy. Davidson confirmed what he had suspected—studies of new drugs sponsored by drug companies were more likely to favor those drugs than studies supported by noncommercial entities. In not a single case was a drug or treatment manufactured by the sponsoring company found inferior to another company’s product.58
When other researchers have examined the link between funding sources and research outcomes, they have reached conclusions similar to Davidson’s:
• In 1994, researchers in Boston studied the relationship between funding and reported drug performance in published trials of anti-inflammatory drugs used in the treatment of arthritis. They reviewed 56 drug trials and found that in every single case, the manufacturer-associated drug was reported as being equal or superior in efficacy and toxicity to the comparison drug. “These claims of superiority, especially in regard to side effects, are often not supported by the trial data,” they added. “These data raise concerns about selective publication or biased interpretation of results in manufacturer-associated trials.”59
• In 1996, researchers Mildred K. Cho and Lisa A. Bero compared studies of new drug therapies and found that 98 percent of the studies funded by a drug’s maker reached favorable conclusions about its safety and efficacy, compared to 76 percent of studies funded by independent sources.60
• In 1998, the New England Journal of Medicine published a study that examined the relationship between drug-industry funding and research conclusions about calcium-channel blockers, a class of drugs used to treat high blood pressure. There are safety concerns about calcium-channel blockers because of research showing that they present a higher risk of heart attacks than older and cheaper forms of blood pressure medication such as diuretics and beta-blockers. The NEJM study examined 70 articles on channel blockers and classified them into three categories: favorable, neutral, and critical. It found that 96 percent of the authors of favorable articles had financial ties to manufacturers of calcium-channel blockers, compared with 60 percent of the neutral authors and 37 percent of the critical authors. Only two of the 70 articles disclosed the authors’ corporate ties.61
• In October 1999, researchers at Northwestern University in Chicago studied the relationship between funding sources and the conclusions of studies of new cancer drugs. They found that studies sponsored by drug companies were only about one-eighth as likely to report unfavorable conclusions as studies paid for by nonprofit organizations.62
Drug research is not the only field in which this pattern of funding-related bias can be detected. In 1996, journalists Dan Fagin and Marianne Lavelle reviewed recent studies published in major scientific journals regarding the safety of four chemicals: the herbicides alachlor and atrazine, formaldehyde, and perchloroethylene, the carcinogenic solvent used for dry-cleaning clothes. When non-industry scientists did the studies, 60 percent returned results unfavorable to the chemicals involved, whereas industry-funded scientists came back with favorable results 74 percent of the time. Fagin and Lavelle observed a particularly strong biasing influence with respect to agribusiness financing for research related to farm weed control. “Weed scientists—a close-knit fraternity of researchers in industry, academia, and government—like to call themselves ‘nozzleheads’ or ‘spray and pray guys,’ ” they stated. “As the nicknames suggest, their focus is usually much narrower than weeds. As many of its leading practitioners admit, weed science almost always means herbicide science, and herbicide science almost always means herbicide-justification science. Using their clout as the most important source of research dollars, chemical companies have skillfully wielded weed scientists to ward off the EPA, organic farmers, and others who want to wean American farmers away from their dependence on atrazine, alachlor, and other chemical weedkillers.”63
Sometimes industry-funded studies become so self-promotional that they seem almost like parodies. In May 1998, the prestigious Kinsey Institute for Research in Sex, Gender and Reproduction teamed with the psychology department of Indiana University to study the effect of odor on women’s sexual arousal. “This is a complex area,” explained research leader Cynthia Graham in describing her study, which was sponsored by the Olfactory Research Fund, an organization financed by the perfume and cologne industry. Described by the Milwaukee Journal Sentinel as a “rigorous experiment,” the study asked 33 women to view an erotic movie or engage in sexual fantasy, while researchers measured physical changes to their genitals. To test the effect of fragrance on arousal, the experimenters had the women wear a necklace scented with either women’s perfume, men’s cologne, or water. “The strongest, scientifically significant finding from the study was that male cologne markedly increased sexual arousal among women in the two days after the end of a woman’s menstrual period,” the Journal Sentinel reported.64
The public today is bombarded with scientific information regarding the safety and efficacy of everything from drugs to seat belts to children’s toys. Eating garlic bread brings families closer together, says research sponsored by the Pepperidge Farm bakery, which makes frozen garlic bread. Eating oat bran lowers cholesterol, according to research sponsored by Quaker Oats. Eating chocolate may prevent cavities, says the Princeton Dental Resource Center, which is financed by the M&M/Mars candy company and is not a part of Princeton University. A daily glass of red wine reduces your risk of heart disease, say the doctors hired by the liquor industry. Chromium picolinate taken as a dietary supplement will help you burn off fat, says the dietary supplement industry. Zinc lozenges might shorten the duration of the common cold, reports a researcher who happens to hold 9,000 shares of stock in a zinc lozenge company.
Much of this information is confusing and contradictory. Sometimes the contradictions reflect genuine disagreements, but often they simply mirror the opposing interests of different companies and industries. Wearing sunscreen at the beach is important to avoid skin cancer, say doctors affiliated with Partners for Sun Protection, an organization sponsored by Schering-Plough, the pharmaceutical company that makes Coppertone sun lotion. On the other hand, studies sponsored by the International Smart Tan Network, a trade group representing tanning salons, claim that “regular tanning sessions could prevent as many as 30,000 cancer deaths every year in the United States.” According to the ISTN, “legitimate research” shows that “giant pharmaceutical firms” and “dermatology industry lobbyists” have fomented unwarranted “paranoia” about tanning-related skin cancers.
When covering these topics, journalists have a responsibility to do more than present a source simply as “a scientist from such-and-such university.” The public needs context in which to weigh the information it receives. Does the scientist or other expert receive any funding from companies with a stake in the topic? Are there other conflicts of interest? Is there a pattern to the expert’s past pronouncements or affiliations that suggests a particular ideological bent? Do the expert’s opinions match or contradict the opinions of the majority of other experts on the subject at hand? These questions deserve to be answered, but are rarely even asked.