7 Science Gone Wrong: Fraud and Other Failures
To anyone who cares about the scientific attitude, fraud might seem a topic that merits only perfunctory treatment. People who commit fraud are just cheats and liars, who obviously do not embrace the values of science, right? Why bother to examine it any further than that?
But I suggest that we take a more deliberate approach, for the examination of fraud will help us to understand not only what it means, by contrast, to have a good scientific attitude, but also how to take the measure of all those things that fall just short of fraud. If one has an overly simplistic view of fraud, for instance, one might miss the fact that most fraudsters do not see themselves as deliberately trying to falsify the scientific record, but instead feel entitled to a shortcut because they think that the data will ultimately bear them out. This is problematic on many levels, but it is a live issue whether this is a methodological failure or an attitudinal one. When researchers deceive themselves into thinking that it is all right to cut a few corners in their procedure, does this pave the way for the later commission of fraud, or is this already fraud itself? Actions matter, but so do intentions. If one starts off not intending to falsify anything, but only to shape the data one needs to perform an experiment, at what point do things go off the rails? Might there be a connection between the sorts of sloppy research practices we examined earlier in this book (like p-hacking and cherry picking) and the later falsification or fabrication of data that constitutes fraud itself? As a normative account, the scientific attitude can help us to sort through these problems.
We must start, though, by facing the problem at its worst. Fraud is the intentional fabrication or falsification of the scientific record.1 In the case of mere error, one’s fidelity to science is not at issue, for one can make a mistake without the intent to deceive. But in the case of fraud—where any flaws are deliberate—one’s commitment to the scientific attitude is squarely in question. In any activity as open and dependent upon the work of others as science, this is the one thing that cannot be tolerated. When one signs on to be a scientist, one is making a voluntary commitment to be open and honest in learning from experience. By committing fraud, one is putting oneself and one’s advancement ahead of this. Ideology, money, ego, and self-interest are supposed to take a back seat to evidence. It is sometimes said that—because scientific ideas can come from anywhere—there are no heretics in science. But fraud is the one true form of scientific heresy; it is not that one’s theories are different, it is that these theories are based on invented data. Thus fraud is seen as much worse than error, for fraud is by definition intentional, and what is at stake is nothing less than a betrayal of the scientific attitude itself.
Mere error turns out not to be very scary for science. As long as one has the right attitude about learning from empirical evidence, science is well equipped to deal with mistakes. And this is a good thing, because the history of science is replete with them. I am not here talking about the pessimistic induction: the claim that in the long run most of our scientific beliefs will turn out to be false.2 I am talking about the enormous errors and dead ends that knocked science off its track for centuries at a time. Phlogiston. Caloric. Ether. Yet it is important to point out that these mistakes were not frauds, and, in fact, they were in some cases pivotal for coming up with better scientific theories. Without phlogiston we might not have discovered oxygen. Without caloric we might not understand thermodynamics. Why is this? Because science is expected to learn from error. If one embraces the scientific attitude and follows the evidence, error will eventually be rooted out. I suppose one might try to make the same case for errors that are introduced by fraud—for if science is self-correcting, these too will eventually be found and fixed. But it is just such an enormous waste of time and resources to chase down the results of intentional error that this is where most scientists draw the line. It is not just the consequences of fraud that lead it to be so despised; it is the break of faith with scientific values. Nature is subtle enough; scientists don’t care to deal with any additional challenges created by deception.
But it is important to remember that there is another source of defect in scientific work. Between fraud and honest error, there is a murky category where it is not entirely clear whether one’s motives are pure. As we saw in chapter 5, scientific error can come from fraud, but it can also come from sloppiness, cognitive bias, willful ignorance, or laziness. I hope already to have established that the scientific attitude is a robust tool to mitigate error, whatever the source. Yet here—on the verge of claiming that fraud is the worst sort of affront one can make against the scientific attitude—we should revisit the question of how to divide these sources of error along the lines of intentional versus unintentional motivations.
The key here is to be explicit in the definition of fraud. If we define fraud as the intentional fabrication or falsification of scientific data, then there are two possible ways to read this:
(1) If one has committed fraud, then one has intentionally fabricated or falsified data.
(2) If one has intentionally fabricated or falsified data, then one has committed fraud.
As we know from logic, these two statements do not imply one another, and it is therefore possible for one of them to be true while the other is not. In this case, however, I think that both of them are true. If one is committing fraud, then it has to be intentional. As we saw in the broader definition of “research misconduct” (given in note 1 to this chapter), if a mistake is due to “honest error” or “difference of opinion,” it is not considered fraud. For fraud, it is necessary to have the intention to deceive. But we must then ask whether intentionally committing fabrication or falsification is sufficient for fraud. It is. Fabrication and falsification are not just any kinds of errors; by their very definition, they cannot be done by accident. So the minute one engages in these kinds of behaviors, it seems automatically to constitute fraud. We might thus seek to define fraud by combining (1) and (2) into the following biconditional statement: “One commits fraud if and only if one intentionally fabricates or falsifies scientific data.”3 Yet this still leaves open the crucial question of how to define intentionality. Is there perhaps a better way to characterize fraud to make this clear?
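Rendered in the notation of elementary logic (the symbols here are introduced only for illustration), let $F$ stand for “one has committed fraud” and $D$ for “one has intentionally fabricated or falsified data”:

$$\text{(1) } F \rightarrow D \qquad \text{(2) } D \rightarrow F \qquad \text{(1) and (2) together: } F \leftrightarrow D$$

Statement (2) is the converse of statement (1); neither entails the other, which is why each must be defended separately before they can be combined into the biconditional.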
Let’s turn now to the scientific attitude, and see what leverage this might give us in understanding the concept of scientific fraud. Throughout this book, I have been arguing that the scientific attitude is what defines science; that it can help us to understand what is special about science and why there is unique warrant behind scientific beliefs. Since I just got done saying that fraud is the worst sort of crime one can commit against science, it would seem to follow that fraud must constitute a complete repudiation of the scientific attitude. But now consider the following two ways of interpreting this claim:
(3) If one has committed fraud, then one does not have the scientific attitude.
(4) If one does not have the scientific attitude, then one has committed fraud.
There is an obvious problem here, for I think that thesis (3) is true, but (4) is not. How could that be? With thesis (3) it seems obvious that if someone has committed fraud, that person does not have the scientific attitude. To fabricate or falsify data is in direct conflict with the idea that one cares about empirical evidence and is committed to holding and changing one’s beliefs on this basis. So why then is thesis (4) false? The issue is a subtle one: in some situations thesis (4) may well be true, but it is not necessarily true in all cases.4 To say that “if one does not have the scientific attitude, then one has committed fraud” is to make a large presumption. First, one has to be investigating in an empirical field; literature does not have the scientific attitude, but so what? Second, one is presuming that if one has the wrong attitude during empirical inquiry, one will definitely act on it. But we know from human behavior that this is not always the case. And third, what about the issue of intentionality? From thesis (2) above, it seems that if we have intentionally fabricated or falsified data, then we have committed fraud. But there are many different levels of intentionality and many different reasons why someone might not have the scientific attitude.
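The same notation makes the asymmetry plain (again, the symbols are only illustrative), with $S$ standing for “one has the scientific attitude”:

$$\text{(3) } F \rightarrow \neg S \qquad \text{(4) } \neg S \rightarrow F$$

Thesis (4) is the converse of thesis (3), and a conditional does not entail its converse: a lazy or biased researcher can make $\neg S$ true while $F$ remains false, so (3) can hold even when (4) fails.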
As we saw in chapter 5, it could be that some researchers are the victims of unconscious cognitive bias. Or perhaps they are just lazy or sloppy. Are they also frauds? There could be a whole host of subterranean psychological reasons why not having the scientific attitude is not someone’s fault. It is not necessarily intentional when someone violates the scientific attitude. But here is the key question. What about those cases in which someone does intentionally engage in shady research practices? What about all of those less-than-above-board research practices like p-hacking or cherry picking data that I was railing against in chapter 5? Why aren’t those considered fraud the minute they are done intentionally? The relevant question in making a determination of fraud is not just whether those actions are done intentionally; it is whether they also involve fabrication or falsification. Remember our working definition of fraud: the intentional fabrication or falsification of scientific data. (Recall too that this is a biconditional relationship.) The reason that p-hacking isn’t normally considered fraud isn’t that the person who did it didn’t mean to; it’s that, as egregious as it may seem, p-hacking does not quite rise to the level of falsifying or fabricating data. One is misleading one’s scientific colleagues, perhaps, but not fabricating evidence. One may be leaving a study open to get more data so that one can publish, but this is not quite falsifying.5
Consider an analogy with lying. To tell a bald-faced lie is to say something false while knowing that it is false. But what about those instances where we have not lied, but we have not exactly told the whole truth either? This is patently dishonest, but not (quite) the same as lying. This is precisely the analogy we are looking for to mark off the difference between questionable research practices and fraud. P-hacking, selective data reporting, and the like are not considered fraud by the standard definition because they do not involve fabrication or falsification of data. Yet they are not altogether honest either.6 They are intentional deceptions that fall short of fraud. They may be a crime against the scientific attitude, but they are not quite a felony. If done intentionally we should hope that these practices can be exposed and discouraged—and even that the scientific attitude (which helps us to understand what is so wrong about fraud) may provide a tool to help us do this—but this does not mean that we should confuse them with fraud.
The scientific attitude may be thought of as a spectrum, with complete integrity at one end and fraud at the other. The criterion that delineates fraud is the intentional fabrication or falsification of data. One can fall short of this either because one has made an unintentional mistake or because one’s conduct, though misleading, did not rise to the level of fabrication or falsification. Into the latter class, I would put many of those “misdemeanors” against the “degrees of freedom” one has as a scientific researcher.7 The scientific attitude is a matter of degree; it is not all or nothing. Fraud may be thought of as occurring when someone violates the scientific attitude and their behavior rises to the level of fabrication or falsification. Yet a researcher can have an eroded sense of the scientific attitude and not go this far. (Cheating on the scientific attitude thus seems necessary, but not sufficient, for committing fraud.)
While it seems valuable to draw a bright line where one crosses over into fraud, this does not mean that “anything goes” short of it. Using the scientific attitude to mark off what is special about science should be able to help us with both tasks. In this chapter, I will argue that we may use the scientific attitude to gain a better understanding of what is so egregious about fraud and to police the line between fraud and other failures of the scientific attitude. In doing so, I hope to illuminate the many benefits that an understanding of the scientific attitude may offer for identifying and discouraging those shoddy research practices that fall just short of fraud as well. As we saw in chapter 5, the scientific attitude can help us to identify and fight all manner of error. But the proper way to do this is to understand each error for what it is. Some will find anything short of complete honesty in the practice of science to be deplorable. I commend this commitment to the scientific attitude. Yet science must survive even when some of its practitioners—for whatever reason—occasionally misbehave.
Why Do People Commit Fraud?
The stereotypical picture of the scientific fraudster as someone who just makes up data is not necessarily accurate. Of course this does occur, and it is a particularly egregious form of fraud, but it is not the only or even the most common kind. Just as guilty are those who think that they already know the answer to some empirical question, and can’t be bothered to take the time—due to various pressures—to get the data right.
In his excellent book On Fact and Fraud, David Goodstein provides a bracing analysis of numerous examples of scientific fraud, written by someone who for years has been charged with investigating it.8 After making the customary claim that science is self-correcting, and that the process of science will eventually detect the insertion of any falsehood (no matter whether it was intentional or unintentional),9 Goodstein goes on to make an enormously provocative observation.10 He says that in his experience most of those who have committed fraud are not those who are deliberately trying to insert a falsehood into the corpus of science, but rather those who have decided to “help things along” by taking a shortcut to some truth that they “knew” would be vindicated.11 This assessment should at least give us pause to reconsider the stereotypical view of scientific fraud.12 Although there are surely examples of fraud that have been committed by those who deliberately insert falsehoods into the corpus of science, what should we say about the “helpers”? Perhaps here the analogy with the liar (who intentionally puts forth a falsehood) is less apt than that of the impatient egoist, who has the hubris to short-circuit the process that everyone else has to follow. Yet, seen in this light, scientific fraud is a matter not merely of bad motives, but of having the arrogance to think that one deserves to take a shortcut in how science is done.
It is notable that concern with hubris in the search for knowledge predates science. In his dialogues, Plato makes the case (through Socrates) that false belief is a greater threat to the search for truth than mere error.13 Time and again, Socrates exposes the ignorance of someone like Meno or Euthyphro who thought they knew something, only to find out quite literally that they didn’t know what they were talking about. Why is this important? Not because Socrates feels that he himself has all the answers; Socrates customarily professes ignorance. Instead, the lesson seems to be that error is easier to recover from than false belief. If we make an honest mistake, we can be corrected by others. If we accept that we are ignorant, perhaps we will go on learning. But when we think that we already know the truth (which is a mindset that may tempt us to cut corners in our empirical work) we may miss the truth. Although the scientific attitude remains a powerful weapon, hubris is an enemy that should not be underestimated. Deep humility and respect for one’s own ignorance is at the heart of the scientific attitude. When we violate this, we may already be on the road to fraud.14
If some, at least, commit fraud with the conviction that they are merely hurrying things along the road to truth, is their attitude vindicated? No. Just as we would not vindicate the vigilante who killed in the name of justice, the “facilitator of truth” who takes shortcuts is guilty not just of bad actions but of bad intent. Even with so-called well-intentioned fraud, the deceit was still intentional. One is being dishonest not merely in one’s actions but in one’s mind. Fraud is the intentional fabrication or falsification of evidence, in order to convince someone else to believe what we want them to believe. But without the most rigorous methods of gathering this evidence, there is no warrant. Merely to be right, without justification, is not knowledge. As Socrates puts it in Meno, “right opinion is a different thing than knowledge.”15 Knowledge is justified true belief. Thus fraud short-circuits the process by which scientists formulate their beliefs, even if one guesses right. Whatever the motive, one who commits fraud is doing it with full knowledge that this is not the way that science is supposed to be done. Whether one thought that one was “inserting a falsehood” or “helping truth along” does not matter. The hubris of “thinking that you are right” is enough to falsify not just the result but the process. And in a process as filled with serendipity and surprise as science, the danger of false belief is all around us.
The Thin Crimson Line
One problem with judging fraud is the use of euphemistic language in discussing it. Understanding the stakes for a researcher’s academic career, universities are sometimes reluctant to use the words “fraud” or “plagiarism” even in cases that are quite clear-cut.16 If someone is found guilty (or sometimes even suspected) of fraud, they are all but “excommunicated” from the community of scientists. Their reputation is dishonored. Everything they have ever done—whether it was fraudulent or not—will be questioned. Their colleagues and coauthors will shun them. Sometimes, if federal money is mismanaged or they live in a country with strict laws, they may even go to jail.17 Yet the professional judgment of one’s peers is often worse (or at least more certain) than any criminal punishment. Once the taint of fraud is in the air, it is very hard to overcome.18 It is customary for someone who has been found guilty of fraud simply to leave the profession.
One recent example of equivocating in the face of fraud is the case of Marc Hauser, former Professor of Psychology at Harvard University, who was investigated both by Harvard and by the Office of Research Integrity (ORI) at the National Institutes of Health. The results of Harvard’s own internal investigation were never made public. But in the federal finding that came out some time later, the ORI found that half of the data in one of Hauser’s graphs was fabricated. In another paper he “falsified the coding” of some data. In another he “falsely described the methodology used to code the results for experiments.” And the list goes on. If this isn’t fraud, what is? Yet the university allowed Hauser to say—before the federal findings came out—that his mistakes were the result of a “heavy workload” and that he was nonetheless willing to accept responsibility “whether or not I was directly involved.” At first Hauser merely took a leave of absence, but after his faculty colleagues voted to bar him from teaching, he quietly resigned. Hauser later worked at a program for at-risk youth.19
Although many may be tempted to use the term “research misconduct” as a catch-all phrase that includes fraud (or is a euphemism for it), this blurs the line between intentional and unintentional deception. Does research misconduct also include sloppy or careless research practices? Are data fabrication and falsification in the same boat as improper data storage? The problem is a real one. A university trying to come up with a policy on fraud might write it somewhat differently than a policy on scientific misconduct. As Goodstein demonstrates in his book, the latter can tempt us to include language about nonstandard research practices as something we may want to discourage and even punish but that does not rise to the level of fraud. Goodstein writes, “There are many practices that are not commonly accepted within the scientific community, but don’t, or shouldn’t, amount to scientific fraud.”20 What difference does this make? Some might argue that it doesn’t matter at all: that even bad research practices like “poor data storage or retention,” “failure to report discrepant data,” or “overinterpretation of data” represent a failure of the scientific attitude. As previously noted, the scientific attitude isn’t all or nothing. Isn’t engaging in “deviation from accepted practices”21 also to be discouraged? Maybe so, but I would argue that there is a high cost for not differentiating this from fraud.
Without a sharp line, it may sometimes be difficult even for researchers themselves to tell when they are on the verge of fraud. Consider again the example of cold fusion. Was this deception or self-deception—and can these be cleanly separated?22 In his book Voodoo Science, Robert Park argues that self-delusion evolves imperceptibly into fraud.23 Most would disagree, because fraud is intentional. As Goodstein remarks, self-delusion and other examples of human foibles should not be thought of as fraud.
Mistaken interpretations of how nature behaves do not and never will constitute scientific misconduct. They certainly tell us something about the ways in which scientists may fall victim to self-delusion, misperceptions, unrealistic expectations, and flawed experimentation, to name but a few shortcomings. But these are examples of all-too-human foibles, not instances of fraud.24
Perhaps we need another category. Goodstein argues that even though the cold fusion case was not fraud it comes close to what Irving Langmuir calls “pathological science,” which is when “the person always thinks he or she is doing the right thing, but is led into folly by self-delusion.”25 So perhaps Park and Goodstein are both right: even if self-delusion is not fraud, it may be a step on the road that leads there. I think we need to take seriously the idea that what starts as self-delusion might later (like hubris) lead us into fraud. The question here is whether tolerating or indulging in self-delusion for long enough erodes our attitude toward what good science is supposed to look like.26
Moreover, even if we are solely concerned (as we are now) with intentional deception, it might be a good idea to examine any path that may lead there. It is important to recognize that self-delusion, cognitive bias, sloppy research practices, and pathological science are all dangerous—even if we do not think that they constitute fraud—precisely because if left unchecked they might erode respect for the scientific attitude, which can lead to fraud. But this does not mean that actual fraud should not be held distinct. Neither should there be any excuse for lack of clarity in university policies over what actually is fraud, versus what practices we merely wish to discourage. We are right to want to encourage researchers to have impeccable attitudes about learning from evidence, even if we must also draw a line between those who are engaging in questionable or nonstandard research practices and those who have committed fraud.
Any lack of clarity—or lack of commitment actually to use the word “fraud” in cases that are unequivocal—can be a problem, for it allows those who have committed fraud sometimes to hide behind unspecified admissions of wrongdoing and cop to the fact that they made mistakes without truly accepting responsibility for them. This does a disservice not only to the majority of honest scientists, but also to those who have not (quite) committed fraud, for it makes the community of scientists even more suspicious when someone has committed only a mistake (e.g., faulty data storage) yet has not committed full-blown fraud.27 If fraud is defined merely as one type of scientific misconduct, or we use the latter phrase as a euphemism for the former, whom does this serve?
If the scientific attitude is our criterion, when we find fraud we should name and expose it. This will act as a deterrent to others and a signal of integrity for science as a whole.
We must be vigilant to find and expose such wrongdoers, careful at the same time not to spread the blame beyond where it belongs and unintentionally stifle the freedom to question and explore that has always characterized scientific progress.28
When an allegation of fraud is made public, the punishment from one’s community can (and should) be swift and sure. But first it must not be covered up. We can acknowledge the pressures to do so, but cover-ups tarnish the reputation of science. For when blame is not cast precisely where it should be—and some suspect that fraud is too often excused or covered up—the unintended consequence can be that an injustice is done to those who are merely accused of it. When fraud is selectively punished, those who are only accused may be presumed guilty. We see evidence of this in the previously mentioned scandals over reproducibility and article retraction. Scientific errors sometimes happen. Some studies are irreproducible and/or are retracted for reasons that have nothing whatsoever to do with fraud. Yet if there is no sharp line for what constitutes fraud—and we retreat into the weasel words “research misconduct”—it is far too easy to say “a pox on all your houses” and look only at external events (like article retraction) and assume that these are a proxy for bad intent. The sum result is that some fraudsters are allowed to get away with it, while some who have not committed fraud are falsely accused. None of this is good for science.
When left to scientists rather than administrators, there is usually no equivocating about naming and punishing actual instances of fraud. Indeed, I see it as one of the virtues of using the scientific attitude to distinguish science from nonscience that it explains why scientists are so hard on fraud. If we talked more about the scientific attitude as an essential feature of science, this might allow scientists more easily to police the line between good and bad science.29 Some may wonder why this would be. After all, if the process of group scrutiny of individual ideas in science is so good, it will catch all types of errors, whether they were committed intentionally or not. But this misses the point, which is that science is precisely the kind of enterprise where we must count on most policing to be self-policing. If science were a dishonest enterprise where everyone cheated—and it was the job of peer reviewers to catch them—science would break down. Thus fraud is and should be recognized as different in kind from other scientific mistakes, for it represents a breach of faith in the values that bind scientists together.
The Vaccine–Autism Debacle
We are now in a position to consider the impact that scientific fraud can have not just on scientists but on the entire community of people who rely on science to make decisions about their daily lives. In 1998, Dr. Andrew Wakefield published a paper with twelve coauthors in the prestigious British medical journal the Lancet, which claimed to have found a link between the classic MMR triple vaccine and the onset of autism. If true, this would have been an enormous breakthrough in autism research. Both the public and the press demanded more information, so along with a few of his coauthors Wakefield held a press conference. Already, questions were being raised about the validity of the research. As it turned out, the paper was based on an extremely small sample of only twelve children. There were, moreover, no controls; all of the children in the study had been vaccinated and had autism. While this may sound to the layperson like good evidence of a causal link, to someone with training in statistics, questions will naturally arise. For one, how did the patients come to the study? This is important: far from being a randomized double-blind clinical study (where researchers randomly test their hypothesis on only half of a sample population, with neither the subject nor the researcher knowing who is in which half), or even a “case control study” (where investigators examine a group that has been naturally exposed to the phenomenon in question),30 Wakefield’s paper was a simple “case series” study, which is perhaps the equivalent of finding out by accident that several people have the same birthday, then mining them for further correlations. Obviously, with the latter, there can be a danger of selection bias. Finally, a good deal of the study’s evidence for a correlation between vaccines and autism was based on a short timeline between vaccination and onset of symptoms, yet this was measured through parental recollection.
Any one of these things would be enough to raise suspicions in the minds of other researchers, and they did. For the next several years, medical researchers from all over the world performed multiple studies to see if they could replicate Wakefield’s proposed link between vaccines and autism. A good deal of speculation focused on the question of whether the preservative thimerosal, used at the time in several other childhood vaccines (though not in the MMR shot itself), might have caused mercury poisoning. In the meantime, just to be safe, several countries stopped using thimerosal while research was underway. But, in the end, none of the studies found any link.
Epidemiologists in Finland pored over the medical records of more than two million children … finding no evidence that the [MMR] vaccine caused autism. In addition, several countries removed thimerosal from vaccines before the United States. Studies in virtually all of them—Denmark, Canada, Sweden, and the United Kingdom—found that the number of children diagnosed with autism continued to rise throughout the 1990s, after thimerosal had been removed. All told, ten separate studies failed to find a link between MMR and autism; six other groups failed to find a link between thimerosal and autism.31
Meanwhile, a few stunning facts about Wakefield’s original study came to light. In 2004, it was discovered that Wakefield had been on the payroll of an attorney who was planning a massive lawsuit against an MMR vaccine manufacturer. Worse, it turned out that almost half the children who had been reported on in Wakefield’s study had been funneled to him through the lawyer. Finally, it was learned that just before Wakefield published his study, he had filed a patent for a competing vaccine to the classic MMR shot.32 Far from mere selection bias, this was a massive undisclosed conflict of interest that raised numerous questions over Wakefield’s motives. Within days, ten of Wakefield’s coauthors took their names off the study.
But by this point it was too late. The public had already heard the rumors and vaccination rates had begun to drop. In Ashland, Oregon, there was a 30 percent vaccination exemption rate. In Marin County, California, the exemption rate was more than three times that of the rest of the state.33 With such pockets of vaccine resistance, doctors began to worry about the loss of “herd immunity,” which occurs when the vaccination rate falls so low that one can no longer count on the “free rider” benefit of remaining unvaccinated in a community where most others have been vaccinated. And the results were devastating. After being beaten to a standstill, measles, whooping cough, diphtheria, and other diseases began to make a comeback:
[Measles] is the most infectious microbe known to man and has killed more children than any other disease in history. A decade after the World Health Organization (WHO) declared the virus effectively eradicated everywhere in the Americas save for the Dominican Republic and Haiti, declining vaccination rates have led to an explosion of outbreaks around the world. In Great Britain, there’s been more than a thousandfold increase in measles cases since 2000. In the United States, there have been outbreaks in many of the country’s most populous states, including Illinois, New York, and Wisconsin.34
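A standard back-of-the-envelope calculation shows why such pockets are so dangerous (the figures for measles are textbook estimates, not drawn from the sources quoted here). Herd immunity requires that the immune fraction $V$ of a community satisfy

$$V \geq 1 - \frac{1}{R_0},$$

where $R_0$ is the number of secondary infections a single case produces in a fully susceptible population. For measles, $R_0$ is commonly estimated at 12 to 18, which puts the threshold at roughly 92 to 94 percent. A community with a 30 percent exemption rate falls hopelessly short of it.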
It didn’t help that many in the media were whipping up the story, trying to tell “both sides” of the vaccine “controversy.”35 Meanwhile, many parents of autistic children didn’t care about any alleged irregularities in Wakefield’s work. He continued to speak at autism conferences worldwide, where he was treated as a hero. When the Lancet finally retracted his paper (in 2010), and Wakefield was stripped of his medical license in Britain, conspiracy theories began to run wild. Why was his work being suppressed? Angry parents (including a number of Hollywood celebrities) were already organized and furious with what they saw as a cover-up. If thimerosal wasn’t dangerous, why had it been removed?
Then in 2011, definitive word came: Wakefield’s work was a fraud. In addition to the severe conflict of interest noted above, Brian Deer (an investigative journalist who had already broken a good deal of the earlier revelations in 2004) finally had a chance to interview the parents of Wakefield’s patients and examine their medical records. And what he found was shocking. “No case was free of misreporting or alteration.”36 Wakefield had altered the medical records of every single child in the study.
Three of nine children reported with regressive autism did not have autism diagnosed at all. Only one child clearly had regressive autism.
Despite the paper claiming that all 12 children were “previously normal,” five had documented pre-existing developmental concerns.
Some children were reported to have experienced first behavioural symptoms within days of MMR, but the records documented these as starting some months after vaccination. …
The parents of eight children were reported as blaming MMR, but 11 families made this allegation at the hospital. The exclusion of three allegations—all giving times to onset of problems in months—helped to create the appearance of a 14 day temporal link.
Patients were recruited through anti-MMR campaigners, and the study was commissioned and funded for planned litigation.37
The British Medical Journal (perhaps the second-most prestigious medical journal in Britain, after the Lancet) took the unprecedented step of accepting Deer’s work as definitive evidence of fraud and, after it had been peer reviewed, published his paper alongside their own editorial, which concluded that “clear evidence of falsification of data should now close the door on this damaging vaccine scare” and called Wakefield’s work an “elaborate fraud.”38 They concluded:
Who perpetrated this fraud? There is no doubt that it was Wakefield. Is it possible that he was wrong, but not dishonest: that he was so incompetent that he was unable to fairly describe the project, or to report even one of the 12 children’s cases accurately? No. A great deal of thought and effort must have gone into drafting the paper to achieve the results he wanted: the discrepancies all led in one direction; misreporting was gross.39
A few months later another commentator called Wakefield’s fraud “the most damaging medical hoax of the last 100 years.”40 Four years later, in early 2015, there was a measles outbreak with over a hundred confirmed cases across fourteen states in the US.41
As we can see, scientific fraud is ugly and the fallout can be massive.42 Yet one of the most interesting parts of the story is the enormous scorn heaped on Wakefield’s work by the scientific community (juxtaposed, unfortunately, against public confusion and willful ignorance enabled by the media) before he was proven to be a fraud. Why did this occur? If fraud must be intentional, how did the scientific community seem to reach a consensus in advance of seeing proof that Wakefield had manipulated data? The answer is that although fraud is perhaps the most egregious form of intentional misconduct, it is not the only kind of cheating one can do. Once it had come to light that Wakefield had an enormous undisclosed conflict of interest, his intentions were suspect. Even though no one had yet proven that his financial interests had colored his scientific work, where there was so much smoke, few in the scientific community could see how there was not a huge fire behind it. Since Wakefield had already forsaken a core principle of scientific practice—that one must disclose in advance all possible conflicts of interest—many concluded that he didn’t deserve the benefit of the doubt. And they were right. Yet one mourns that the scientific community’s self-correction in this case still has not made its way to all corners of the general population.43
On a Happier Note
I would like to end on a brighter note. In this chapter, we have encountered perhaps the ugliest face of science. But what should a scientist do if his or her theory isn’t working? What if the time and career pressures are massive and the data are coming out all wrong?
A few years before Andrew Wakefield’s paper, a little-known British astronomer named Andrew Lyne stood before several hundred colleagues at the American Astronomical Society meeting in Atlanta, Georgia. He had been invited to give a paper on his stunning discovery of a planet orbiting a pulsar. How could that happen? A pulsar is the result of a star that has exploded in a supernova, which theoretically should have destroyed anything even close to its orbit. And yet, after rechecking his results, the planet remained, so Lyne published his paper in the prestigious journal Nature. But now there was a problem. A few weeks before his trip to Atlanta, Lyne discovered a crucial error in one of his calculations: he had forgotten to account for the fact that the Earth’s orbit was elliptical rather than circular. This was a mistake from first-year physics. When he made the correction, “the planet disappeared.” But, standing there that day in front of his colleagues, Lyne made no excuse for himself. He told the audience what he had found and then told them why he had been wrong, after which they gave him a standing ovation. It was “the most honorable thing I’ve ever seen,” said one astronomer who was present. “A good scientist is ruthlessly honest with him- or herself, and that’s what you’ve just witnessed.”44
That is the true spirit of the scientific attitude.