BRIAN NOSEK KNOWS all about HARKing, the practice of hypothesizing after the results are known. The psychology professor at the University of Virginia realized to his chagrin that he was guilty of it himself. “When you talk to graduate students in my lab, they will describe this for you. We sit down at the beginning of a semester and talk about experimental design. They go and do the study. When they come back, and we’re looking at the data, the first question that I have is, why did we do the study?” He can’t typically remember what hypothesis they were trying to test, so he can’t determine whether the results confirm a hypothesis or explore a new one. “We do both [confirmation and exploration] all the time, but it’s hard to distinguish it because we’re busy. We’re distracted. We’re just doing lots of stuff.”
After thinking about his own research practices, Nosek had an epiphany. Simply increasing transparency could go a long way toward reducing the reproducibility problems that plague biomedical research. For starters, scientists would avoid the pitfall of HARKing if they did a better job of keeping track of their ideas—especially if they documented what they were planning to do before they actually sat down to do it. Though utterly basic, this idea is not baked into the routines in Nosek’s field of psychology or in biomedical research. So Nosek decided to do something about that. He started a nonprofit called the Center for Open Science, housed incongruously in the business center of the Omni Hotel in downtown Charlottesville, Virginia. His staff, mostly software developers, sit at MacBook computers hooked up to gleaming white displays. Everyone works in one big, open room and can wander over to cupboards stocked with free food. The main project at the center is a data repository called the Open Science Framework.
Nosek put this new system of organization and transparency to the test in 2012 by trying to reproduce some of the studies in his field. After he floated this idea on a listserv, more than two hundred other scientists from around the world said they wanted to get in on the action. Over the next few years, this loose affiliation of psychologists selected one hundred research papers and set about redoing them. The results made news around the world. “Psychology’s Fears Confirmed: Rechecked Studies Don’t Hold Up,” read the page-one New York Times headline on August 28, 2015. Two-thirds of the reproduced results were so weak that they didn’t reach statistical significance. Many of those at least leaned in the same direction as the original study but could not on their own be considered evidence of an effect. About a third of the studies actually suggested there was either no effect whatsoever or even an effect opposite to what the original paper reported. There has been some pushback against these findings, but the broad conclusions still stand—and Nosek happily pointed out that his critics had made their case by accessing his readily available working material, which in itself was a triumph for transparency.
Nosek suspects that a lot of the problems he identified had arisen because the scientists running these initial studies hadn’t clearly distinguished between exploratory and confirmatory research. Many were most likely exploratory studies, but the researchers ended up using statistical tests appropriate for evaluating a predetermined hypothesis. To avoid this, the Open Science Framework invites scientists to register their hypotheses in advance so that they can later demonstrate that their studies are indeed confirmatory. This is not a new idea. The Food and Drug Administration Modernization Act of 1997 requires scientists running clinical trials on potential new drugs or devices to register their hypotheses in advance in a federal repository called ClinicalTrials.gov, set up by the National Institutes of Health (NIH) in 2000. This law has another important salutary effect: some drug companies had the habit of simply not publishing studies if the results were not favorable to the drug they were investigating. This is known as the file drawer effect because studies end up getting filed away rather than appearing in the scientific literature. That tactic is now harder to hide.
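To see how the file drawer distorts a literature, consider a minimal simulation sketch (my own illustration, with made-up numbers, not data from any study described here). If only statistically significant results ever see print, a batch of trials of a completely ineffective drug still produces a published record that looks unanimously positive.

```python
# Illustrative sketch of the file drawer effect (all numbers hypothetical):
# simulate many trials of a drug with NO true effect, then "publish" only
# the ones that happen to reach p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_per_group = 1000, 20

published_effects = []
for _ in range(n_studies):
    # Both groups are drawn from the same distribution: the true effect is zero.
    drug = rng.normal(0.0, 1.0, n_per_group)
    placebo = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(drug, placebo)
    if p < 0.05:  # only "positive" studies escape the file drawer
        published_effects.append(drug.mean() - placebo.mean())

print(f"studies run:       {n_studies}")
print(f"studies published: {len(published_effects)}")  # roughly 5%, pure chance
print(f"mean |effect| in the published record: "
      f"{np.mean(np.abs(published_effects)):.2f}")  # sizable, yet the truth is 0
```

Registries like ClinicalTrials.gov attack exactly this distortion: once every trial is on the record, the studies that stayed in the drawer are no longer invisible.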
This federal registration system is far from perfect. Many scientists never follow up and report their data, despite the law. And researchers still sometimes change their goals after an experiment is well under way. But at least those critiquing their work can bring these changes to light. Ben Goldacre, a doctor and gadfly in the United Kingdom, has exposed many examples of studies that fail to report the results they said they would, or that present unanticipated, exploratory findings as though they were confirmatory. He hectors journals to publish clarifications when he finds evidence of this but has met with limited success.
Robert Kaplan and Veronica Irvin at the NIH set out to see whether the law requiring scientists to declare their end points in advance really made a difference. They reviewed major studies of drugs or dietary supplements supported by the National Heart, Lung and Blood Institute between 1970 and 2012 and came up with a startling answer. Of the thirty big studies done before the law took effect, 57 percent showed that the drug or supplement being tested was beneficial. But once scientists had to announce in advance what exactly they were looking for, the success rate plummeted. Only 8 percent of the studies (two out of twenty-five) published after 2000 confirmed the preregistered hypothesis.
This doesn’t prove that the early studies had changed their goals in midstream, but it’s highly suggestive. Kaplan and Irvin note that in many of the twenty-five papers published after 2000, scientists still reported benefits they had not explicitly set out to look for, but they dutifully pointed out that these were unexpected findings of an exploratory nature, not data supporting a hypothesis. If the scientists hadn’t been required in advance to say what they were looking for, “it is possible that the number of positive studies post-2000 would have looked very similar to the pre-2000 period,” they wrote.
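Just how suggestive those numbers are can be checked with simple arithmetic. Here is a rough sketch using Fisher’s exact test (assuming, for illustration, that the reported 57 percent corresponds to 17 of the 30 pre-2000 studies):

```python
# A rough significance check on the pre- and post-registration success rates.
# The 17/30 split is an assumption inferred from the reported 57 percent.
from scipy.stats import fisher_exact

#            positive, negative
pre_2000 = [17, 13]    # big studies before mandatory registration
post_2000 = [2, 23]    # studies published after 2000

odds_ratio, p_value = fisher_exact([pre_2000, post_2000])
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.5f}")
# The tiny p-value says a drop this steep is very unlikely to be a fluke,
# though the test cannot say *why* the success rate fell.
```

The statistics, in other words, back up Kaplan and Irvin’s suspicion that something other than chance separates the two eras.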
Clearly, biomedicine is better off with transparency. Nosek envisions extending the idea far beyond the basic starting point of registering ideas in advance. The Open Science Framework allows scientists to organize their entire experiment by providing databases for depositing every aspect of the research, from the algorithms and methods scientists used to analyze their findings to the raw data itself. Data collected by a postdoc would still be easy to find long after he or she had moved on to another lab. And collaborators could easily (and selectively) share the resources involved in their project. Eventually, in Nosek’s glimmering future, it should be easy to convert the text and data in a project into a publishable article.
The challenge is to convince scientists to use this system. It seems bureaucratic and cumbersome—even people on Nosek’s team told me it took them a while to adjust to it. Scientists have to be persuaded that it actually solves problems they have and doesn’t just create more work for them. Nosek’s pitch is that the time they put in at the start of the process will streamline their research over the course of an entire experiment—and maybe even catch methodological problems before an experiment gets under way. He is not above paying scientists to publish preregistered experiments—he landed a $1 million grant so he could offer $1,000 apiece to 1,000 scientists as an incentive.
Sharing data and methods, whether through the Open Science Framework or some other means, would make science more transparent and in principle more reproducible. In fact, some of the major journals make data sharing a requirement of publication, and many require authors to provide their reagents to other labs on request. Federal rules have openness requirements as well, but these are rarely enforced. In truth, scientists don’t reliably play by these rules, even when taxpayers fund their research as a public good.
“Sharing of reagents, as long as I’ve worked in the field, has been highly variable,” Mark Winey at the University of California, Davis, told me. And he’s been a biologist for nearly thirty years. “Some labs, you just know they aren’t going to share something, even if they publish it in a journal that has a policy for sharing. The journals have no way to enforce it.” Winey said a scientist once refused to share an ingredient with him, even though a journal’s policy required it. Winey complained to the journal and asked it to insist. The attempt went nowhere. The journal was “initially unresponsive, and then ineffective,” he told me.
Labs gradually develop reputations. Some will send nothing; others will share everything. Winey said he’s sometimes not thrilled to share if it looks like another lab is trying to “scoop” him, but that comes with the territory. Most of the time, he said, he likes to share because often he’s done with a project and it’s gratifying to see someone interested in carrying his ideas forward.
Sharing data can accelerate progress in biomedical research by helping researchers discover errors more quickly. During the sequencing of the human genome, scientists working on the federal effort deposited all their sequence data into a public database as they went along. And in 2001, when it came time to announce a preliminary sequence, Francis Collins, who led the genome sequencing project, and his colleagues published their findings in the journal Nature. The paper has been cited more than 18,000 times, making it a blockbuster. Of course Nature wasn’t about to publish the 3 billion or so letters that represent the human genome; instead, the article singled out a dozen or so takeaway messages that scientists gleaned from their first, excited look at the entire genome. Among the big surprises: scientists had identified 223 genes that had apparently jumped from bacteria straight into the human genome. These genes were distinct from the many that humans share with our closer evolutionary relatives, so the researchers concluded that the bacterial genes were surprising latecomers.
“I didn’t believe it,” said Steven Salzberg, who had a lot of experience analyzing genomes at The Institute for Genomic Research (TIGR), founded by J. Craig Venter, a big-time rival of Collins’s group. “It’s very implausible that bacteria can transfer their DNA to us,” Salzberg said. Microbes would have to infect egg or sperm cells to incorporate genetic material in a way that could be passed down through generations. “No bacteria do that. Retroviruses do that, but not bacteria.” Since the genome data were all readily available, Salzberg and his colleagues had no trouble running their own analysis. He said they started working on it the very day the Nature paper came out. Three months later, Salzberg and his colleagues got Science magazine to publish their paper announcing that the purported “bacterial genes” were nothing of the sort.
Francis Collins remembers the mad scramble to get that genome paper put together. The team leaders had asked one scientist to look for genes that might have come into our genome from bacteria or fungi. “He thought he was seeing evidence of that kind of phenomenon,” Collins told me. “And the rest of us looked at it and thought that’s really intriguing, and while this is an unexpected finding, we couldn’t find anything wrong with it. So it went into the paper.” Because the underlying data were already available to anyone who knew enough to check out the claim, it didn’t take long to disprove that. “But that’s fine. I think that’s the way it ought to be,” Collins said. “You do in fact take some chances. You make it transparent and then if somebody has a more rigorous way of looking at it,” he or she can set the record straight. “At least in this case it didn’t use a lot of resources, and no humans were put at risk—except for embarrassment, I suppose!”
Collins himself had advocated for open genome data back in the days when he was running the Human Genome Project. That was driven in part by his desire to prevent private companies from generating data in secret and patenting thousands of human genes. But it also had a salutary effect on genome research overall. “The guys doing the sequencing aren’t the best guys to analyze the data,” said Salzberg, who is now at Johns Hopkins University. “I have no doubt in the world of genomics the science is going to be better if people can look at the data.”
But sharing data opens up potential career hazards. “We’ve been scooped a couple of times,” he said. For example, he was working to assemble the genome of the loblolly pine. Competitors in Sweden were sequencing the Norway spruce at the same time. Whoever published first could claim the first complete conifer genome. Salzberg was posting his data as he went along, and he said his competitors were clearly watching his progress: as his genome neared completion, the other group rushed a paper into Nature. Salzberg says the European work was much less complete, “but the journals don’t really care. They want the headlines.” When Salzberg was ready to publish, Nature passed on the paper, pointing out that the loblolly pine wasn’t the first conifer genome to be published. So it ended up in a less prestigious journal. “If we had just sat on our data and been secretive about it, they [our competitors] probably wouldn’t have published when they did because they wouldn’t have known where we were. So there’s this incentive to not share.”
This is another example of the perverse incentives in biomedical research. What’s best for moving science forward isn’t necessarily best for a researcher’s career. “The reason we publish our results is so others can build on them. So why wait? It seems intuitively obvious; the sooner people see your results, the faster the field will move.” But Salzberg ran headlong into heavy resistance to this idea about a decade ago, when he and a top scientist at the NIH were working to create a big collection of influenza genomes. At the time only half a dozen flu viruses had been sequenced, but flu researchers knew they would understand the disease a lot better if they could catalog many examples of this rapidly mutating virus. Salzberg says he was stunned to discover that many leading flu researchers wouldn’t send in samples for sequencing, even if they were paid for their trouble and promised coauthorships on any paper published. “Many of the leading flu people said, ‘No thank you. We don’t want to share our samples.’ They wanted to get NIH money to sequence their flu samples, but they wanted to sequence them and sit on them” instead of depositing them in the public database. Salzberg says some still pursue that close-to-the-vest strategy today. Eventually, enough flu researchers agreed to cooperate with the central flu-sequencing effort that it now contains tens of thousands of digital examples for scientists to use.
Researchers studying people in clinical trials often hoard their data too. Sharing isn’t straightforward—scientists have to be careful not to reveal private personal information in the process, and it’s not as easy to strip out every potentially revealing detail as you might think. That becomes a convenient excuse not to even make the effort. “The group that’s doing the study is quite happy with that,” Salzberg told me, “but that doesn’t really help public health, doesn’t help our understanding of cancer or other diseases.” And subjects aren’t in favor of keeping their data locked up in the files of the doctor doing the study. “If you ask the patient is it OK to share your data with every scientist who’s working on your type of cancer, of course they’ll say yes. That’s why they’re doing it. But they [researchers] don’t ask that question! I’d like to see that change.”
So would the man with ALS, Tom Murphy (see Chapter 3). As it happens, the experimental drug he was taking seemed to have a beneficial effect on him, even though it failed overall among the people in the study. And that got him thinking. Sure, the drug isn’t going to be a cure for ALS, but if scientists could figure out why it may have helped him, maybe that would provide some important insight into the disease or new ideas for drugs. But Murphy quickly discovered that the world of science is not set up to work that way. All the medical information about him is scattered. His doctor has some files; he’s given blood samples to other scientists for other reasons; and although researchers around the country are starting to sequence the DNA of patients with ALS, Murphy wasn’t able to get anyone excited about sequencing his genome to see if it holds clues about his unusual response to the drug.
“There’s a lot of good things going on, and I look at all this and say, ‘Oh my God, why aren’t these people working together?’” Murphy told me. When you consider how much time is spent testing potential drugs in animal experiments, if you could shorten that step, “I think they could cut eight years off the process.” That sense of urgency is palpable among people with ALS today, since most have only a few years of life ahead of them, barring a major breakthrough in drug development. Murphy was frustrated beyond belief to find that so many ALS researchers simply won’t share their data. It’s like hiding pieces of a jigsaw puzzle. Murphy had been a military contractor. “I thought defense was tough, with all the competition, but this is a really tough one, and the lack of collaboration and sharing is…” He paused to find the right words. “Man, they’re in the dark ages.”
Transparency is at the core of a major effort to measure just how reliably basic cancer research can be reproduced. Brian Nosek paired up with a Palo Alto company called Science Exchange in an effort to replicate fifty widely cited findings. The Reproducibility Project: Cancer Biology, as it is named, not only turned out to be a lesson in how to conduct science with maximum transparency but also revealed just how challenging—and controversial—it can be to design credible experiments to reproduce the work of other labs. (It also revealed that science costs a lot of money: the team busted its multi-million-dollar budget and had to drop about one-third of the experiments it had planned to reproduce.)
Unlike Glenn Begley, Nosek and his colleagues didn’t simply handpick experiments to reproduce. They used an algorithm to identify studies published between 2010 and 2012 that had garnered a lot of attention, either because they were cited in many papers or because they were frequently downloaded and viewed online. Science Exchange and the Center for Open Science didn’t have the resources to reproduce entire papers, so they selected one or a few key experiments from each publication to run. Here’s where the transparency part of the project kicked in. Once the potential experiments had been selected, Nosek’s group published a “registered report” laying out each proposed experiment before actually hiring an independent lab to run the tests. This preregistration allowed outside scientists to judge the validity of the experimental design in advance. It also answered the biggest criticism lodged against Glenn Begley’s study, which was shrouded in secrecy. “By making the project transparent I think that allows the community to actually look at itself a little bit more than just saying, ‘Hey we’ve got a problem,’” said Tim Errington, who coordinated many of the replication studies at the Center for Open Science. Real numbers could then flow from the results.
Glenn Begley pooh-poohed the cancer Reproducibility Project, complaining that some of the studies being replicated were so poorly designed in the first place that simply achieving the same results would be meaningless. He resigned from an advisory committee at Science Exchange in disgust. In fact, scientists at the Reproducibility Project had fretted about that same issue, as did the reviewers at the journal that agreed to publish all the registered reports and the results of the experiments that followed. eLife editor Randy Schekman, a Nobel Prize–winning scientist at the University of California, Berkeley, and the Howard Hughes Medical Institute, told me the reviewers made sure that the proposed experiments were scientifically valid by expanding the number of animals or adding more control groups as needed.
Errington said designing the experiments was sometimes a challenge because researchers didn’t want to alter the original experiment so much that they would no longer be reproducing it. Sometimes the editors of eLife suggested studying different end points because those would have greater scientific validity; Errington had to explain that changing them would amount to running a new experiment, not a replication.
Scientists whose work was being reproduced had a decidedly mixed reaction. “We had some [instances] where we could get essentially bare-bones information… versus some others who were incredibly engaged,” Errington said. Some of the original authors spent many hours helping explain their work (notably, not a single experiment was described well enough in the original publication to be redone by anybody else). The original scientists reviewed the registered reports before eLife published them. Some spent a lot of time preparing fresh batches of cells or antibodies and “digging through their lab notebooks to give us raw data.” Other scientists refused to cooperate at all.
“I’m dumbfounded by the silliness and the naïveté of this project,” Robert Weinberg at the Massachusetts Institute of Technology told me. “These are words I choose carefully.” Weinberg is a leading cancer researcher known for his science as well as his strong opinions. He was senior author of one of the fifty papers chosen for replication. He viewed the entire effort as a massive waste of time. First and foremost, he said, it often takes months for a new researcher to learn the techniques his lab uses. So Weinberg argued a one-off attempt to reproduce his work would be doomed. When the Reproducibility Project asked for more details about the experiment, he instead offered to host someone in his lab for a month to learn the techniques firsthand. “They weren’t interested.” Weinberg also argued that, while his particular experiment hadn’t been replicated directly, other labs had observed its essential conclusions. That’s always an important step in biomedical research because it shows that a result doesn’t simply apply to a single cancer in a single breed of mice. On the other hand, it’s not necessarily evidence that a finding is solid, as the story of transdifferentiation illustrated so clearly.
Weinberg was also skeptical about the labs that would reproduce the experiments. They would be either private labs, which drug companies often hire to carry out specific experiments, or “core facilities” at universities—for example, a centralized mouse lab that cares for the animals and runs experiments for academic scientists. Though Weinberg may not hold them in high regard, these labs often have to meet higher standards than university labs do because their results can end up in drug applications that get scrutinized by the Food and Drug Administration. That didn’t sway him. “What motivation do these contract research labs have to try to reproduce [my experiment]?” Weinberg wondered. “They are just being paid. And if they fail maybe they want to fail. Maybe they’re not interested in vindicating me or validating the work.” He said if he were to cooperate, he’d have to lay out twenty or thirty different conditions to explain the entire protocol—everything from when the cancer cells were last thawed to the percentage of carbon dioxide in the incubator.
That was actually a point that the cancer Reproducibility Project was trying to make. Those all-important details are rarely shared. Errington says one paper gave essentially no description at all of the methods it employed. And if any tiny detail can derail an experiment, just how robust is the result? Nobody cares about an experiment that works only in a particular strain of mice with a single strain of cancer cells if it has no broader relevance. These experiments are, after all, supposed to reveal something basic about biology and, with luck, human cancer. An experiment that requires conditions so exquisite that only the lab where it originated can repeat it hardly qualifies as reproducible. That said, “not reproduced” does not always mean “wrong.”
Errington said it’s quite understandable that Weinberg wanted to train someone to do the experiment in his lab. “I think it’s a good thing. It happens a lot in research for a good reason. But it doesn’t always have to happen,” and it’s important that the cancer reproducibility project be consistent in how it runs these studies. If they put someone in Weinberg’s lab, they’d have to do the same for every other experiment. “Imagine how much more funding we would need,” Errington said. “And we would be asking a slightly different question.”
In the end, Weinberg’s experiment was one of the studies dropped from the project due to lack of funding. But Schekman, the editor of eLife, told me Weinberg’s protestations were misguided. “If we don’t police ourselves, you can be sure the government is going to police us, since the money comes from the government,” he said. He cringed at the thought of Congress trying to ride herd on biomedical research. “Weinberg can stand on his throne and say this, but I think politically it was expedient” for scientists to take on the reproducibility project. He compared it with the 1975 conference in Asilomar, California, at which scientists voluntarily wrote rules to govern the early days of genetic engineering research.
Well aware of the trepidation that the Reproducibility Project has generated, Errington and Nosek are careful to say that the results will neither validate nor invalidate any of the papers included in their sample. After all, they are trying to reproduce just a few of many experiments in each paper. Those that succeed don’t vindicate the entire paper; those that fail don’t doom it. It is also likely that some experiments will fail just by chance. Here’s how Errington put it to me: each replication is nothing more than a single data point, so focusing on any one outcome would be like drawing a conclusion from a single observation. That would be bad science. Errington says the meaning will come from looking at the results as a whole.
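That caveat about chance is easy to quantify. Here is an illustrative sketch (the 80 percent power figure is my assumption, not a number from the project): even if every one of the fifty original findings were true, ordinary statistical power would sink some replications anyway.

```python
# How many replications would fail by chance alone? Assume every original
# finding is real and each replication has 80% power (a hypothetical figure).
from scipy.stats import binom

n_experiments, power = 50, 0.80
expected_failures = n_experiments * (1 - power)
# Probability that ten or more of the fifty replications fail regardless:
p_ten_or_more = 1 - binom.cdf(9, n_experiments, 1 - power)

print(f"expected chance failures: {expected_failures:.0f} of {n_experiments}")
print(f"P(at least 10 failures):  {p_ten_or_more:.2f}")  # better than a coin flip
```

Ten expected failures out of fifty, with nothing wrong but luck, is exactly why a single failed replication proves so little on its own.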
“We did deflate some expectations because we don’t want anyone to think [our project] will do more than it really can,” Errington told me. In the end, the goal is to hold up a “mirror to ourselves as a community and say, ‘This is what we look like. These are our publishing practices; these are our research practices.’” If published reports are so thin on detail that they can’t be reproduced, that’s worth noting. And when scientists can’t share their ingredients because they’ve already thrown them away (along with the raw data), that’s worth noting as well. Errington hopes researchers will also find patterns that point out the best ways to do cancer research, so it can be more readily tested and reproduced. Ideally, the findings will be a beacon for improving the rigor of biomedical research.
Brian Nosek also hopes that this project will help biomedical scientists think differently about their craft. “Researchers care about openness. They care about reproducibility. Those are part of why they got into the discipline in the first place. They are not there cynically saying, ‘I got into science in order to publish papers.’ They got into science because they are curious. They want to figure things out.”
Right now, the structure of science makes it difficult for scientists to live by the values that often motivated them to go into research. The Reproducibility Project is a first step toward changing that. Nosek has grand ambitions: he wants to change the entire culture of science. The question is how much he can accomplish with a few well-placed nudges, “walking people down that path without even needing to think that they are going down that path… and then making it easy to transition to a more open and more transparent and more reproducible workflow.”
But real change will also require a fresh attitude about openness. In a few biomedical fields, data sharing is now the norm. People who deduce the structure of proteins from the X-ray diffraction patterns of protein crystals, for example, work in a field where data, as well as analytical methods, are archived so that other labs can repeat the analysis. Young scientists brought up in the world of open-source software may be more receptive to these ideas. But openness is not baked into the culture of most biomedical research.
“I’ve had this argument with lots of other scientists. It’s hard to convince them,” Steven Salzberg told me. “I’m basically saying, ‘Yeah, you’re putting yourself at a little bit of risk by doing this.’ And they’re like, ‘But I don’t have to do that. Why should I do that?’ And I say, ‘Why did you go into science in the first place? Didn’t you go into science because you wanted to make the world a better place?’ Yeah, they did, but that was when they were a grad student or an undergraduate. They’ve forgotten that long ago. They’re in the rat race now.” They need to get their next grant, publish their next paper, and receive credit for everything they do. They have stepped into a world where career motivations discourage best scientific practices. As a result, the practice of science has drifted far from its intellectual roots.
Across the Johns Hopkins medical school campus from Salzberg’s office, Arturo Casadevall is trying to address not just transparency but other underlying issues that are hampering biomedical research. “The problems of reproducibility begin in the way we train scientists,” he told me. In fact, he took the job chairing the molecular microbiology and immunology department at the Johns Hopkins Bloomberg School of Public Health because he thinks this is where he can start to reform scientific education. He wants young scientists to think more sensibly about statistics, study design, and other fundamentals of biomedical research. “There is relatively little thought in biology,” he said matter-of-factly. “When people run into a problem what they do is throw more experiments at it. There is not a lot of quiet reflection about what is going on.” Casadevall said he wants to put philosophy back into the PhD. Scientists need to be taught to think, he says. “We clearly can teach scientific thinking. But we don’t do it in a formalized way.”
The first step should be to teach scientists how to design experiments properly. That’s usually missing from the curriculum. Graduate schools “mostly teach facts the first year,” said Jon Lorsch, director of the National Institute of General Medical Sciences at the NIH. “They should teach methods.” A few years ago, the NIH put out a request to the nation’s graduate schools, asking for a list of the classes that teach biomedical methods. The idea was to take the best of these classes and make the curriculum more broadly available. Lorsch said the survey was a bust. Universities apparently don’t offer a deep curriculum in research methodology for biomedical students.
Casadevall is planning to find ways to change those educational standards. “You walk down the hall here, and you stop graduate students, and you say how many times do you do an experiment, and they’ll say, ‘Three times.’ You’ll say, ‘Why three times?’” and they’ll say that’s how other people in their lab do it. But Casadevall says that’s all wrong. There’s a rigorous way to calculate how many times you need to do an experiment to get a meaningful result: a statistical power calculation (sketched below). Scientists can and should use it. It takes a bit of legwork, but the results are more likely to be solid. “But that is not the way most scientists are trained. Most scientists are not trained today on the basics of epistemology or logic.… We need to go back to work on the basics.” And he’s not talking about narrow mechanics—in fact, Casadevall argues that scientists need to spend more time thinking broadly about science and less about the specifics of their discipline. Narrow focus creates intellectual ruts, which, among other things, make scientists cling more stubbornly to ideas that could be wrong.
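The sketch promised above is a standard power calculation, here using the common normal-approximation formula for comparing two groups (the effect sizes and error rates are illustrative assumptions, not Casadevall’s numbers):

```python
# Sample-size (power) calculation, normal approximation:
#   n per group = 2 * (z_{alpha/2} + z_{beta})^2 / d^2
# where d is the expected effect size in standard-deviation units.
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Samples needed per group for a two-sided, two-sample comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)           # 0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Even a large effect (d = 1 standard deviation) calls for about 16 samples
# per group; the habitual "three times" is justified only for an enormous
# effect, roughly d > 2.3.
for d in (0.5, 1.0, 2.0):
    print(f"effect size d = {d}: n = {n_per_group(d)} per group")
```

The answer, in other words, is rarely three, and it is never simply whatever the rest of the lab happens to do.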
“One of the things that is maddening to me is your typical scientist is born in an area and dies in the area.” Once people develop an expertise, “it’s very hard to get out of a field.” It becomes their social unit. Colleagues in the field are “the ones who determine your funding. They are your friends. They are at the conferences you go to,” he said. “But we should be encouraging people to move and not punish them. We punish them tremendously.” If you leave a field for a new one, “they say you are not serious,” Casadevall said. And the new field will consider you fickle as well.
That’s unfortunate: newcomers can overturn ideas that have hardened into dogma. “When a newcomer comes in the first thing they usually do is they disturb the dogma,” he said. They may have trouble getting their ideas published and are otherwise harassed, “but those people are incredibly important. Because they come in and they unsettle the table. It’s the only way to move forward.” Academia used to encourage such movement by granting scientists sabbaticals once every seven years. That’s still an option, on paper at least, but for many in biomedical research, “that doesn’t work anymore because everyone’s writing grants and everyone’s too stressed out.” It’s just too risky to leave your lab for an academic year, given the struggle to fund labs these days.
Casadevall himself has managed a sweeping career in science, publishing on topics from immunology to genetically modified flu viruses. He’s also studied systemic problems within biology, including those that underpin the issues with reproducibility. And he’s even attracted to the job of managing a department, a commitment many scientists avoid because that means less time at the lab bench. Casadevall cuts against that grain. In his view, the culture of biomedical research is badly damaged. That big problem is more important to him than the many smaller problems that individual researchers take on, hoping to make the world a better place. “If I can figure out some way of making the enterprise work 1 percent better, that would be far more important than anything I can contribute in the lab.” That thinking is also starting to motivate other scientists, who recognize that the problems extend far beyond scientific training. They are starting to think about ways to fix a much deeper problem. Biomedicine’s entire culture is in need of serious repair.