BRUCE HOOD
Director, Bristol Cognitive Development Centre, University of Bristol; author, The Self Illusion: How the Social Brain Creates Identity
As someone fairly reconciled to the eventual death of our solar system and, ultimately, to the heat death of the universe, I think the question of what we should worry about is irrelevant in the end. In any case, natural selection eventually corrects for perturbations that threaten the stability of environments. Nature will find a way, and ultimately all things will cease to be. So I could be glib and simply say, “Don’t worry, be happy.” Of course we are not wired that way, and being happy requires not worrying. My concern, then, rather than a worry, is how we go about science—in particular, the obsession with impact.
Up until the last century, science was largely the prerogative of the independently wealthy, who had the resources and time to pursue their passions for discovery. Later, large commercial companies would invest in research and development to gain the edge over competitors by innovating. The introduction of government funding for science in the early part of the 20th century was spurred by wars, economic depression, and disease. This not only broadened the scope of research by enabling much larger projects that were not motivated simply by profit but also created a new professional: the government-funded scientist.
In the U.K., the end of the 20th century was the golden period for funding. Since then, there has been significant shrinkage, in the West at least, of support for research in science. Today it is much harder to attract funding for research, as governments grapple with the world recession—unless, of course, that research generates economic wealth.
It used to be the case that in a research-grant application the results of any output were expected to be disseminated in publications or presentations at conferences—expectations that could be covered in a sentence or two. In the U.K. today (and I imagine this is also true in the U.S.), a significant part of the application must address something called “pathways to impact.” What does that mean, exactly?
According to the U.K. research councils’ own guidelines, it has to be a
demonstrable contribution that excellent research makes to society and the economy. Impact embraces all the extremely diverse ways in which research-related knowledge and skills benefit individuals, organisations and nations by: fostering global economic performance, and specifically the economic competitiveness of the U.K.; increasing the effectiveness of public services and policy; enhancing quality of life, health and creative output.
This is not simply a box-ticking exercise. As part of the nationwide assessment of U.K. research known as the Research Excellence Framework (REF), impact features prominently in the equation. “What’s the problem?” you might ask. Taxpayers’ money funds research, and taxpayers deserve a return on their investment.
The first major problem is that it shifts the agenda away from scientific discovery to the application of science. I have witnessed in my own department in the past ten years that those who work on theoretical science are not as successful at procuring funding as those who work on application. Moreover, that application is primarily motivated by economic goals. Universities are being encouraged to form partnerships with industry to make up for the reduction in government funding. This is problematic for two reasons: The practices and agendas of industry conflict with those of the independent researcher; moreover, many important innovations were not conceived as applications and would probably not have emerged in an environment that emphasized commercial value. I would submit that focusing on impact is a case of putting the cart before the horse—or at least of not recognizing the value of theoretical work. We would be wise to remember Francis Bacon’s advice that serendipity is a natural consequence of the pursuit of science.
Many of us work in areas that are difficult to fit into the impact framework. My own research is theoretical. When I’m asked to provide a pathway-to-impact statement, I rely on my experience of, and enjoyment in, delivering public lectures, because frankly the things that interest me do not obviously translate into impact that will foster economic performance. However, public engagement can be problematic, especially when one is addressing issues of concern. Most members of the general public—and, more important, the media that inform them—are not familiar with either the scientific method or statistics. This is one reason why the public is so suspicious of scientists, or finds them frustrating because they never seem to give a straight answer on such pertinent issues as vaccination or health risks. Most nonscientists do not understand explanations couched in terms of probability or complex, multifactorial interactions. Weekly headlines like “X CAUSES CANCER” or “THE DISCOVERY OF GENES FOR X” reflect this need to simplify scientific findings.
Finally, most academics themselves have succumbed to the allure of impact. Every science journal has an impact factor—a measure of how often its articles are cited. It is a reasonable metric, but it creates a bias in the scientific process by prioritizing those studies that are the most extraordinary. As we have witnessed in the past few years, this has led to the downfall of several high-profile scientists, who lost their jobs because they fabricated studies that ended up in high-impact journals. Why did they do this? Simply because you need impact in order to succeed. My concern is that the pursuit of impact is incompatible with good science, because it distorts the process by seeking the immediate payoff and throwing caution to the wind.
Maybe my concern is unwarranted. Science is self-correcting, and when the world comes out of recession we should see a return to the balance between theory and application. But then perhaps I should have been more alarmist; that way I’d probably have made more impact.