Research

What I have noted about education is equally present in the domain of research. The “human capital” approach pioneered by Gary Becker is present here much as it is with respect to students. That is, faculty are presumed to be—and for purposes of evaluation are treated as—autonomous individuals interested in investing in their human capital, thereby enhancing their salaries. The notions of communities of scholars and invisible colleges are jettisoned. Furthermore, to promote the human capital approach, researchers are simultaneously made subject to various forms of uncertainty and insecurity even as they are expected to be loyal to institutional goals—what Gori and Del Volgo (2009), following La Boétie, call “voluntary servitude.” Of course, the details play out differently in different situations. Consider some of the changes in the research process over the last several decades.

 

Counting publications. It has become fashionable to count publications, especially articles in scholarly journals, in making decisions about promotion, tenure, and merit increases. Publication counts have several distinct advantages for administrators who are required (usually by some higher authority such as the government agencies that provide funding) to audit the performance of researchers: Journal articles can be relatively rapidly completed, peer reviewed, and published. Hence, they fit relatively well into the annual reviews of scholars and departments. They also require relatively little work on the part of administrators, especially as compared with reading and evaluating the actual publications. As a result, establishing expected numbers of publications—especially in peer-reviewed journals—has become the norm in reviews of faculty. Of course, this poses several problems as scholars attempt, with more or less success, to game the system.

Quite obviously, it is possible in many fields to divide one’s research into numerous small pieces and publish each as a separate article. For example, scientists who study soil percolation rates use tanks with glass beads to simulate soil profiles. They can churn out numerous papers, each reporting a somewhat different experiment. Similarly, a social scientist who engages in survey research can write numerous papers where each examines a slightly different model of relations among the variables in the survey. Even in fields such as medieval history, where books have long been the major avenue to publication, scholars feel pressure to divide their work into small pieces and to publish it in scholarly journals (Kehm and Leišytė 2010).

In contrast, some types of research require years of work before any publishable result can emerge. For example, developing new plant varieties can easily take a decade of research. Often there is little to report before the new variety is developed. Similarly, certain forms of ethnographic research, paleontology, natural history, and ecology require years of work before publishable results emerge. Clearly, counting journal articles implicitly devalues this sort of research. Moreover, it sends a signal to persons considering pursuing such topics that they might be better off shifting to a field in which publications are more rapidly produced.

Second, the number of scientific journals has risen dramatically; hence, there is likely a home for any paper if one searches long enough (Colquhoun 2011). Quite obviously, nations that have expanded their scholarly communities greatly over the last half century—India, Brazil, and China, among others—have also produced their own scientific journals. In addition, the advent of web-based scientific journals has opened numerous opportunities for publication. One site lists 8,750 open access journals from 121 nations (DOAJ 2013). Thousands more require subscriptions from readers and/or fees from authors. Although some are legitimate journals that engage in serious peer review before publication, others are vanity presses, publishing virtually everything submitted for a substantial fee. Given the pressure to publish, it is likely that many will survive, earning a tidy profit for their owners while contributing little to any form of scholarship. Proponents of counting would do well to heed the advice often attributed to Einstein: “Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.” In short, although it is easy to count publications, it is unclear just what it is that one is counting. In an attempt to resolve this problem, citation counting has emerged as an alternative approach.

 

Counting citations. In general, authors have no direct control over citations to their work. As a result, administrators tend to see citations as a better means of evaluation than merely counting articles. Citation counting has become commonplace in the United States, Western Europe, and Japan. In addition, “[s]cholars from China to Italy, from Belgium to South Africa, from Australia to the Nordic countries, … rely on such data [from the Web of Science] to discuss the success, impact, and visibility of research in specific contexts” (Paasi 2005, 772; see also Gori and Del Volgo 2009).

It is useful to review just how citation counting emerged and what its intrinsic limits are. The most widely used citation database, the Web of Science, is illustrative. It was initially developed for librarians at large US universities, as a convenient guide to purchasing journal subscriptions. Through citations, librarians could gauge how frequently a journal in a given discipline was used and purchase subscriptions to those journals most widely cited. In addition, because the initial audience was US librarians and indirectly US scientists, the database had and continues to have a strong English-language bias. Indeed, English-language abstracts are required for a journal to be accepted into the database.

Currently, the Web of Science contains information about the contents of and citations to about 10,000 journals out of the approximately 100,000 journals published worldwide. As Archambault and Larivière (2009, 638) explain, this “… likely had the effect of creating a selffulfilling [sic] prophecy. Indeed, by concentrating on the US situation and by positively biasing the sources in favour of US journals, the method placed these journals on centre stage. Had a broader linguistic and national coverage been considered, it might have revealed that these journals were not in fact more cited than others.” Hence, not surprisingly, the rankings of the top 20 institutions among highly cited scientists include 18 US institutions and two from the United Kingdom (Paasi 2005).

As a result of these rather grievous faults, such as indexing a skewed sample of about 10% of all journals and not indexing much of the non-journal literature at all, the Web of Science is a poor database for measuring citations of individual institutions or scholars. But the problems only begin there. Other issues include the following.

All other things being equal, scholars working in large fields will have a greater chance of being cited than those working in small fields. In other words, even if I do path-breaking work in an area with 25 other colleagues, the maximum number of citations I can have will be limited by the number of people working in that area. In contrast, if there are 5,000 people working in that area, the maximum number of citations I could potentially have would be much higher. Given that new and interdisciplinary areas are likely to be small, this implies a clear bias against innovation in research.

In addition, authors may be cited for reasons other than the validity or importance of their findings (Kumar 2010). For example, one may be cited for erroneous or even fraudulent findings. A paper by Korean researchers Hwang Woo Suk et al. (2004) has been cited 388 times,1 largely for its fraudulent claims of discoveries in stem cell research.

Moreover, those who cite articles rarely do so solely because of the quality of the paper; instead, they focus on how the article fits rhetorically within their own papers. Although neophytes may believe that scholarly (and especially scientific) publication merely involves engaging in careful research and publishing the results, the actual process is far more complex. As Karin Knorr Cetina (1981, 42) noted some years ago, “Scientific papers are not designed to promote an understanding of alternatives, but to foster the impression that what has been done is all that could be done.” Hence, one is always constrained to write in a manner that shows that one’s work helps to explain or clarify a problem defined by others in the field. Therefore, one must necessarily situate one’s work within a body of literature, showing that it addresses an important issue in that field. Papers that may once have been seen as brilliant breakthroughs are not cited if their results are now considered to be common knowledge. Other papers are cited because they bolster the argument being made by the author. Still others are cited because their findings contradict, expand on, or differ in subtle ways from those of the citing paper, or because they used instruments similar to (or different from) those of the citing paper. Yet others are cited because the cited author is widely reputed in the field, though it is at least arguable whether that reputation is the result of being widely cited or whether the author is widely cited because of her reputation. The quality of all these cited papers is certainly a consideration, but it is only one of the many reasons that one cites another paper. Thus, some high-quality papers may be rarely cited, whereas some low-quality papers may be cited frequently (Borgman and Furner 2002).

In addition, different counts of citations give different results, depending on which journals are included in a given database, what measures are used to compute citations, and the differing forms and frequency of citation across fields. Importantly, there is no global registry of all scholarly publications; each database is a purposive sample of publications based on what the compiling agency defines as important.

Furthermore, to compare scholars within fields, one must first determine to which field an author belongs. As research increasingly takes place at the interface between fields and disciplines, deciding where to place an author becomes more difficult.

What’s more, many journal articles are co-authored; giving credit to each author is a complex process that is not uniform across fields of science (van Noorden 2010). In some fields, the first author is the lead, whereas in others the last author has that role. In still other fields—certain branches of physics are exemplary—there may be several hundred authors listed on a single paper.

Also, the number of citations in a typical article varies considerably from one field to another, and so, consequently, do the rates at which articles in different fields are cited. To compare across fields, one therefore needs to “normalize” citation rates. However, this is rarely done.
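
For readers unfamiliar with such normalization, a minimal sketch of one common scientometric approach (the field-normalized citation score; other indicators exist) divides a paper’s citation count by the average for comparable papers:

\[
s_i \;=\; \frac{c_i}{\bar{c}_{f(i)}},
\]

where \(c_i\) is the number of citations received by paper \(i\) and \(\bar{c}_{f(i)}\) is the mean number of citations received by papers of the same field, publication year, and document type. A score above 1 indicates above-average impact for that field; comparing raw counts across fields simply ignores the denominator.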

Finally, in some fields, journal articles are the primary means of scholarly communication. Most of the natural sciences and engineering fit into this category. However, in other fields, especially the humanities, books are the main means of communication. In the social sciences, one tends to find a mix of both journals and books. However, most of the citation databases focus on journals, thereby implicitly devaluing books and those fields of study that rely on book publication.

Most scientometrics experts decry the naïve use of citation data to evaluate individual scholars (e.g., Braun 2010). One of them sums up the issue of using citations to rank scientists quite well: “There is a better way to evaluate the importance of a paper or the research output of an individual scholar: read it” (Bergstrom 2010, 870).

 

Checking prestige of journals. In addition to counting articles and citations, some administrators have begun to use the rankings of the various journals in a given field as a means of evaluating researchers. This is often accomplished by using the Journal Impact Factor (JIF) reported in the Web of Science’s Journal Citation Reports. Similar approaches are used to evaluate book publishers. Of course, as noted above, this measure was not developed for evaluation of individual faculty members (Nature 2010) or even individual journals. Despite that, use of this approach means that an article appearing in Science would receive a higher ranking than one published in a less prestigious journal. Similarly, publishing an article in a top journal in a given field would be weighted more heavily than an article in a regional journal.

Furthermore, some nations now offer bonuses to scholars who publish in these journals. Pakistani researchers can receive a bonus between $1,000 and $20,000 based on the JIF of the journal in which they publish. Several Chinese research institutes offer bonuses along similar lines (Archambault and Larivière 2009). That, combined with the global pressure to publish in the most prestigious journals, has increased the submission rates for those journals. From 2000 to 2009, Science reported a 22% increase in submissions, although the number of papers published did not increase (Franzoni, Scellato, and Stephan 2011). The end result is that both the most prestigious journals and the scholars they use as reviewers now have a much heavier burden of review than they once did. This takes time and resources away from other more productive activities.

In addition, for all practical purposes, the various incentives to scholars to publish in prestigious “international” journals put a premium on English-language journals. One need only look at the lists produced by AERES (2013). Quite obviously, this creates a considerable advantage for scholars who write English well over those who do not, independently of the quality of their work. But the focus on English and on the particular writing style of English-language scholarly journals has an even more insidious effect, especially on scholars in the social sciences and humanities. Because much research in these fields addresses local or regional issues for which the audience is not English speaking, it is almost by definition excluded from publication in “international” journals. Thus, pressure to publish in international journals is tantamount to ignoring issues of local or regional importance. For example, Hanafi (2011, 300) notes, “[t]he 2008 annual report of the Faculty of Arts and Sciences at AUB [American University of Beirut] demonstrates clearly how few social science publications are published in Arabic (only three of 245 articles and two out of 27 books).”

Here, too, other problems arise. In particular, highly prestigious journals tend to have low acceptance rates. Hence, reviewers and editors are likely to be far more orthodox in deciding what to publish. Some years ago, George Akerlof (1970) attempted to publish a paper in the American Economic Review that defied contemporary economic thought. Akerlof finally published it in the less prestigious Quarterly Journal of Economics (Cassidy 2009). He was later awarded the Nobel Prize in economics largely on the basis of that article. A similar case might be made about Barbara McClintock’s brilliant work on “jumping genes.” In short, insisting that researchers publish in the most prestigious journals focuses research on the tried and true, on conventional methods and established theories, on puzzle solving rather than asking new questions.

Moreover, with few exceptions, this approach encourages publications in disciplinary journals rather than those that are interdisciplinary. After all, interdisciplinary journals are by definition at the edges of several fields. Hence, they are often unread by those at the “core” of a discipline, and they are certainly regarded by those at the core as less prestigious.

In addition, the JIF is computed using citations only to articles published in the two previous years. Hence, the JIF of a journal for 2013 is calculated by dividing the number of citations made in 2013 to articles the journal published in 2011 and 2012 by the number of citable items it published in those two years. One implication of this is that journals in fields in which citations lag more than two years will have lower impact factors. Because the fields in which papers take more time to be recognized and are less likely to become obsolete tend to be in the social sciences and humanities, journals in these domains of scholarship have lower JIFs.
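
Written out (this is the standard formulation, with 2013 as the census year), the calculation described above is simply a two-year ratio:

\[
\mathrm{JIF}_{2013} \;=\; \frac{\text{citations received in 2013 to items published in 2011 and 2012}}{\text{number of citable items published in 2011 and 2012}}.
\]

A journal whose articles are typically cited five or ten years after publication therefore gains little within this window, however influential those articles eventually prove to be.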

Finally, there is enormous variation in the importance attributed to articles even within the top journals. A few will be widely cited, used in the revision of research practices, or seen as the model for future research in a given field. However, most will receive few citations and may even be ignored. Hence, publishing an article in a highly prestigious journal says little about the quality of that particular article.

 

Downgrading of books and book chapters. The flip side of the obsession with articles in scholarly journals is the declining incentives to publish either book chapters or books. Without question, many book chapters are invited and not peer reviewed. Similarly, there are many commercial book publishers who will publish with minimal review as long as they see a market for the book in question. But books and book chapters often serve scholarly purposes that are poorly reflected in journal articles. Books and book chapters may bring together in one place the current knowledge about a particular field, serving as a manual for working scholars as well as a guide for neophytes. Books also often allow authors to explore ideas that cannot be examined fully in the limited space of a journal article. Downgrading books and book chapters will have the likely effect of making such reviews and integrative summaries harder to obtain, thereby slowing progress in a given field.

 

Competing for grants. It will come as no surprise to anyone reading this that grant competitions have become commonplace in research around the world. In contrast, grants to institutions have diminished in size and frequency, including those established more than a century ago and often known as block grants or formula funds in the world of agricultural research.2 Proponents of competitive grants, in line with neoliberal claims about markets, argue that competitions ensure that the most promising researchers are funded, irrespective of their institutional affiliations or geographical location. However, most competitive grants—at least as presently constituted—have several major problems associated with them.

Cost. Competitions require the development of complex bureaucracies for peer review. These include both the State bureaucracies that administer the grant programs and the auditing procedures that funding agencies require of the institutions receiving the grants. At many institutions, as much as one-third to one-half of each grant award goes to so-called indirect costs (i.e., the costs of administering the grant as well as of services that are not directly included in the grant itself, such as heat, water, electricity, and library services). Furthermore, the relatively low success rates (in the US context about 20% of submissions, often lower in Britain) mean that much of the time that researchers, graduate students, and staff devote to developing applications is wasted. In addition, one survey suggests that US faculty who receive grants spend as much as 42% of their time administering the grant rather than doing research (Decker et al. 2007; Rockwell 2009). Another study shows a doubling of administrative time over a 30-year period (Barham, Foltz, and Prager 2014). In short, competitions are costly to administer and have what economists would describe as high transaction costs.
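
A back-of-the-envelope illustration (the roughly 20% success rate is the US figure cited above; the rest is simple arithmetic, not data from the studies mentioned): with a success rate \(p\), the expected number of full proposals written for each funded project is

\[
\frac{1}{p} \;\approx\; \frac{1}{0.2} \;=\; 5,
\]

so roughly four of every five applications are prepared, reviewed, and processed with no research funding to show for it, and the funded fifth is further diminished by indirect costs and by the investigator time devoted to administering the award.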

The Matthew effect. Initially noted by sociologist Robert Merton (1968), the Matthew effect refers to the lines in the Gospel of Matthew (25:29): “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken even that which he hath.” In short, those who are already recognized are far more likely to be recognized again. This is particularly true for competitive grants, where those who are already recipients of grants are more likely than others to receive additional grants. Despite attempts by many funding agencies to reverse this process, it continues to a substantial degree, posing significant problems for young researchers who lack a record of support.

Duration. Although some research projects can take decades to complete, competitive grants are rarely for periods longer than three years. As a consequence, the shift from block grants to competitive grants makes it difficult or impossible to fund such long-term research. This is especially true of long-term research in fields such as ecology and plant breeding, which cannot be easily divided into small morsels for purposes of grant seeking.

Geographical situatedness. Especially in the natural and social sciences, research is geographically situated. For example, unlike most engineering and physical science research, agricultural, forestry, soils, and fisheries research is often regionally specific. Hence, much research of this sort relevant to Norway is not particularly useful to Italy. Occasionally, similar issues appear in the social sciences. For example, qualitative research on social movements must be done at a particular site. Competitive granting mechanisms often fail to take this feature of research into account.

Grant writing. Competitive grants tend to reward those who are good at grant writing. In many fields, this task is often taken on by senior scholars, whereas junior scholars actually execute the research (Hyde, Clarke, and Drennan 2013). Arguably, the skills required for grant writing differ from those required to carry out research or to write articles for publication. Much as with test taking among students, grant competitions to some degree reward skill at proposal writing itself, independent of the value of the proposed research. Of particular note is that grant applications require the writer to demonstrate the importance of the project to one or another of the funding agency’s goals and the superiority of the proposed method(s) over other approaches. At the same time, the applicant must avoid opening up questions that cannot be addressed by the proposed research. Scholars who are good at these skills may be far weaker at executing the grant once it is received.

Block grants versus competitive grants. Finally, there is some limited evidence suggesting that block grants are more effective than competitive grants. For example, one study argues that this is the case in the field of agricultural research, at least with respect to its impact on agricultural productivity (Huffman and Evenson 2006). Given the high costs—spread across granting agencies, recipient institutions, and the careers of individual scholars—of running competitive grants programs, and the negative consequences noted above, the presumed superiority of competitive grants would appear to require, at the least, more careful analysis.

 

Greater incidence of fraud. Research fraud may involve a variety of illicit activities, including plagiarism,3 publishing what are essentially the same results in several places, conveniently deleting data that contradict the desired results, or fabricating data to support a given hypothesis. In perhaps the worst instances, it involves deliberate destruction of data collected by others so as to discredit their research (see e.g., Maher 2010). All are promoted by the need to excel in various performance audits.

Evidence suggests that fraudulent papers are now far more common than they were just a few decades ago. Although one could argue that this rise in the rate of fraud is the result of better means of detection, more likely it is a consequence of the rising pressure on scholars to publish. One recent study of 2,047 retracted papers in the biomedical and life sciences found that misconduct was the reason for retraction in 67% of the cases, whereas only 21% were retracted due to error (Fang, Steen, and Casadevall 2012). In addition, the authors noted a tenfold rise in retractions due to fraud since 1975. All of this undermines the fundamental trust implicit in peer review processes and the fabric of scholarly research. Moreover, somewhat ironically, the usual “solution” proposed for fraudulent behavior involves not modifying the current market-like reward system but enhancing training in the responsible conduct of research and expanding audit mechanisms (Steneck 2013). Neither approach addresses the key issue.

 

Ghost and honorary authorship. Ghost authorship occurs when someone who wrote or substantially contributed to an article does not appear in its list of authors. Honorary authorship involves asking someone well known in a given field to have his name listed as an author, although that person had little or no connection with the research in question. In both instances, the intent is to promote a given product or process through publication in an apparently scholarly and disinterested venue. There are now far too many documented cases of both ghost and honorary authorship of scholarly articles. This is particularly the case when the article in question is seen as having economic value to a given company.

Although there are only a few careful studies of ghost and honorary authorship, what data exist are disturbing. In one recent study of six respected general medical journals, 21% of the published papers had either honorary or ghost authors or both. Moreover, this number was lower than that reported in 1996, before the problem was well identified (Wislar et al. 2011). Whether these estimates apply to other fields is simply not known, but it is certainly likely to be an issue in other disciplines, especially those where companies value particular research outcomes and the imprimatur of key figures in the field.

 

Forced citations by journal editors. Because journals as well as researchers are now often ranked by citation counts and many research organizations reward researchers who publish in highly ranked journals, some journal editors have begun to promote their journals by putting pressure on authors to cite papers previously published in their journals.4 This means that papers only marginally related to a given published paper are now to be found in the list of references. In addition, because review articles tend to be cited more frequently than others, editors anxious to raise their journal’s place in the rankings can increase the journal space devoted to reviews. Moreover, some journal editors have promoted their journals by publishing editorials in which they cite most of the articles published in that same journal. Of course, it is difficult to determine how widespread this problem is given that one needs “insider” information about a particular journal’s policies to accurately analyze whether journal self-citation is the result of authors’ citation patterns or editorial pressure.

Moreover, regardless of how widespread forced citations are, they have little effect on the rankings among journals, in large part because of the importance accorded to the “top” journals’ reputations. This forms a vicious circle. Given the budgetary limits on library subscriptions and the rising cost of journals, librarians try to avoid subscribing to journals that are read by few scholars. As noted above, they do this through the use of indexes such as the Web of Science. In addition, because scholars rarely read certain journals, they also rarely if ever cite them. If only a few scholars cite them, then indexes do not list them. In fact, even if an excellent article is published in a journal that is not indexed, it is unlikely to be read or cited. Furthermore, most editors and reviewers do not wish to spend their time editing or reviewing for a journal that rarely attracts high-quality papers (Edlin and Rubinfeld 2004).

This has important consequences for heterodox views in various disciplines. For example, Jakob Kapeller (2010) has shown how authors who publish in orthodox economics journals rarely cite others outside the discipline. In contrast, authors who publish in heterodox economics journals frequently cite those outside their field. Of course, the orthodox economists are far greater in number than the heterodox ones, and the orthodox journals are more frequently cited overall. Therefore, it is extremely difficult for a journal that publishes heterodox views to attract sufficient citations in orthodox journals to be indexed, let alone to rise in the journal rankings. What Kapeller found for economics is likely true in other disciplines as well.

 

Rising costs of journals as a few publishers corner the market. In part as a result of the pressure to publish as well as the ranking of journals, today a handful of companies publish most of the major academic journals listed in the Web of Science. One recent study of concentration in journal publishing examined all the journals indexed in Journal Citation Reports from 1997 to 2009. The study found that 0.2% of the publishers produced 50% of the journals and articles. They also noted a trend toward increasing concentration. Whereas 22 publishers produced 50% of the journals in 1997, by 2009, that number had declined to 7 (Didegah and Gazni 2011). At the same time, library subscription prices have risen dramatically (Edlin and Rubinfeld 2004, 120).

The publishers make their money by (1) requiring that authors surrender their rights to the publication in question, and (2) asking reviewers to comment on those papers at no cost to publishers. This made sense when most journals were published by professional societies or academic presses; today that is no longer the case. Instead, publishers sell the results of their endeavors to libraries and the general public at what are often exorbitant fees. Indeed, in 2010, Elsevier—one of the largest scholarly publishers with more than 16% of the market—reported profits of 36% on revenues of $3.2 billion (Gusterson 2012).

Conflicts of interest in research. Not too long ago, most public universities and research institutes were careful about what private funds to accept and under what conditions. For example, in 1931, a director of an American agricultural experiment station (then the organizations receiving the lion’s share of public funding) noted, “In accepting monies from industry, the station director must always keep in mind that such monies should have for their first purpose the upbuilding of agriculture used in the broadest sense. We should not use our station facilities for the promotion of merely private gain” (Russell 1931, 226).

However, in recent years, university scientists in many nations have been encouraged to collaborate with scientists in the private sector, even going so far as to launch joint ventures. In addition, while earlier collaborations involved just a few scientists, more recent collaborations have involved entire departments or centers. The Novartis–University of California Berkeley agreement was one such institutional collaboration. Although neither the worst fears of detractors (selling the university to the private sector) nor the high expectations of supporters (great scientific advances) were realized, the overall impact was to undermine the reputation of the department and the university in many quarters (Rudy et al. 2007).

Furthermore, many universities and some research institutes have established research parks on the edges of campus so as to encourage greater interaction between public and private researchers. Although in principle there is nothing problematic about such research parks, the devil is always in the details. In some cases, they simply permit firms to be near departments where researchers have similar interests. In other cases, they promote overly close relations, thereby undermining the independence of public researchers and erecting barriers for other firms. In the worst instances, such close relations between public institutions and private firms lead to the suppression of critical voices, as was the case at one Canadian university (see page 103).

In addition, some disciplines have become so closely associated with particular industries that it is hard to find scholars who are not receiving some sort of industry support. Food science and nutrition departments are emblematic of this problem. As Marion Nestle (quoted in Warren and Nestle 2010) has noted, “soft-drink companies such as Coca-Cola and PepsiCo lose no opportunity to sponsor professional meetings; provide training positions; send free samples and technical materials; and support professional newsletters, teaching materials, and journals.” Moreover, she notes that most independent studies clearly link childhood obesity with soft drink consumption, whereas those sponsored by industry dispute this claim.

Similarly, pharmaceutical companies fund only research on drugs of interest to them. They also frequently provide travel money to researchers whose findings support use of the drug being promoted. In addition, they may pay selected researchers who agree to serve as honorary authors (Mirowski 2011). When such industry-sponsored findings are relied on by the agencies that approve drugs for general use, potentially dangerous drugs may be approved. Even projects approved by Institutional Review Boards usually fail to reveal these issues, because such boards are designed to protect the subjects of research, not to ensure that no conflicts of interest exist.

Conflicts of interest may also exist when supposedly disinterested scholars serve on standards-setting committees, a common practice for many standards development organizations. Such organizations set standards for a wide variety of physical objects, industrial processes, and “best practices.” If the scholars who serve have industry ties, at least the appearance of a conflict of interest arises.

Such conflicts of interest may also exist when researchers consult for so-called “expert network firms.” These companies seek out the top academic researchers in a given field and link them with clients willing to pay for their expertise. The researchers then receive phone calls from those clients asking about ongoing research projects and programs. In some instances, such researchers have stepped over the line and become involved in insider-trading schemes, as either active participants or informants giving away corporate secrets. So risky is this kind of consulting that the prestigious Cleveland Clinic has forbidden its staff to enroll in expert networks (Mervis 2013).

What all this means is that a large and growing portion of scholars in public institutions are no longer disinterested. Instead, to advance personal or institutional market-driven goals, they are linked to those who advocate particular positions on issues of concern to both the scholarly community and the general public. Therefore, conflicts of interest have become a major problem in the reviewing and publication of journal articles, in debates about public policy, and in the submission and reviewing of research grants. For example, one study of some 94 articles reporting on the health risks and nutritional value of genetically modified crops found that financial or professional conflicts of interest were commonplace when negative findings (i.e., findings of no adverse effects) were reported (Diels et al. 2011).

 

Changes in intellectual property rights. Markets can only exist when objects or processes are considered saleable property. One way to create markets where they did not exist before is to expand intellectual property rights (IPRs). Clearly, in capitalist societies, certain activities become so unprofitable without IPRs as to be abandoned. Hence, some sort of copyright and patent protection is necessary if there is to be any monetary incentive to engage in or reproduce creative work at all. Furthermore, openly releasing certain products and processes leads to the orphan drug problem. Specifically, if a product or process for which there is little demand is released to the general public, there may be insufficient monetary incentive for any particular company to invest in its development and sale. This is particularly the case for products such as specialized drugs for rare diseases. Without a patent, no pharmaceutical company will produce the drug for fear that some other company will do the same and undercut it on price. Similar issues appear for improved varieties of minor crops.

However, over the last several decades, IPRs have been greatly expanded. There are now IPRs for organisms, seeds, research tools and instruments, computer software, and a variety of other domains. The expansion of IPRs has had a number of consequences for university research.

Open versus proprietary knowledge. The entire academic enterprise is built on the notion that knowledge should be freely available. As economists rightly note, most knowledge is nonrival and nonexcludable: anyone can use it without depleting the stock available to others, and once released it is difficult to keep anyone from using it. We academics speak of “contributions to the literature,” where a contribution is a gift to the discipline or research field as well as to the larger society.

In contrast, the private sector values proprietary knowledge, for which erecting barriers to use of that knowledge (e.g., trade secrets, patents, copyrights, steep learning curves) enhances profitability. The expansion of IPRs shifts the boundaries between scholarly research and private gain. In consequence, in the biological sciences especially, “materials transfer agreements” have become commonplace. Such agreements both slow down the spread of knowledge and block researchers from engaging in certain kinds of experiments. In the most egregious examples, university researchers find their research projects tied in knots as various organizations have IPRs that relate to different parts of their research agenda. This creates what some have called an “anti-commons” (i.e., a situation in which each advance requires that one obtain [purchase?] permission from other persons or firms) (Heller and Eisenberg 1998). In addition, the tendency toward granting patents on genes sharply reduces their use by both medical practitioners and plant scientists (Berthels, Matthijs, and Van Overwalle 2011).

Engaging in research about protected innovations. Once something is protected by IPRs, researchers often find themselves denied access to that innovation even for purposes of research. For example, purchasers of genetically modified seeds must sign an agreement barring their use in research. Technically, they do not purchase them at all but only “lease” them for one season. Scientists who wish to do research on the environmental consequences of such crops must get corporate approval, which can be both limited (in terms of what research can be done and what information can be published) and agonizingly slow. Whether this restriction is legal is unclear, but it has had a chilling effect on public research. As a result, a few years ago, a group of scientists wrote a letter of protest to the US Environmental Protection Agency about their inability to examine the environmental effects of genetically modified plants (Pollack 2009). As they feared reprisals by the companies involved, they initially requested anonymity. Later, a subset of them published a paper outlining their concerns and expressing their hope that a recent agreement with the American Seed Trade Association would bring about a resolution (Sappington et al. 2010). However, it is still unclear whether the agreement has had the desired effect (Stutz 2010).

The rise of IP offices on university campuses. Since the passage of the Bayh-Dole Act (PL 96–517), US universities have been encouraged to patent inventions developed with government funds. Similar events have taken place in Canada (Martin and Ouellet 2010). As a result, most large research universities have formed intellectual property offices. A few have benefitted enormously as they have been able to obtain license fees from patents on extremely lucrative inventions. However, the tail on the distribution of revenues from patents is extremely long. One or two patents may generate nearly all of the revenues accumulated by a single university. Hence, although a few universities have benefitted, far more have found their intellectual property offices to be large sinkholes. After all, most things that are patentable are of little or no economic value.

Perhaps more important, it is at least questionable whether IP offices serve the public good. A recent US National Research Council report, for example, tells us that “[t]he first goal of university technology transfer involving IP is the expeditious and wide dissemination of university-generated technology for the public good” (Merrill and Mazza 2010, 2). It also warns university officials that IP offices should not be seen as revenue sources but as one more means of promoting the public good. One can hardly object to this. However, it is an open secret that universities do see IP offices as potential cash cows and try as best they can to maximize the returns on their “investments.” In Canada, that pursuit has been encouraged by the national and provincial governments, although it has been singularly unsuccessful (Martin and Ouellet 2010).

Yet the expansion of IPRs and the embracing of them as a source of revenue show how inappropriate this model is for university research. As Simon Marginson (2011, 8) explains,

This peculiar, public good-laden character of knowledge helps to explain why universities have been consistently disappointed in their expectations of commercial returns to research. ... There are normally several steps that must occur before ideas become enfolded into commodities, and by that stage the ideas have long been transformed by other economic processes in which the commercial value is created. It takes deep pockets to hold onto private ownership of the idea in itself all the way down the commercial value-creating chain.

Hence, to date, only a few universities have been able to turn IPRs into a significant source of commercial returns on investment. Universities are ill prepared to become venture capitalists. With few exceptions, they lack the capital, the enthusiasm for economic returns (in contrast to increased status or prestige), and the forms of organization necessary to move from knowledge to objects or processes that do well in the marketplace.

Rewarding IPRs. Encouraging faculty to produce knowledge that is subject to IPRs also shifts the reward system for scholars. In addition to including intellectual property as a criterion for merit increases, many universities and research institutes give a significant share of the royalties on intellectual property to the inventor. Arguably, this shifts the boundaries between knowledge as a public good and knowledge as a private good. Especially in the sciences and engineering, faculty have become perhaps all too attuned to the lure of producing patentable inventions, to the detriment of lines of research that may benefit society but offer no immediate economic payoff.

In short, the expansion of IPRs has had the (intended) effect of enclosing parts of the commons. As a result, many scientists at public universities and research institutes have shifted their research agendas so as to pursue an elusive monetary prize. It is unclear how differently knowledge might have developed had IPRs remained more limited in scope. Yet it is clear that the expansion of IPRs has not been particularly profitable for most universities.

***

In summary, at universities and research institutes in many nations, there is little doubt that, at least for permanent faculty, merit increases, promotion, and tenure are focused heavily on research publications to the virtual exclusion of all else. Not only are education and public engagement seen as of lesser importance, but other faculty activities essential to the university are also downgraded: advising students, attending seminars, organizing professional meetings, reviewing articles and grant applications, and engaging in informal interactions with other faculty members. Kathleen Lynch (2006, 9) put the matter even more strongly: “Once academics are only assessed and rewarded for communicating with other academics that is all they will do. In a research assessment system where one is rewarded for publishing in peer-reviewed books and journals, there is little incentive to invest in teaching, even the teaching that is part of one’s job. The incentive to teach or disseminate findings in the public sphere through public lectures, dialogues or partnerships with relevant civil society or statutory bodies is negligible.”

In short, the roles of researchers tend to be narrowed as they are individualized (i.e., focused more and more on generating the next peer-reviewed publication, itself often a highly individualizing activity). Such individualization often works to the detriment of students, colleagues, and the public.

Notes