Although all universities and research institutes worthy of the name have some administrative staff, the scope, division of labor, activities, conventions, and expectations of those administrators vary considerably across organizations, jurisdictions, regions, and nations. Despite this great variety, over the last 30 to 40 years a number of changes, some subtle and some quite obvious, have occurred. Nearly all can be traced to the neoliberalization of public universities and research institutes. In particular, administrators have been given greater formal autonomy as long as they play the neoliberal game of competition. In other words, administrators now find fewer direct legal barriers to action, although these vary considerably across and even within nations. However, the removal of legal barriers is linked to new modes of market-like competition, which create a variety of new reporting requirements, rules, and barriers, thereby limiting the ability of administrators to act independently. Boer and Jongbloed (2012, 558) explain, “Performance as measured by means of the number of graduates, study progress, academic output (e.g. publications or citations) or successful valorisation (e.g. number of patents) may be translated into a financial reward (or sanction) for institutions. A desire for potential gains and a fear for possible losses are expected to drive institutions towards high quality and efficient service delivery.”
This hardly implies that the State is retreating or reducing its role in higher education. To the contrary, through the establishment of various forms of competition and the focus on measurable outcomes, the State has actually tightened its grip on higher education in the name of an elusive claim to efficiency. Hence, the seemingly technical project of neoliberalism—establishing markets to improve efficiency—is actually a highly prescriptive, value-laden, and controlling project. This change in the means of governance has had numerous repercussions.
Changing roles and increasing numbers of administrators. Over the last several decades, we have seen both a change in the role of administrators and an increase in their numbers. Although informal contact with colleagues in other nations suggests that this is more or less true in most nations, I have only been able to uncover reasonably reliable data for the United States. There, the number of administrators has increased as the reporting requirements imposed on universities by national and sometimes regional or local governments have become more and more burdensome.1 According to one analysis of 193 research universities in the United States (Greene, Kisida, and Mills 2010, 1), “Between 1993 and 2007, the number of full-time administrators per 100 students at America’s leading universities grew by 39 percent, while the number of employees engaged in teaching, research or service only grew by 18 percent. Inflation-adjusted spending on administration per student increased by 61 percent during the same period, while instructional spending per student rose 39 percent.” Similarly, a study by the US Department of Education suggests that “employment of administrators jumped 60 percent from 1993 to 2009, 10 times the growth rate for tenured faculty” (Hechinger 2012).
Although there is doubtless some truth to the notion that administrators can increase their share of the pie with greater ease than faculty, staff, or students, this is likely a minor aspect of the problem. US universities must now report crimes on campus, textbooks ordered, salaries and career paths of graduates, incidents of sexual harassment, diversity procedures, extramural research grants received and administered, gender equality in sports, and services provided for persons with disabilities, among other things. Some 50 new administrative rules pertaining to research grants have been implemented since 1991, even as the federal government put a cap on the costs of research grant administration (Leshner and Fluharty 2012). Moreover, different agencies often have different reporting requirements (because different legislative committees and units of the executive branch put those rules in place). Ostensibly, this reporting makes it easier for “customers” to determine which university they wish to attend, as well as providing legislators and the general public with useful information about the effectiveness of university operations.
In other nations, such as Britain and France, national “quality assurance” agencies have been established to monitor university performance. These agencies, often only weakly accountable to legislators, add yet another layer of bureaucratic rule-making and largely unaccountable auditing on top of the freedom to operate that universities are alleged to gain. Quite obviously, they also employ a considerable number of bureaucrats and administrators.
As noted above, both university administrators and government bureaucrats (differently in different nations) have produced and responded to the competitions among universities through NPM. NPM policies “… are characterized by a combination of free market rhetoric and intensive managerial control practices” (Lorenz 2012, 600) (see Box 2). In short, in the name of enhanced accountability, efficiency, transparency, and quality, they have increased the frequency, detail, and importance of collecting massive amounts of data on the “productivity” of university faculty and the overall operation of the university. Moreover, NPM has been used to transform the reward structures and monitor the performance of faculty, students, and staff in an effort to promote market values and selected ways of measuring them—without subjecting them to any serious debate. Indeed, much as in former communist states, NPM puts control firmly in the hands of largely unaccountable bureaucrats.3 Ironically, “…the illusory solution to the fiscal crisis in higher education is to monitor, regulate, and reduce the costs of intellectual production, but to do so requires an ever larger, and more coercive, administrative apparatus” (Barrow 2010, 321).
For several years, I was on the faculty at Lancaster University, where I found my colleagues to be intellectually challenging and collegial. But as at all English universities, administrators there are under enormous pressure to perform in ways that raise the university’s standing in various rankings. During my stay in Lancaster, each member of the faculty received an email from the then Vice-Chancellor, Paul Wellings (2010), that included the following:
Lancaster’s current set of [Times Higher Education (THE)] rankings is a testament to the efforts of staff across the university. The University was ranked 124th in the THE World Ranking which is our best result to date.
Two factors made large contributions to our success. First, the fact that the Arts and Humanities ranked 31st is a considerable achievement and, second, the University’s overall high citation score demonstrates the significance and relevance of all our research.
While we all know that there will be volatility in these measures from year to year, we also know that our 1:10:100 target (Top in the North-West, Top 10 in England, Top 100 in the World) is now achievable. ….
The results of the 6th National Student Survey … show very high levels of overall satisfaction with the experience at Lancaster. However, there was a drop in satisfaction from 89% to 87%. It is important that the University pays close attention to these data and that Departments with poorer scores sort out local issues.
What Professor Wellings must surely have known, given his doctorate and numerous publications in ecology, is that the statistics he cites are fundamentally flawed. There are several issues to highlight in this email. First, the THE rankings have, since 2010, been compiled by Thomson Reuters, the same company that produces the Web of Science.2 Although arguably better than some other indicators, in 2010 some 32.5% of the ranking score was based on the highly biased citation indicators produced by that company. Moreover, 30% of the score was based on research volume, income, and reputation (Times Higher Education Supplement 2010). Whether these figures were controlled for the differing sizes of universities is unclear. Indeed, full disclosure would mean that anyone with access to the proper information could reproduce the results, thereby undermining The Times’s monopoly on those results; hence such disclosure is not forthcoming. This in turn means that, unlike all good science, the results of these rankings cannot be replicated.
Furthermore, it is entirely unclear what being “at the top” means, given the wide range of activities of universities. The term “satisfaction,” for example, is vague. It could mean that students received a first-rate education, but it could also easily mean that they found the programs easy to complete and the faculty friendly and generous with high grades. Moreover, it assumes (wrongly) that students are customers—consumers of the products offered at the university. Finally, even if a drop of two percentage points is statistically significant, it is clearly not substantively so. Being concerned about it is at least as problematic as noting an increase in the THE ranking of the university.
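To see why a change of that size can be “statistically significant” yet substantively trivial, consider a back-of-the-envelope two-proportion z-test. The sketch below is purely illustrative: the National Student Survey respondent counts are not given in the email, so the sample sizes are assumed.

```python
import math

# Two-proportion z-test for the reported drop in "overall satisfaction"
# from 89% to 87%. The survey's actual response counts are not given in
# the email, so the sample sizes below are purely hypothetical.

def two_proportion_z(p1, n1, p2, n2):
    """Return the z statistic for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: roughly 3,000 respondents in each survey year.
z = two_proportion_z(0.89, 3000, 0.87, 3000)
print(f"z = {z:.2f}")  # about 2.4, i.e., "significant" at p < 0.05

# Yet the effect itself is tiny: a two-point change on a coarse
# satisfaction scale, well within the year-to-year volatility the
# email itself concedes.
```

With a large enough sample, almost any difference clears the conventional significance threshold, which is precisely why statistical significance alone tells administrators nothing about whether a change matters.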
As a result, faculty and staff must devote a growing percentage of their time to ever more elaborate administrative tasks such as annual merit, tenure and promotion, and departmental reviews, as well as a seemingly endless barrage of forms to be filled out about virtually every aspect of one’s work. Such forms cannot merely be dismissed; they encourage those who are audited to think about and enact their work in certain ways, to note how their activities conform (or not) to certain norms implicit in the forms (Rose and Miller 1992) (see Box 3). All of this is in addition to the administrative burden associated with research grants as discussed below.
Recently, my university undertook a study as part of an “initiative to enhance the diversity and quality of MSU faculty and their work environment, ...” Each faculty member was requested to complete an online survey.
In almost every instance, the questions on the survey assumed widespread agreement about the current situation where little or no evidence of such agreement existed. Consider the following: The survey posed a series of statements about promotion and tenure, such as “I have a clear understanding of the promotion/tenure process in my unit,” to which respondents were to indicate a level of agreement (Woodruff 2013, 3). However, the statements failed to address whether the respondents thought the existing criteria and processes for tenure and promotion were the right ones.
Similarly, the questions about tensions between family obligations and workload masked the obvious. Untenured faculty are under increasing pressure to compete in an unwinnable race among individual faculty, departments, and universities based on flawed rankings that would never pass muster in an elementary statistics class. Moreover, that pressure has increased annually over the 27 years I have worked at MSU. Younger faculty members are obsessed with meeting ever-rising expectations for the production of journal articles, to the detriment of teaching, advising graduate students, university service, and even coming into the office to work and converse with colleagues. Ultimately, this encourages faculty to do mundane research and divide their work into ever smaller pieces, rather than produce a few excellent papers. Yet none of these issues was even hinted at in the survey.
Furthermore, the survey asked a series of questions about “MSU’s culture of high performance.” Like most faculty, I had no idea what that might mean.5 Of course, without knowing what it means, the answers are essentially meaningless as well. However, that in no way stopped the designers of the survey from including statements such as, “In your view, to what extent are the following attributes representative of the culture [of high performance] in your unit?” (Woodruff 2013, 16). This was followed by a series of boxes to check (from strongly agree to strongly disagree) as to whether the unit’s goals were articulated and measurable and whether high standards had been set. However, the survey failed to ask the preliminary questions about the proverbial elephant in the room: Are the standards the right ones? Are the measures appropriate? Are they measuring what they claim to measure? Without answers to these questions, the survey merely reproduces a gloss on the neoliberal policies adopted and fails to address key issues about workplace organization.
The survey also suggests a more serious problem subject to several differing interpretations. It may be that the neoliberal understandings of what universities are all about are so widespread among administrators that they are unaware of these concerns. Alternatively, it may be that administrators are afraid to raise these questions because they see them as a potential challenge to their authority. Either way, the survey encourages faculty to consider certain aspects of their work environment as fixed and agreed on. Whether that is in fact the case is an empirical question.
Furthermore, neoliberals have rarely if ever included the work of collecting, analyzing, and acting on those data as a real cost (e.g., Barrow 2010). Government bureaucrats (mandated by legislators to collect and act on certain kinds of information) use that information to rank universities in various ways, including the “efficiency” of their use of State funds, the graduation rates of their students, the time required to complete a degree, the salaries of recent graduates, the economic value of competitive grants received, and so on.
These rankings,4 however poorly designed, have real consequences for the universities involved. For example, consider that there are, broadly conceived, three ways to increase graduation rates: One can provide extra support for students who are having difficulties, including special tutoring and remedial courses. Alternatively, one can encourage grade inflation, thereby moving students who would otherwise fail through the university. Finally, one can restrict admissions to those who will be most likely to graduate. Of course, because funds are always in short supply, the first method is usually ruled out; the other two are at least tacitly encouraged.
Moreover, the information collected is used by administrators to grant or deny tenure and promotion, pursue or reject merit increases, make what are usually called market adjustments in salaries of individual faculty, and market their respective universities through advertising of various sorts. In short, NPM has allowed (or perhaps required) administrators to put into place market-like mechanisms and promote market values in places where the legitimacy of such values should be at best questionable.
Shift from academics to managers as administrators. We can also see a subtle but important shift in the notion of what it means to be an administrator. In the past, administrators tended to be academics or researchers who temporarily took on administrative roles; more recently, universities and research institutes have begun to hire administrators who have managerial rather than academic backgrounds. Although such persons may arguably be able to improve the efficiency of universities, it is far less clear that they are able to improve their quality.
Creation of administrative careers. Related to the shift to full-time managers and the higher salaries of administrators is the creation of administrative careers. It is no longer uncommon for administrators to spend most of their careers in administrative posts. They are rarely if ever to be found in the classroom, laboratory, or library. Such persons often move from institution to institution as a means of climbing the increasingly corporate university or research institute ladder. Moreover, because they are often disconnected from those who teach and/or engage in research, their views and actions with respect to universities and research organizations tend to differ markedly from those of people directly engaged in those activities.
Growth in salaries of top administrators. It will come as no surprise that the salaries of top administrators of universities and (to a lesser extent) research institutes have grown far more rapidly than those of faculty and staff. Total compensation for US public university presidents can now run as high as about $2 million per year, with a median of $248,000 per year (The Chronicle of Higher Education 2013). In contrast, the median salary for a full professor at a doctorate-granting university has remained level at about $87,000 per year (The Chronicle of Higher Education 2011). This parallels the growing gap between the salaries of top management and those of ordinary workers found throughout the corporate world. It is generally justified on the grounds that very high salaries are necessary to attract the best talent to these high-level positions. Yet, curiously, this appeared unnecessary just a few decades ago. It is particularly disturbing coming as it does in the midst of declining public support for higher education and research.6
Growth in advertising and marketing of universities and research institutes. As recently as 50 years ago, universities and research institutes rarely advertised their offerings. Information about universities was available through catalogs and various compendia found in bookstores and public libraries. Information about research institutes was perhaps available in short brochures provided to visitors. Today, both tend to have large and growing marketing offices. Although this trend has so far been concentrated in the rich nations, middle-income nations are beginning to follow suit. For example, one of the central messages of a recent report to the Malaysian government was “… the need for local higher education institutions to engage in self-promoting activities in the outside world” (Mok 2011, 3). These offices attempt to “sell” the organization to prospective students, faculty, legislators, research funding agencies, private corporations, and donors. Such marketing efforts often emphasize the scores obtained on a wide range of rankings. In some instances, institutions even pay to be included in these rankings. At best it is questionable whether such rankings serve their intended purposes: in a recent poll of US admissions directors, only 14% felt the rankings were useful to students in choosing among institutions (Houry 2013).
Moreover, even if they do serve their intended purposes, they also promote precisely the kind of unethical behavior that universities and research institutes are supposed to avoid. Hence, in their efforts to improve their standing in academic rankings, administrators at three well-known and respected US institutions—George Washington University, Claremont McKenna College, and Emory University—deliberately inflated the figures they reported.
Furthermore, universities market themselves not only to the “outside world” but also to those who are part of the university community. My university recently revamped its internal webpage carrying news about the university. Moreover, every member of the faculty and staff recently received an effusive email telling us about the new look of one of several weekly electronic newsletters. Glossy brochures proliferate, telling anyone willing to read them about the herculean feats performed in research, academics, sports, environmental protection, traffic flow, and so on.
Finally, it is worth noting that funds used for internal and external marketing are not otherwise available for research or education. Given the scope and growth of the marketing apparatus, this is hardly a trivial sum.
Growth in numbers of part-time and temporary (adjunct) faculty. The flip side of the growth in administration is the growth in the number of adjunct faculty and short-term research appointments. These people—who now constitute about 75% of the faculty at US universities—are hired on fixed-term appointments and have little or no opportunity to become permanent members of the faculty. They are usually poorly paid and overworked, and their share of the faculty continues to grow. Over the last several decades, this practice has created a two-class system of faculty appointments: those hired in part-time and temporary positions and those who are full-time permanent employees.
Changing sources of university and research institute financial support. The decline in State support to universities and research institutes has led both administrators and those in State agencies to consider other major sources of support: tuition, public and private sector competitive grants, and donations. Tuition has been raised substantially in many nations over the last several decades, but its growth has been slowed by objections from some legislators who argue (occasionally with some justification) that universities are inefficient and engage in outdated practices.
In the United States, competition for and receipt of public sector research grants has been encouraged by the awarding of “startup costs” to new faculty members. Initially found only in the natural sciences and engineering, where labs are necessary accoutrements for most research, startup costs are now commonly made available to social sciences and humanities faculty as well. Some years ago, receipt of grant funds was seen solely as an extra. Now it is often seen as a central criterion for awarding merit increases and promotions.
Corporate research grants have also been encouraged in a variety of ways. First, the older barriers to collaboration between universities and public research institutes, on the one hand, and the private sector, on the other, have been reduced or eliminated. Yet such relations might be rejected for many reasons that remain largely unaddressed: conflicts of interest, public subsidies to one firm to the detriment of others, and the tension between the private sector’s desire to keep knowledge proprietary and the public sector’s desire to make knowledge public. Despite these issues, public–private collaborations of all kinds are very much in vogue. Second, these are supplemented by the formation of university-led industrial research parks and, in some cases, by the building of new facilities that permit university and industry scientists to work side by side. A $500 million investment by BP at the University of California, Berkeley, is a case in point; it has been the subject of continuing conflict between administrators and some faculty.
Finally, many universities and research institutes have expanded their development offices, seeking donations from wealthy alumni, large corporations, and large foundations. Special facilities have been built in which to entertain donors, and a variety of arrangements have been made with those willing and able to donate large sums. Occasionally, this approach backfires, as donors demand an active role in university or research institute affairs. This is true both of foundations that now engage in “strategic philanthropy” and of wealthy individuals and corporations that feel obligated to ensure that their funds are used in certain ways. The former often saddle grant recipients with requirements for measurable goals and outcomes, such that larger, more creative, and longer-term projects become impossible. The latter often attempt to intervene directly in university affairs, leading to clashes with administrators and, in some cases, withdrawal of funds (Katz 2012).
Universities by the numbers. The entire shift toward markets and competitions has pushed administrators to govern universities as much as possible by numbers. Indeed, a European study noted that “… at least 980 universities proposed, in their mission statements, to achieve a high level of international excellence in research [as measured by scores in various rankings]. It reflects both an unrealizable aspiration and a lost potential for many other areas where universities bring benefit to their communities” (Boulton 2010, 6). Yet numbers, usually in the form of quantitative data collected from convenience samples, appear to have a kind of concreteness, although nothing is quite as abstract as numbers.
Consider the “discovery” in 2003 that French universities lagged behind those of other countries in the Shanghai rankings. This sparked a great deal of concern and considerable reorganization of French higher education and research along largely neoliberal lines. That said, it appears few politicians or academics spent much time examining the formulation of those rankings. As it turns out, the Shanghai rankings, developed at Shanghai Jiao Tong University, are at best a poor (or extremely limited) measure of quality. As Gingras (2008, 8–9) explains:
It is composed of six measures of which four have a weight of 20% each: 1) the members of the faculty who have received a Nobel prize or a Fields Medal (for mathematicians), 2) the number of researchers at the institution who are on the “most cited” list of Thomson Reuters, 3) the number of articles from the institution published in Nature or Science, 4) the total number of articles listed in the Web of Science by the Thomson Reuters company. Two other measures each have a weight of 10%: 5) the number of former students who have received a Nobel Prize or a Fields Medal, 6) an adjustment of the preceding results according to the size of the institution. It is clearly evident that the final index of success is based on several heterogeneous measures, because the number of publications in Science and Nature is not commensurable with the number of Nobel Prizes. Even more surprising, it has been shown that the data on which it is founded are difficult to reproduce.
In addition, the focus of the rankings is entirely on the natural sciences and engineering. No attempt is made to include the social sciences, arts, or humanities. Nor does the measure include anything about teaching and learning. The use of the Web of Science gives the entire exercise an Anglo-American bias. In short, what we have is a highly flawed measure that was accepted at face value.7
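Gingras’s description makes it easy to see how such a composite is actually assembled, and why the heterogeneity he notes matters: indicators on wildly different scales (prizes, highly cited researchers, article counts) must first be normalized before weights can be applied, and every choice in that normalization, like the weights themselves, is a judgment call. The sketch below is purely illustrative: the institution data are invented, and the normalization (scaling each indicator so that the top-scoring institution receives 100) follows the convention ARWU describes, not a verified reimplementation of the official procedure.

```python
# Illustrative reconstruction of a Shanghai-style composite score using the
# weights Gingras lists. Raw indicator values are invented; treat this as a
# sketch of the arithmetic, not the official algorithm.

WEIGHTS = {
    "alumni_awards": 0.10,   # alumni with Nobel Prizes / Fields Medals
    "staff_awards": 0.20,    # faculty with Nobel Prizes / Fields Medals
    "highly_cited": 0.20,    # researchers on the "most cited" list
    "nature_science": 0.20,  # articles in Nature or Science
    "wos_articles": 0.20,    # articles indexed in the Web of Science
    "per_capita": 0.10,      # size-adjusted version of the above
}

# Invented raw indicator values for three hypothetical institutions.
raw = {
    "Univ A": {"alumni_awards": 10, "staff_awards": 15, "highly_cited": 50,
               "nature_science": 120, "wos_articles": 9000, "per_capita": 40},
    "Univ B": {"alumni_awards": 2, "staff_awards": 1, "highly_cited": 20,
               "nature_science": 30, "wos_articles": 12000, "per_capita": 55},
    "Univ C": {"alumni_awards": 0, "staff_awards": 0, "highly_cited": 5,
               "nature_science": 10, "wos_articles": 4000, "per_capita": 20},
}

def composite(scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """Normalize each indicator to the top scorer (=100), then weight and sum."""
    tops = {k: max(inst[k] for inst in scores.values()) for k in WEIGHTS}
    return {
        name: sum(WEIGHTS[k] * (100 * inst[k] / tops[k] if tops[k] else 0)
                  for k in WEIGHTS)
        for name, inst in scores.items()
    }

for name, score in sorted(composite(raw).items(), key=lambda x: -x[1]):
    print(f"{name}: {score:.1f}")
# Note how heavily the result depends on Web of Science coverage and on a
# handful of prizes, and how a small change in any weight reshuffles the order.
```

Even this toy version makes the point: the final ordering is an artifact of arbitrary weights applied to incommensurable quantities, and anyone without access to the underlying data cannot check, let alone replicate, the published result.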
Nevertheless, the French government engaged in reorganization at considerable cost. Universities were linked more closely to the Grandes Ecoles and research institutes through the creation of “Poles of research and higher education.” Attempts to restrict university admissions provoked considerable pushback from both high school and university students. The passage of a law somewhat misnamed the “Liberties and Responsibilities of Universities” (LRU) forced universities to compete for the meager funds allotted to them. World Class Universities Programmes (WCUPs) were created through mergers among institutions and reorganizations promoting various kinds of competition. Several large national bureaucracies, including France’s Evaluation Agency for Research and Higher Education (AERES), were created (Cremonini et al. 2013). Not surprisingly, virtually no change occurred in France’s position in the Shanghai rankings.
That said, reforms such as those in France and elsewhere have pressed nearly every large research university and institute to establish some sort of office that collects and analyzes massive amounts of numerical data about the institution in order to engage in what have become global competitions. Similarly, State agencies that monitor higher education and research also maintain such databases. One tendency (also common in the business world) is to attempt to govern both universities and research institutes largely by these numerical data.
However, the search for the perfect metric is at best illusory for several reasons. First, many of the things for which universities are designed are difficult or impossible to put in numerical terms. For example, it is easy to measure the percentage of students who graduate within five years but much more difficult to measure what they have learned. It is easy to count the number of publications produced by a faculty member but far more difficult to measure their import. It is easy to count the number of persons attending public meetings organized by university or research institute faculty but far more difficult to determine whether they found the information exchanged at those meetings sufficiently valuable to be taken seriously and acted on.
For many metrics that involve human activity, those measured will restructure their behavior so as to maximize their scores. In contrast, when measuring the growth rate of a bacterial culture, the measure used has little or no effect on the bacteria. This means that almost as soon as a new human metric is put into place, those measured adjust their behavior in ways that (at least potentially) undermine the very metric employed. It is perhaps best expressed as Campbell’s (1979, 85, emphasis in original) Law: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
Furthermore, metrics may give those in charge the impression that they eliminate the need for judgment. In point of fact, however, metrics never substitute for judgment. One must always determine whether a given bit of information conforms to the categories defined in the metric. As statistician Marcello Boldrini (1972, 203) noted some years ago, “[t]he truth is that without starting from the formation of cases, there can be no induction: here begins the creation of the uniformity of nature by the human mind, from which are produced the structures of every factual regularity.” Moreover, those charged with analyzing the data generated by such metrics often have little or no direct connection with the data-collection process. Hence, they may well make erroneous assumptions about just what the data mean. For example, they may assume that the subject on which students spend the most time is the one they are most interested in pursuing, but that might not be the case at all (Parry 2012).
Moreover, in contrast to data generated expressly for scientific and scholarly research, data drawn from forms, webpages, directories, test scores, and other materials designed for other purposes may contain many errors. Hence, for example, it is well established that citation counts from Google Scholar, Scopus, and the Web of Science give different results. None of these databases was designed with the creation of faculty productivity metrics in mind.
Finally, virtually none of the numbers includes measures of student experience and learning, diversity of the student body, democratization of higher education, or the independence of thought and action—the liberty—of scholars. Instead, most are focused on making universities and research institutes even more elitist in their organization by (1) increasing competition among students to be admitted, (2) increasing concern among administrators (and perhaps faculty) about enhancing the relative rankings of the institution in question, (3) using indicators to focus the attention of scholars on those things that are easily measured and deemed politically desirable by elites, and (4) discouraging interaction between scholars and those outside the scholarly world.
In summary, as a result of a wide range of legislative and executive policies implemented by various governments over the last three to four decades, the administration of universities and research institutes has been restructured so as to undermine collegiality and promote managerial control and hierarchy. This has been done (allegedly) in the name of efficiency in the use of public funds. But it has promoted vast bureaucracies at each institution and in government agencies that are both opaque and largely unaccountable. As we shall see, it has transformed higher education and research in ways that treat scholars and students as isolates, reduce their autonomy and freedoms, and undermine free inquiry.