The Green and White Papers signalled substantial change in research arrangements. John Dawkins proposed to give greater priority to fields that would improve the nation’s competitive position, to increase interaction with industry and to impose greater accountability for the universities’ use of public funds. At the same time he was extending research across the whole of the Unified National System without any significant increase in financial support. The gap between expectation and provision created alarm among the established universities, which had most to lose, and their worries were not allayed by the mixed messages of the two policy documents. The government would continue to fund basic research across the entire range of fields, but that funding was not a ‘right’ and had to be ‘earned through demonstrated achievements’. In the new parlance, concentration and selectivity would be the guiding principles of the research system. Dawkins had already transferred a first instalment of research money from the pre-1987 universities for competitive allocation. Just how much would be clawed back was still to be determined, as were the research priorities, the performance measures and much else.1
Australia came late to such close direction of university research. It had followed the United Kingdom in 1965 by establishing a funding agency, the Australian Research Grants Committee (ARGC), which, with the National Health and Medical Research Council (NHMRC), awarded project grants and fellowships on a competitive but open basis. Then, in 1982, CTEC created a program of Centres of Excellence to build concentrations in areas of strength, and this was followed in 1985 by a similar program for Key Centres of Teaching and Research, which had a more applied mission oriented to the colleges. Higher education also attracted grants from a number of Commonwealth agencies that promoted research and development in energy, water and rural industries. These various sources of discretionary funding amounted to $182 million in 1987 and were far outweighed by the money that CTEC allocated through the operational grant, which was calculated to amount to $903 million. That calculation, however, relied on an imputed $720 million representing the proportion of Commonwealth funding for salaries, buildings and equipment that supposedly supported research.2
Together, the two streams of direct and indirect research funding made up what was known as a dual system, the one directed to specific activities and the other allowing academics to pursue their own interests. The proportion of direct funding (17 per cent) was low by international standards, but then Australia spent much less on research and development than leading industrial countries: just 1.24 per cent of Gross Domestic Product against 2.71 per cent in the United States and Germany, and 2.42 per cent in the United Kingdom. It lagged particularly in research and development in the private sector, despite the recent introduction of a company tax concession, so that government spent a disproportionately high 0.74 per cent, of which about half was directed to universities. The universities in turn contributed more than half of the country’s basic research effort.3
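As a rough check rather than an official calculation, the 17 per cent share of direct funding follows from the two figures cited above, taking the $182 million of discretionary funding and the $903 million allocated through the operating grant as the two streams of the dual system:

\[
\frac{182}{182 + 903} \approx 0.17,
\]

or about 17 per cent of total research support provided directly.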
‘Basic research’ was defined by the OECD as part of its taxonomy for the collection of international statistics on research and development. It referred to the acquisition of knowledge, whether exploratory, descriptive or explanatory, but the definition relied on what it was not—basic research was not directed to any practical end other than the advancement of knowledge. Applied research, on the other hand, was an investigation directed to an explicit purpose, and experimental development was work that drew on such knowledge to produce or improve a product, process or service. The three categories were sometimes understood as suggesting a hierarchy or linear sequence whereby discoveries made through basic research were refined and then turned into practical outcomes, although that expectation was not shared by all who conducted basic research, and it did not convince sceptics who regarded many of their projects as an esoteric indulgence.
Vannevar Bush, a former vice-president and dean of engineering at the Massachusetts Institute of Technology who directed the Office of Scientific Research and Development during World War II, was perhaps the most influential champion of basic research. The nuclear bomb used to end that war had been made possible by academics exploring the mysteries of the atom with no thought for its use as a weapon until circumstances led to the application of their discovery. Bush set out the implications in his 1945 report to the President, Science, the Endless Frontier: ‘New products and processes are founded on new principles and new conceptions, which, in turn, are developed by research in the purest realms of science.’ Governments should support such research but they could not direct it, for some of the most important discoveries came as by-products of work on the frontier of knowledge with outcomes incapable of prediction, whose significance only scientists could evaluate. It was in the university that such research flourished through the ‘free play of free intellects, working on subjects of their own choice in the manner dictated by their curiosity’.4
From this proclamation came the establishment of the National Science Foundation as a government agency that funded such research, and other countries followed with their own support arrangements. But government patronage came with expectations that grew as the scale and cost of ‘big science’ increased. By the 1980s it was common to distinguish ‘pure’ basic research, concerned solely with the advancement of knowledge, from ‘strategic’ basic research carried out in the expectation that the knowledge would provide a base for the solution of recognised problems. Hence the White Paper spoke of an increasing emphasis on strategic basic research yielding economic and social benefit.5
Concentration and selectivity was to be achieved by transferring funds from the universities to the new Australian Research Council (ARC) for competitive reallocation. The ARC would determine areas of priority, and all higher education institutions were required to develop research management plans identifying and supporting their areas of strength. The new policy role of the ARC aroused particular resistance because it was to be chaired by Don Aitkin, an outspoken member of the Purple Circle whose criticism of the universities’ research performance was on record. He condemned their cloistered unworldliness, derided their neglect of the national interest (‘the great majority of researchers believe fundamental research is next to cleanliness in the universal scale of values’) and challenged the hallowed axiom that all academics should be engaged in research. That heresy had found its way into the White Paper’s distinction between research and scholarship: all academics should be familiar with the way that knowledge was generated in their discipline and keep up with recent developments, but not all would participate in the production of that knowledge.6
Aitkin was soon at loggerheads with the scientific members of his Council, some of whom doubted the capacity of a mere political scientist to understand, let alone direct, real research.7 Beyond this personal antagonism, the universities were angered by the magnitude of the clawback announced in the White Paper. After $5 million in 1988, it was to be $20 million in 1989, $40 million in 1990 and $65 million in 1991; in the course of the triennium, the ARC would advise the government of any further transfers from operating grants to its own coffers. The Minister professed surprise at the indignation this caused; after all, the money would be available for universities to win back. But his argument overlooked the fact that grants from the ARC and other agencies did not cover the full costs of research. They met direct costs but made little allowance for the buildings, equipment, support staff, library and computing services, all of which were run down after a decade of frozen Commonwealth funding. A number of recent reports had identified the deficiency, and the one produced by the Australian Science and Technology Council (ASTEC), which proposed the formation of the ARC, did so with the express proviso that there be no transfer of funds from universities’ operating grants. Any reduction of their existing allocation, it warned, ‘would place an intolerable strain on the higher education research system’. The White Paper acknowledged that the ‘reallocation of funds may aggravate the problems of general infrastructure support’, but left those problems to a future inquiry.8
This was the point at which the Australian Vice-Chancellors’ Committee (AVCC) jibbed. In response to a threat from the vice-chancellors in September 1988 to stay out of the Unified National System until the effect of the clawback was evaluated, Dawkins offered a review by the National Board of Employment, Education and Training (NBEET). But as was related in chapter 4, this did not satisfy David Penington of Melbourne, who thought there was little chance that NBEET would overturn the directions taken by one of its constituent councils, and insisted on an independent investigation, to be chaired by NBEET’s Bob Smith but with a broader composition. Aitkin’s response that the universities’ concerns were ‘pretty hollow’ could not disguise the rebuff to his authority.9 Although he was a member of the Smith committee, it gave the universities a platform to set out their own views on research policy and brought significant modification of the support arrangements.
The Committee to Review Higher Education Research Policy had ten members. Apart from Bob Smith, Don Aitkin and David Penington, there were Gregor Ramsey as chair of the Higher Education Council (HEC), Ray Martin as chair of ASTEC, Ian McCloskey as deputy chair of the NHMRC and four members of the ARC. One of them, Dennis Gibson, head of the Queensland Institute of Technology, came from the college sector and another from the business sector, while Max Brennan was a professor of physics and Adrienne Clarke a professor of botany; Martin had held a chair of chemistry before serving as vice-chancellor of Monash, McCloskey was head of the School of Physiology and Pharmacology at the University of New South Wales and Penington had been professor of medicine until 1987. This was a committee well versed in the needs of scientific research, although it affirmed the importance of the humanities and social sciences.
The members worked to a tight timetable, but their procedure provided an implicit rebuke to the smoke and mirrors of the Green and White Papers. They called for submissions, received 132 (the AVCC employed Hugh Hudson to help prepare a substantial one) and documented them. The committee held consultations with interested parties, went carefully over all relevant policy documents and presented detailed statistical analyses. Penington, who drafted two substantial chapters, even anticipated DEET’s three working groups by working out a comprehensive relative funding model.
The committee accepted that research funds should be allocated competitively and go to the institutions, research groups and individuals best able to use them; a corollary was that the allocation should be based on explicit criteria and applied in an open and consistent manner. It also accepted that ‘higher education research must become more attuned to national social and economic objectives’, but insisted that the most fundamental objective was the advancement of knowledge. There was a need to set national priorities, provided they included ‘a substantial commitment to basic research which provides the advances in knowledge crucial to Australia’s long-term development’. In determining priorities it was incumbent on NBEET and the ARC to consult widely. The ARC had to be accountable to government, but if it was to exercise effective leadership it had to have ‘the confidence and respect of the academic community’. Above all, institutions needed the capacity to play their part in the country’s research system.10
Penington was not able to persuade his colleagues to recommend the clawback be set aside. The committee did, however, propose that $65 million be injected into higher education over the remaining two years of the triennium to remedy the deficiencies of research infrastructure. Half of this was to be allocated by the HEC, and the other half provided to the ARC and NHMRC so that they could make better provision for infrastructure in their funding of projects and centres. An additional $45 million was sought, again over two years, to create a new infrastructure fund. The Department of Finance encouraged the committee in these requests, for it believed that the Unified National System had spread research support too thinly. And the committee also recommended the creation of a new kind of centre, a Strategic Research Centre, to bring universities and industries together to work on fields of science and technology with potential economic benefit.11
The Minister responded to this and another review of higher degree research in May 1989. He affirmed the need for a broad research capacity in a wide range of fields but insisted that ‘choices have to be made and priorities established’. There would ‘always be a place for the outstanding individual’, but as costs rose and competition became more intense the future lay with selectivity—‘increasingly it will only be the best researchers who can expect to obtain the support of institutions and granting bodies’. The Smith committee’s emphasis on basic research was rejected on the grounds that ‘modern research’ no longer expected discoveries to come from undirected inquiry: the future lay with concentrations of researchers from many disciplines working together on a common task, and often at the point of application. Even though the Minister allowed the ‘considerable potential’ of the Strategic Research Centres proposed by the committee, he was not prepared to fund them.12
Nor did Dawkins accept the committee’s recommendation for an injection of funds to universities to make good the effects of the clawback. Instead he announced the provision of $107.5 million over three years to help with infrastructure. His method of providing this assistance was through the ARC and on a competitive basis. From 1990 the ARC distributed infrastructure funds through four streams. The first, known as Mechanism A, directed block grants to pre-1987 universities based on their success in winning research grants. The second, Mechanism B, provided development grants to the former colleges (which were also assisted by a 35 per cent infrastructure loading on their project grants), while Mechanism C funded large-scale initiatives shared between institutions and a fourth stream allowed for the purchase of expensive equipment.13 In the same year the Minister accepted the advice of NBEET that the clawback be left at $65 million annually, which amounted to 3.5 per cent of the operating grants of the pre-1987 universities.14 Together, the new infrastructure mechanisms restored about 70 per cent of the money lost in the clawback, although the pre-1987 universities received little more than half of it.
The other change to research funding came with the creation of the relative funding model in 1990 to align the operational grants of universities and colleges. In addition to the teaching component, it created the Research Quantum, which was set initially at 6 per cent of the Commonwealth’s allocation—that being the portion of the pre-1987 universities’ grants that could not be attributed to teaching. The Quantum was meant to support university-funded research, including study leave, and assist with the costs of externally funded research. Principally because there was a lack of reliable data for a more comprehensive measurement of performance, it was determined solely by the success of institutions in winning Commonwealth-funded research grants and thus favoured the established universities. The seven most successful won two-thirds of such grants and gained two-thirds of the Research Quantum; it contributed more than 10 per cent of the operational grant of some of them.
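Expressed schematically (a sketch of the allocation rule described above rather than the Department’s published formula), each institution’s share of the Quantum simply tracked its share of Commonwealth competitive grant income:

\[
Q_i = \frac{G_i}{\sum_j G_j}\, Q_{\text{total}},
\]

where \(G_i\) is institution \(i\)’s Commonwealth-funded research grant income and \(Q_{\text{total}}\) is the pool set at 6 per cent of the Commonwealth allocation.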
Yet the Quantum provided only a small proportion of a Commonwealth competitive grant, a figure well below the infrastructure support provided in other countries. All members of the Unified National System, regardless of their success in winning grants, were faced with a vexatious dilemma. Should their Research Quantum be used to meet the needs of those who won the grants or should it support the endeavours of their grantless colleagues? It was in making such decisions that the consequences of concentration and selectivity were most painfully apparent.15
The mismatch between researchers’ needs and their institutional support became more pronounced as the direct funding of research increased. It was exacerbated by a substantial increase in the cost of equipment, most of which was bought overseas at a time when the exchange rate disadvantaged purchasers—the Australian dollar bought somewhere between 65 and 75 US cents throughout the first half of the 1990s. Tellingly, it was Ross Free, Minister for Science and Technology, who recognised the problem, and at his instigation NBEET conducted a review of research infrastructure in 1993. It calculated that the increase of research grants and centres had imposed an additional $125 million per annum in unfunded infrastructure costs. This finding brought little relief, other than hotly contested proposals to change the method of allocating the existing infrastructure provision, and the government continued to create new externally funded schemes with insufficient consideration of their effect on the research fabric. The AVCC called repeatedly for a consistent long-term investment plan, without success.16
A review in 1990 had already recognised the problems of university libraries, although the reviewers were required to frame their recommendations ‘within the context of current resourcing levels’. They found substantial variation in provision: universities spent between 4.9 and 9.9 per cent of their recurrent funds on libraries; colleges between 2.9 and 13.3 per cent. Across the Unified National System, moreover, the proportionate expenditure was falling. The reviewers remarked on the common disregard of library needs: a new course would be proposed without considering the availability of reading material; a research centre would frame its budget on the assumption that the library would service its requirements. University administrations, they suggested, viewed the library as a bottomless pit, a cost centre without any limit to its needs, and the shift to devolved budgets brought little relief. As the librarian of the University of Sydney remarked, ‘the Library is everybody’s second priority’. The review was asked to look for economies through cooperative arrangements, automated library systems and electronic resources, although its report noted how new technology generated new expectations and new demands; ‘mirage-like, the predicted savings constantly recede’. Even so, it was not possible to opt out, for ‘the old ways of doing things will no longer suffice either’.17
If nothing else, the review’s report made clear that the library was a crucial research resource. It found that the library costs of a higher degree student were more than ten times those of an undergraduate, and estimated the annual cost per researcher at $2300. It also persuaded DEET to set aside part of the Mechanism C infrastructure funds for library projects. Even so, libraries struggled to keep up with escalating costs. As with equipment, the greater part of their acquisitions budget was spent on overseas material. Aside from the low purchasing power of the Australian dollar, they were faced with steep increases in the cost of serial subscriptions as large commercial publishers acquired indispensable journals from the societies that created them and then sold the information back to the academics who had produced it, except that the cost was borne at the institutional level. Even though university libraries culled subscriptions during the 1990s, expenditure on journals continued to rise and squeeze out book purchases. There were no more intense arguments than those between faculties, schools and departments over the formulae devised at this time to ration library acquisitions.18
Such were the consequences of the rapid growth of externally funded research. The universities were under no compulsion to accept parsimonious project grants or bid for marginally costed centres. They did so because there was no other way of building up their research. Similarly, they complained that direction of funding to priority areas came at the expense of other fields and reduced their ability to determine the projects they wished to pursue. Again, they controlled the use of their infrastructure and Research Quantum income but had no alternative other than to provide for those who won the grants and contracts. The new arrangements used concentration and selectivity to steer research, underfunding and competition to shape its architecture.
In 1988, the first year of the clawback, the ARC provided $42.3 million in research grants and $7.3 million in fellowships. Both schemes grew rapidly so that $104.5 million was spent on grants in 1991 and $14.3 million on an expanded range of fellowships.19 But as the number of grant applications increased, the success rate fell—from 42 per cent in 1989 to 24 per cent in the following year and 19 per cent by 1993. With just 100 ARC fellowships from postdoctoral to professorial level on offer, the competition for full-time posts was even more intense. The increase in funds was made possible by the clawback and plateaued when it was left at its 1991 level, so these schemes marked time in the years that followed. There were 3088 applications for new grants in 1993, of which 627 were successful; 2832 applications and 667 grants in 1996.20
The result was a highly selective system of direct research support, although the patterns of concentration were open to question. The seven leading universities, Adelaide, Melbourne, Monash, New South Wales, Queensland, Sydney and Western Australia (ANU’s Institute of Advanced Studies did not become eligible until 2002) consistently won two-thirds or more of the available funds. There was also an astonishing masculine bias that discouraged women, who submitted just 12.8 per cent of applications in 1992 and 11.8 per cent in 1996 when they comprised a third of the academic workforce. Patterns of disciplinary support underwent limited change. The biological sciences received 25 per cent of grant funding in 1988 and 26 per cent in 1996; humanities and social sciences improved slightly from 18 to 19 per cent. On the other hand, engineering, earth sciences and applied sciences increased their share from 25 to 31 per cent at the expense of physical, mathematical and chemical sciences, which fell back from 32 to 23 per cent.21
The ARC had less control over an auxiliary program of Small Grants that was introduced in 1989. It allocated the funds, amounting to a fifth of its grant expenditure, to universities on the basis of Large Grant success in the expectation that they would assist disciplines that did not require large amounts of money, but left the distribution to their discretion. A higher proportion of these Small Grants went to such disciplines and to women, but here too a policy of concentration and selectivity was evident. Especially in the more successful universities, Small Grants were distributed to the departments that had ‘earned’ them by their Large Grant success, and an increasing number of Small and Large Grants were held concurrently.22
Two methods were used to direct ARC support to specific kinds of research. The first was by reserving some of its grants for designated priority areas. Five priorities were declared in 1989: materials science and mineral processing; scientific instrumentation; cognitive science; molecular approaches to the management of biological resources; and marine science and technology. The last of these was dropped in 1990, and Australia’s Asian context added as a sop to the humanities and social sciences. The management of resources went in 1992, and there were further changes in later years that enshrined food science, optics, sustainability and citizenship. It is difficult to discern any coherent rationale for these choices, which came and went with an unsettling frequency. ‘If nothing else’, a report on such direction setting observed, Australia was a ‘world expert in generating lists of technologies’.23
In 1989 the ARC set aside 7 per cent of its grant funds for priority areas, and this sequestration increased to 16 per cent in 1992 before falling below 10 per cent after 1994 in tacit acknowledgement that such special funding had limited success.24 The other device was the introduction in 1991 of collaborative research grants and fellowships, funded jointly by the ARC and an industry partner. These were directed to more applied research, with no restriction on the field of investigation, and social scientists made good use of them. They proved extremely popular, partly because their success rate was higher than that for Large Grants, and grew rapidly from an initial $3.2 million to $12.6 million in 1994, when 180 collaborations were funded.25
The structure and composition of the ARC militated against any more substantial redirection of effort. The expert panels that determined the grants were composed largely of professors and guided by assessments of applications by leading researchers in the field. It was frequently alleged that the disciplinary constitution of the ARC’s panels disadvantaged more innovative interdisciplinary proposals, that the Australian research community was too small to allow robust peer judgement and that the importance attached to track records meant that few grants went to early career researchers. For its part, the ARC held fast to academic excellence as the overriding consideration. As Max Brennan, the physicist who succeeded Don Aitkin as chair of the Council, put it, ‘excellence and relevance’ were ‘two sides of the same coin’.26
The competition was intense. A system of peer evaluation is usually able to reach agreement on the outstanding proposals and set aside others that are not competitive, but if only one in five applications can be funded, it is very difficult to discriminate between those at the margin. When a success rate drops below 30 per cent, moreover, applicants are likely to think twice about investing the effort in preparing a proposal. But with so much riding on ARC income, institutions expected their staff to do so; by 1996 some made it part of performance appraisal. So each summer several thousand academics toiled at their homework, most doomed to disappointment. Apart from the waste of time that might have been spent on a more practical endeavour, the effects of failure could be crushing.27
The NHMRC administered a similar program of university grants and fellowships in addition to its block funding of medical research institutes. Separately financed through the Department of Health, the NHMRC matched the ARC in the monetary value of grants to universities in 1988 but then fell behind—it was after a major review in 1999 that its project funding grew rapidly to parity with the ARC. Yet even in the 1990s the NHMRC had an appreciable effect on research concentration, for there were just ten medical schools and all but those at Flinders and the Universities of Newcastle and Tasmania were part of the seven leading research universities and therefore augmented their share of research income as well as the Research Quantum and infrastructure funding that came with it. In 1991, for example, Melbourne received $12.3 million in ARC grants and $10 million in NHMRC grants, Sydney $10.8 million and $8 million, Monash $7 million and $6.2 million. Medical and bioscience already represented half their research effort.28
Support for medical research came with a far-reaching condition. In 1966 the NHMRC had issued a statement of ethical principles governing medical research involving human subjects. In 1976 it required institutions to establish an ethics committee and in 1985 stipulated that any institution conducting research on human subjects must conform to its code of practice as a condition of eligibility for funding.29 That code applied not just to clinical trials but to all forms of research involving human subjects, including the social sciences. It is striking that neither the ARC nor the AVCC objected to the doctors’ disciplinary aggrandisement. By 1990 all universities had bowed to the NHMRC’s demand and established ethics committees. All academics and students were required to obtain prior approval for their research proposals if they involved any form of survey, interview, observation or ethnographic study.
The ANU philosopher Philip Pettit protested against the intrusive regimen in an influential article published in 1993. Regulation of ethical conduct, he argued, arose in response to a flagrant abuse—the Helsinki declaration of the World Medical Association, on which the NHMRC based its policy, followed the Nuremberg Code issued in 1947 in the aftermath of trials of doctors who conducted experiments in German concentration camps. Pettit discerned a repeated cycle whereby outrage following exposure of malpractice led to new rules governing the practice that had to be tightened after each new scandal. Ethics committees accordingly erred on the side of caution; their tendency to self-righteousness and an asymmetry of penalties (there was no sanction when a committee rejected a research proposal that deserved support) led to a growing intrusiveness. No system of ethical inquisition, he argued, could achieve a high level of ethical conduct, and an adversarial system of regulation brought only resentment and superficial compliance.30
A researcher who observed ethics committees at work noted how in asserting their authority they commonly cited worst-case scenarios. ‘Oh yeah, another one of those “Trust me, I’m a researcher”,’ one member laughed during discussion of a proposal that failed to specify the required safeguards. An economist working at the Australian Defence Force Academy in Canberra related how the human ethics committee of the University of New South Wales (to which the Academy is attached) came down to explain its requirements. ‘Their posture was one of missionaries bringing ethics to us poor savages for the first time’, and when their allusion to Nazi medical experimentation was contested, they resorted to the threat of blocking further research. One of Pettit’s suggestions was that ethics committees should be answerable, and their common practice of communicating by ‘do not reply’ emails is hardly good ethical practice. He also thought they should be required to justify their activity, for we have no information on the time spent on ethics approval, nor any comparison of the costs and benefits, including the opportunity cost.31
Compliance with the NHMRC’s Statement on Human Experimentation certainly imposed an additional cost on universities, and the need to obtain approval before commencing a project created delays (which particularly affected honours and postgraduate students working to a pre-determined date of completion). It also became part of universities’ research management, part of the way in which they recorded, regulated and directed the activity.
A further feature of research policy was the concentration of effort in specially funded centres. This had begun under CTEC with what were originally called Centres of Excellence, ten of them chosen in 1982 for their potential to contribute to the country’s economic, social and cultural development; the last of the descriptors, which did not survive the decade, indicated that these centres were meant to exhibit Australia’s strengths to the international research community. Seven more Special Research Centres were chosen in 1987, when the ARC assumed responsibility for them, and there was a third round in 1991. The centres were funded at a rate of between $400 000 and $1 million annually, for six years with the possibility of extension, and in 1992 there were twenty-six, all but four in fields of science—and one of those four was Wollongong’s Special Research Centre for Research Policy. The Key Centres for Teaching and Research began in 1985 with additional rounds in 1988 and 1989. They attracted around $200 000 per year, and in 1992 there were twenty-four of them. Altogether the government spent $20.7 million on centres in that year. A review of the two schemes found that the Special Research Centres had ‘significant impact’ and the performance of the Key Centres represented ‘a good investment of funds’, although it provided few measures of performance and funding fell away in subsequent years.32
By then a new and more ambitious kind of research centre had been devised. Its architect was Ralph Slatyer, a biologist and former chair of ASTEC who in 1989 became the country’s first Chief Scientist. He developed the idea of the Strategic Research Centre that had been advanced by the Smith committee and in turn followed similar developments in the United States and United Kingdom—a partnership of universities and industry to work on applied research of strategic significance. Slatyer’s scheme included public research organisations such as the CSIRO with the aim of strengthening links between researchers and users and capturing the potential of the country’s research capability. Slatyer had the Prime Minister’s ear (he had been a contemporary of Hawke at Perth Modern School and then the University of Western Australia), and Hawke declared in his election speech in March 1990 that Australia could no longer rely on its past good fortune as the Lucky Country and must instead become the ‘Clever Country’. To harness the talents of outstanding research groups, the government would provide $100 million a year to create a ‘network of fifty world-class Cooperative Research Centres’.33
Thirty-five Cooperative Research Centres were chosen in 1991 from 194 applications. By 1993 there were fifty-two of them, and in 1996 the government was providing $139 million for sixty-one centres whose income from all sources was estimated to amount to $400 million annually, although the university contribution was largely in kind (which was another way of saying that the centres made an additional call on higher education infrastructure). Here was concentration and selectivity with a vengeance, the selection being made by a committee consisting of members of the Prime Minister’s Science Council, industrialists and heads of research agencies. The centres ranged across agriculture and rural industry, environmental science, manufacturing, information technology, mining and health. The first of many reviews of this expensive program was hard pressed to demonstrate the returns. It calculated that they assembled 2300 person-years of research time in 1993, published 951 papers in refereed journals and lodged twenty-two patent applications. If that seemed a low rate of productivity, the reviewers took comfort in a fashionable addition to management jargon: ‘there is already a significant beneficial change in research culture’. Roughly paraphrased, this meant ‘we cannot find a lever to pull’.34
The most appreciable change in research culture was the importance attached to income. The unit of currency for teaching in the Unified National System was the weighted Equivalent Full Time Student Unit. For research it was the dollar. Money was no longer simply the medium that allowed research to be conducted; it became the measure of value and even the goal of research activity. And since external income determined the size of an institution’s Research Quantum and infrastructure block grant, work that did not depend on such support had no value.35
There were numerous problems with this monetary measure. First, it measured input and gave no indication of how well the money was used. Second, it was based on the supplementary cost of a project rather than the full cost; a research grant paid for research assistance, special equipment, materials and other expenses but not the time of the principal investigators. Third, it was strongly influenced by the different cost of equipment and support staff in particular disciplines and therefore disadvantaged universities that emphasised basic research in the humanities and social sciences. As Russell Linke pointed out, this funding system violated four core principles for the use of performance indicators: that the measures be appropriate, account for institutional context, be subject to expert judgement and provide incentives and opportunities for improvement.36
There was general agreement that research income alone was an inadequate determinant of the Research Quantum, but no agreement on alternative indices. NBEET’s 1993 report on research infrastructure proposed that the Quantum be replaced by block grants based on output and quality, and this was taken up by a working party of DEET, the ARC, HEC and AVCC. The difficulty of settling on appropriate measures was compounded by consideration of their likely effects on pre-1987 and post-1987 universities, so the composite research index adopted in 1994 was to be introduced over several years. National competitive grants would make up 75 per cent of the Research Quantum in 1995, reducing to 55 per cent by 1997, with other public sector and industry grants contributing an additional 15 per cent. Two output measures were introduced, publications to rise from 7 per cent to 18.75 per cent and research higher degree completions from 3 to 11.15 per cent. At the same time the Mechanism A and B infrastructure schemes were replaced by a single Research Infrastructure Block Grant, based solely on national competitive grants.37
Within a year these weightings were altered. Research income was restored to 82.5 per cent in 1996 (with national competitive grants counting 2.5 times more than other grants), reducing to 77.5 per cent; publications would increase to 17.5 per cent and higher degree completions to 5 per cent.38 The AVCC was closely involved in the design of this composite research index, especially the broadening of research income and the introduction of publications—it took responsibility for collecting the data for both the new components—and also wanted to develop a quality measure for publications. It soon became apparent, however, that disciplinary patterns were so diverse as to obviate any system-wide evaluation that relied on bibliometrics.39
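In schematic terms (a sketch based on the weightings quoted above once the transition was complete, not the official formula), the composite research index combined three normalised components:

\[
I_i = 0.775\,\hat{R}_i + 0.175\,\hat{P}_i + 0.05\,\hat{C}_i,
\]

where \(\hat{R}_i\), \(\hat{P}_i\) and \(\hat{C}_i\) denote institution \(i\)’s shares of research income (with national competitive grants counted at 2.5 times the value of other grants), weighted publications and research higher degree completions respectively.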
There were hopes that the inclusion of publications in the composite index would bring some relief to researchers who were poorly served by the grant system. Partly for this reason it cast the net wide, taking in not just books, chapters and journal articles but also audio-visual material, computer software, technical drawings and creative works. But in order to provide a common metric it also assigned weights to every one of the twenty-two publication categories. A research monograph was worth five times an article in a scholarly journal or a refereed conference paper, but a non-refereed article had only a fifth the value of a refereed one and a major recorded work in the performing arts just two-fifths that of a book chapter. These relativities incited extraordinary unhappiness and disputation. The sums of money involved were negligible. A refereed article ‘earned’ no more than a few hundred dollars through the composite research index but, as institutions strove to maximise their performance, the weightings became a symbol of the discontent of disadvantaged disciplines.
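Taking a refereed journal article as one unit, the relativities just described can be set out as follows (the weight assigned to a book chapter is not stated here, so it appears as \(w_{\text{chapter}}\)):

\[
\text{monograph} = 5, \qquad \text{refereed conference paper} = 1, \qquad \text{non-refereed article} = 0.2, \qquad \text{recorded performing-arts work} = 0.4\,w_{\text{chapter}}.
\]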
They also tested the academic conscience. The AVCC compiled the publications index from information provided by its members, who in turn relied on returns from their academic staff. In 1996 the Department engaged the accounting firm KPMG to carry out an audit of publications from a sample of institutions. It found an error rate of 59 per cent and was then commissioned to examine the publications data for 1995 after institutions were given the chance to revise their returns. The error rate on this occasion was 34 per cent. As a consequence publications were reduced to just 10 per cent of the composite index (and higher degree completions raised to 10 per cent) while the categories of publication were reduced to just four: books, book chapters, journal articles and conference papers.40 This only compounded the discontent, and it also had a perverse effect on patterns of activity. Journal editors were besieged by authors seeking to place an article but had great difficulty in finding book reviewers. Who could afford to write a case note for a law journal or an entry for a reference work now that these were no longer deemed to be publications?
Lost in the transition to the Gradgrind calculation of research output was any measure of quality. Other countries were moving at this time to systems of evaluation, most notably the United Kingdom, which introduced a comprehensive Research Assessment Exercise in 1986. All university departments and research groups there were assigned to a ‘unit of assessment’, and all were invited to submit a portfolio of indicators, among which were the best ‘outputs’ produced by each practitioner. A panel of specialists then ranked the quality of the work done in that discipline on a five-point scale, and the results were used to distribute 15 per cent of universities’ funding. The Research Assessment Exercise was repeated in 1989, 1992, 1996 and thereafter on a five-year cycle, and with each iteration the stakes were higher—the one conducted in 1996 determined the allocation of £4 billion over five years. Those departments that failed to achieve a satisfactory result lost their funding and in many cases were closed down.41
This form of research assessment by peer review was adopted by a number of other countries, but not Australia until 2010 and then only in an attenuated form. Among other reasons, it was extremely expensive. The 1996 British round considered the research of 56 000 academics in 3000 departments of 192 institutions; universities spent much time preparing their submissions and the task of evaluating the outputs (each academic could submit up to four publications for the period under review) tied up the members of nearly seventy panels for six months. Research assessment undoubtedly concentrated research (and stimulated a transfer market for leading researchers), but even though there were attempts to assess the utility as well as the quality of research, the comprehensive nature of the exercise did little for selectivity. It also produced an increased separation of research from teaching, yet the dire consequences of failure spilled over into the loss of successful undergraduate programs.42
There was an element of peer assessment in the Australian system, especially in the determination of research grants, which in turn drove block funding through the Research Quantum and infrastructure support schemes, but the size of a grant was determined more by the cost than the quality of the project. Centres were used here to achieve concentration and selectivity, but all too often they produced a mismatch of government purpose and institutional response. The review of the Special Research and Key Centres regretted that many universities were ‘budgetarily shy’ of meeting their commitment to such centres and reported frequent difficulties in getting recompense for the centres’ teaching and supervision. Industry partners in Cooperative Research Centres were astonished by the reporting requirements of universities, their cumbrous decision-making and commercial naivety.43
Concentration and selectivity was a government policy to which universities accommodated themselves with varying levels of acceptance. It was a consequence of opening up research to the whole of the Unified National System, a counterweight to the spread of research effort. But in the absence of rigorous assessment of the quality of the research conducted in centres—and neither of the major reviews provided such evaluation—selectivity brought accusations of picking winners. Peter Karmel was a strong supporter of the clawback of funds from the universities for competitive allocation by research agencies, for he believed that ‘in research the strong, not the weak, should inherit the earth’. He was also a consistent critic of the imposition of poorly conceived priorities and special funding arrangements. ‘It is ironic’, he declared in 1994, ‘that a government committed to deregulation of the economy has increasingly treated higher education institutions as components of a system subordinate to political and bureaucratic priorities. It is equally ironic that institutions committed to their autonomy have accepted this.’44
The Unified National System thrust colleges into a highly competitive research environment while changing the arrangements for existing universities. College teachers were appointed for their professional expertise with duties that made very little provision for research. Universities expected their academics to conduct research and had discretionary funds to support it. Not all academics did so. A survey of academics in pre-1987 universities found that in the five years between 1985 and 1989 the median number of publications was five refereed journal articles and two conference papers. A fifth had failed to publish an article and a third had presented no conference paper. There was also a marked disparity in productivity: 14 per cent of the academics produced half the publications. It was hardly surprising that the output of the college sector was much lower. Staff in the larger institutes of technology produced one article and one conference paper over the five years, those in the other colleges just one conference paper; half of the CAE sector failed to trouble the scorer.45
There were also marked differences between disciplines. The same survey was restricted in its disciplinary coverage and found that natural sciences had the highest rates of publication, followed by engineering, the social sciences and law. These patterns corresponded to differences in orientation and opportunity (80 per cent of university chemists surveyed had won a research grant in the past two years, 27 per cent of lawyers) and indeed in the meaning of research. Ingrid Moses, who conducted the survey with Paul Ramsden, explored disciplinary practices using two dimensions: pure/applied and hard/soft. The distinction between pure and applied research was well established, that between hard and soft more contentious. Its proponents used it to assert the greater methodological rigour of science.46
Chemistry was a hard–pure field where the frontiers of advance were known and each finding typically built on previous ones using commonly agreed ways of conducting research to generate new discoveries or explanations. Research in hard–pure disciplines lent itself to clearly defined projects that could be conducted with the assistance of postgraduates; it was also competitive, the pace rapid and communication frequent, typically in short papers. Engineering was a hard–applied field, practical and pragmatic, again using standardised techniques to solve problems but directed more to outcomes than to academic communication. English literature was a soft–pure discipline characterised by task uncertainty, highly personal and subject to individual interpretation—and reinterpretation. The boundaries of problems were unclear, while the significance and relevance of research results were difficult to establish. Publications were longer (books being regarded as more important than journals), often delayed and with a higher rate of rejection. Finally, law was taken as a soft discipline straddling pure and applied research, its strong orientation to practical expertise suggestive of an applied orientation but with higher prestige and a greater degree of agreement on techniques of interpretation than, say, education or sociology. Moses then tested these characteristics against research patterns. Chemists and engineers secured most grants, the chemists published more often. Literary scholars geared research to their teaching and stood out for their enjoyment of writing. The lawyers attached importance to keeping up to date but were least involved in departmental seminars and scholarly networks, the most individualistic in their conduct of research.47
As Moses observed, the patterns in English and law bore some similarity to what the White Paper described as scholarship, staying abreast of the discipline without producing new knowledge. Scholarship had long been associated with the erudition and learning that were the hallmark of a proper university until the laboratory came to define its research mission. Since the direct funding of research favoured projects with clearly defined purposes and preferably useful outcomes, universities had to find ways of directing scholarship into the research mould. More than half the academic workforce was employed in non-scientific fields where the principal research requirement was time, but research agencies made very little provision for practitioners to buy relief from other duties. The former colleges, meanwhile, had to find ways of entering the research system.
In order to join the Unified National System institutions were required to submit a research plan as part of their educational profile. The plans submitted in the first two years typically gave little more than a general description of current activity, so in 1990 DEET issued guidelines (and forced the former colleges’ hand by making a research management plan a condition of any application for a Mechanism B infrastructure grant). The plan was expected to specify goals and objectives, identify strengths and indicate how they were to be supported, set out strategies for aligning research with government initiatives for the national benefit, explain the internal allocation of resources, the methods of evaluation and much else. Even with this exhaustive checklist, the Department remained critical of the shortcomings in the documents universities submitted. They displayed a reluctance to set priorities and direct resources to them, and offered little in the way of performance indicators or means of evaluating their strengths and weaknesses; ‘very few institutions indicated areas they wished to vacate’.48
Within the universities there was considerable scepticism. Frank Larkins, Deputy Vice-Chancellor (Research) at the University of Melbourne, has recalled doubts about whether DEET had the expertise even to evaluate the volume of information it requested, much less make use of it. Nevertheless, he concedes that the preparation of such detailed plans forced universities to change the ways they planned and managed research. A necessary first step was to compile the information DEET required, much of it new in format and necessitating a central research office to ensure a consistent and comprehensive return. Then, as the research management plans evolved after 1990 into policy documents setting out objectives, strategies and performance measures, universities centralised their planning. Larkins was one of a new breed of deputy and pro vice-chancellors who took charge of this activity, their enhanced importance recognised by the formation of a standing committee of the AVCC in 1990.49
Most came to the job with a strong research record, although few were able to maintain it. They usually assumed administrative responsibility for a central research office, which advised academics on sources of funding, massaged their grant applications, handled contracts and dealt with compliance requirements. They took advice on policy from a research committee, through which academics provided feedback. While line-management authority and control of budgets varied, the deputy vice-chancellors set the guidelines for the distribution of ARC Small Grants and postgraduate scholarships, as well as early career and seeding grants, and the allocation of infrastructure funds. This way of organising and integrating research was increasingly replicated at the faculty or divisional level with an associate dean for research, a research office and research committee.50
All universities competed for external funds and the prestige they conferred, but their strategies varied. The newer and less developed universities tended to a top-down approach, seeking to build strength in selected areas. The larger and stronger universities wished to maintain a broad research profile and preserved more of the peer-based disciplinary traditions through which they maintained their international reputation. Here the deputy vice-chancellor and their staff were more facilitators than directors of the research effort, with fewer sticks and carrots at their disposal.
All universities, however, made use of the government’s settings. Even though the Department told them not to apply the Research Quantum or composite research index in their internal allocations, they did. Every one of the research management plans submitted in 1990 made use of competitive funding, and a growing number used the research index’s formula for grants, publications and research higher degree completions to link external to internal funding. Hence the lament of a scientist at Monash who did not need an ARC Large Grant but felt an obligation to apply for one because success would increase his faculty’s income and enable others to pursue research: ‘it’s your duty to your science colleagues to get a grant even though you don’t particularly want the money’.51
Universities also adopted the government’s policy of concentrating research in centres. From an institutional perspective the centre possessed a number of advantages. In new universities with very limited resources, it was a device to initiate research. In established ones with an eye on the government’s support of Special Research and Cooperative Centres, it was a sprat to catch a mackerel. If disciplinary departments were regarded as unresponsive and inward-looking, the centre offered greater flexibility and malleability with more soft money and contract staff. It also allowed the university to shape priorities and encourage more interdisciplinary, purpose-oriented research. Centres grew with astonishing rapidity. A study conducted in 1993 found a felicitous 888 of them, two-thirds established in the past four years, and they employed nearly 7000 effective full-time staff. It estimated that they conducted half of all university-based research—although the attribution of publications to centres was often contentious.52
Centres differed greatly in size, from large purpose-built Cooperative Research Centres where no expenses were spared to tiny outfits that consisted of little more than a business card and a name on a door. There were 212 of them in the social sciences, with an average of six staff; 102 in medical and health sciences with an average of eighteen. Apart from the centres supported by the Commonwealth, some were formally recognised and centrally funded, others no more than an informal arrangement within a faculty or department. Centres were created in the expectation of attracting external support, so that after initial core funding they could become self-sustaining, but very few did. Long-term survival usually depended on earning additional university income through postgraduate teaching, but this came at the expense of the home faculty’s teaching load. Most centres owed their existence to an entrepreneurial academic; a centre could be a way of opting out of the travails of the department or an initiative to enhance its fortunes.53
The proliferation of centres attests to the pervasive influence of the maxim ‘concentration and selectivity’. It rested on the belief that resource concentration increased productivity, despite a substantial international literature that revealed no economies of scale beyond a certain point. The point of critical mass varied according to discipline. It was larger in natural sciences such as chemistry than in humanities disciplines such as English, but even in science a research team of around six members was usually optimal. Beyond that, a survey of Special Research Centres and ARC grant recipients suggested, cohesion declined and the tasks of administration absorbed too much energy. Like-minded researchers working on the same problem were better served by interaction through email than by an agglomeration of teams pursuing different projects under the same roof.54
One of the consequences of the policies adopted in the 1990s was a homogenisation that ignored the different needs and practices of academic disciplines. It was initiated by the government’s reward system and taken up by universities and researchers in the competition between and within institutions. The disciplines that diverged most markedly from the new understanding of research suffered the greatest disadvantage. The publication patterns in law, for example, were quite discordant with the publications index; the creative activity of visual and performing arts found no recognition in the norms governing competitive grants. Those working in more academic disciplines, such as the humanities, were able to adapt more readily than those oriented to professional practice, such as architecture.
Competition between institutions was the most powerful force for compliance. The leading research universities began with an advantage they were determined to maintain. We have seen already that they won two-thirds of the competitive grant funds throughout the period, and they were equally advantaged when the composite research index introduced other measures. In 1994 the University of Melbourne had a weighted publication count of 4843, while that of Ballarat was 116, Southern Cross University 215 and Swinburne 225. On a per capita basis the disparities were just as wide. Melbourne had $32 100 of research income per academic staff member, Edith Cowan $1100, the University of Southern Queensland $2400 and Deakin University $2800. In 1996, 24 per cent of the income of the Universities of Adelaide and Queensland came as research income. It was 1 per cent at the Australian Catholic University, 2 per cent at each of Charles Sturt and Edith Cowan. But no institution could afford to withdraw from this lopsided contest.55
Postgraduate studies were the fastest growing branch of higher education. Much of the growth during the 1980s was in coursework programs with a professional orientation. The other path, the original one and still the most popular, was a higher degree awarded on the basis of original research in an academic discipline. Paradoxically, this form of advanced study was described as research training—a training that prepared the candidate for an academic career.
The Green and White Papers accepted the distinction but foreshadowed substantial change. Although opportunities in the academic labour market had declined during the 1980s, 61 per cent of all doctoral and masters students were still pursuing traditional research degrees. These had once been regarded as a good preparation for other careers but were now considered too narrow and specialised for the ‘broad analysis needed in rapidly changing social, technological and economic circumstances’. In the absence of an advanced coursework component, it was thought doubtful that they provided an adequate training even for a research career. Too many of these students worked in isolation. Not enough were in fields of high demand such as accounting, computer science and electronic engineering. As with research, there needed to be greater concentration and selectivity, and even though research training would no longer be confined to the pre-1987 universities, the government looked to the ARC for advice on rationalising provision. There was to be increased mobility so that postgraduates could gravitate to the programs best able to offer a full training in their field and cooperative arrangements between universities to assist such congregation. In view of the Green Paper’s emphasis on competition, its expectation that ‘specialisation at the postgraduate level should be easier to achieve than at the undergraduate level’ was remarkably naive.56
Two ARC reports in 1989 charted the future course. One, a general review of graduate studies and higher degrees, clarified the degree structure of the universities and colleges (there were discrepancies in terminology, entrance requirements and the volume of work required) and recommended that load be transferred from the undergraduate to the postgraduate level. It wanted attention to more timely completion of research higher degrees, raised concern over the low enrolments in areas of national priority and encouraged institutions to introduce professional doctorates as an alternative to the generic, thesis-only doctor of philosophy.57
The other report, on postgraduate scholarships, looked more closely at research training. It started with current provision. There were 15 289 higher degree enrolments in 1988, of which 12 586 were local students; 5826 of these were studying on a full-time basis and eligible for a Commonwealth scholarship. In the same year the Commonwealth awarded 725 new research scholarships to university candidates and made a further thirty college awards. The scholarships came with a living allowance, or stipend, and were tenable for four years (two for masters candidates); 2284 of all full-time higher degree candidates (39 per cent) currently held one. They were distributed to universities on the basis of the number and quality of their higher degree enrolments; the universities in turn allocated them to their strongest applicants. Science and Arts faculties won the largest share; history was exceeded only by chemistry and physics. Males outnumbered females by three to two.58
In addition, universities offered their own scholarships to candidates who missed out on a Commonwealth award, which nearly always required a first-class honours undergraduate result. A worrying finding was that these university scholars, who mostly achieved only second-class honours, had a significantly higher completion rate than those on Commonwealth scholarships. The reason was unclear—perhaps universities were simply more careful with their own money. There were 1605 such university-funded scholarships in 1988, but that number was not likely to survive the clawback.59
The review committee urged an increase in the number of Commonwealth research scholarships and an improved stipend. It also wanted the scholarships to be removed from the Student Assistance Act for administration by the universities as part of their research activity, with provision for additional scholarships from ARC grants and centre funds. This would encourage concentration, although the report had little to say about the White Paper’s expectation of greater mobility or the cooperation between institutions that it hoped would build stronger programs. To attract more research training in areas of national importance, the committee proposed an augmented stipend and also a pilot scheme of research awards with an industry partner. As a remedy for slow completion, it recommended the maximum tenure of a doctoral award be cut back from four to three and a half years. Apart from providing a smaller increase in the value of awards than the committee wanted, the government accepted all these recommendations.60
The changes sought to strike a balance between research training to assist in the tasks of national reconstruction and research training to meet the needs of the academic workforce. Both ARC reports cited a recent study by the Victorian Post-Secondary Education Commission on the academic labour supply, which predicted that the supply of higher degree graduates would fall between 8000 and 12 000 short of the number needed to fill staffing vacancies over the following five years. This caused such concern in Canberra that DEET commissioned a further study by Richard Blandy’s National Institute of Labour Studies. It projected an annual shortfall of at least 1300 academics throughout the 1990s. Here was a powerful inducement to expand the number of higher degree places.61
As Peter Karmel often warned, labour market predictions were notoriously unreliable. This one turned out to be wildly mistaken. It rested on three false assumptions. The first was that the PhD would be a necessary qualification for academic employment. That was becoming the case in some fields, but even in 1996 it was not in professional disciplines such as architecture, business studies and law. Former colleges encouraged their staff to acquire research qualifications, but only a third had doctorates in 1996. The second was that staff numbers would increase in line with enrolments. In fact there was a marked deterioration in the student–staff ratio and little growth in the academic profession. And the third was that a substantial proportion of research-qualified graduates would be lost to better-paid employment in the private and public sectors. The economic downturn of the early 1990s reduced such opportunities, so more looked for academic appointments. There was no shortage.62
That did not become apparent before the government embarked on a substantial expansion of research training. It allowed universities to increase their research higher degree enrolments and extended the provision of HECS-exempt scholarships, which enabled these students to pursue their candidacies free of charge. The number of new postgraduate awards was lifted to 900 in 1990, 1000 in 1991 and 1200 in 1992, with an additional 100 industry awards. But since full-time enrolments surged from 5826 in 1988 to 8814 by 1992, the competition grew more intense. A new review in 1992 recommended a further increase in the number of awards and also proposed that completions be included in the formula for allocating them to universities. The government responded by lifting the number of new awards to 1375 in 1993 and tweaking the formula. From 1995 it would comprise load (40 per cent), completions (20 per cent) and the new composite research index used to distribute the Research Quantum (40 per cent).63
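Expressed schematically, and on the assumption that each component was measured as a university’s share of the national total (the policy specified only the weights), the 1995 allocation for university $i$ might be written as

\[ A_i = T \left( 0.40\,L_i + 0.20\,C_i + 0.40\,Q_i \right) \]

where $T$ is the pool of new awards, $L_i$ the university’s share of research higher degree load, $C_i$ its share of completions and $Q_i$ its share of the composite research index used for the Research Quantum.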
The original formula had used the number of research enrolments, moderated as a quality measure by the proportion of award holders with first-class honours, but by the 1990s almost all award holders had a first and the measure no longer discriminated. This was a method of distribution that advantaged the stronger universities, and in 1988 the eight leading ones (Adelaide, ANU, Melbourne, Monash, New South Wales, Queensland, Sydney and Western Australia) secured 540 of the 725 awards. The changes to the formula made little difference: the same eight universities won 834 of the 1200 awards in 1992 and 973 of the 1550 in 1996. The fiercest competition was between other pre-1987 universities and thrusting new members of the Unified National System. By 1996 RMIT (34 awards) had passed Deakin (20), as Curtin (31) had Murdoch (26); the Queensland University of Technology (25) was closing in on Griffith (27), the University of Technology, Sydney (21) on New England (28).64
The former colleges were perhaps the principal beneficiaries of the misreading of the academic labour market. With the assistance of a Commonwealth Staff Development Fund, they enrolled many of their teachers in higher degrees through which they obtained the qualifications to supervise their own candidates. The adoption of research training in the new universities was uneven. In 1990 Curtin had 370 higher degree research enrolments and RMIT 290, but there were just 32 at Central Queensland University, 6 at Charles Sturt and 3 at the Australian Catholic University. Thereafter the numbers grew rapidly, at an annual rate of 16.3 per cent against 13 per cent in the leading universities and 14.6 per cent in the other pre-1987 ones. Even so, this growth came from a very low base. In 1996 research students made up 12 per cent of the enrolment at ANU, 11 per cent at Adelaide and Sydney. Of the newcomers, RMIT did best with 5 per cent of its enrolment; most were well below that proportion.65
The policies the former colleges used in building up research training were tailored to their research strategies, concentrating the activity in niche areas. The older universities, on the other hand, offered research higher degrees in all their disciplines. They were better placed to do so, of course, since they had more research funds and more awards to distribute. Consequently there was limited change in the fields of study. Professional disciplines such as business, education and law increased their enrolment, but the humanities and social sciences held onto a much larger portion and eclipsed science as the largest category.66
If this militated against the government’s policy of concentration and selectivity, it served the goal of mobility. The ARC’s 1992 working party on postgraduate support and student mobility found that some 33 per cent of higher degree research students moved to another institution, and estimated that another 10 per cent moved at the undergraduate level to undertake their honours year. Given the concentration of the population in a small number of seaboard cities separated by long distances, this was taken as satisfactory.67 Most moved to a leading research university, drawn by its allure as much as by its expertise, and a subsequent study noted that these universities recruited more of their own to academic posts. A startling 43.3 per cent of those working at Melbourne, and 36.3 per cent at Sydney and Queensland, acquired their highest qualification in the same institution, against a sector average of 22.7 per cent. Such inbreeding was partly an effect of longevity, exacerbated by the reduction of overseas recruitment during the 1980s as Australian salaries stagnated, and it was noted that two-thirds of Cambridge academics had Cambridge degrees. But it was hardly a sign of vitality.68
Of greater immediate concern was the low and slow completion rate. In 1988, when enrolments numbered 15 289, there were just 2274 completions. By 1996, when enrolments reached 33 131, the 4730 completions had barely kept pace. A lag was to be expected in a period of rapid growth, and more candidates were choosing the PhD in preference to a masters, but a series of cohort studies found little sign of improvement. The completion rate of those who commenced doctorates in 1983 was 54 per cent, with an average time of five years. This improved to 64 per cent for a 1990 cohort, but a subsequent study of those who enrolled at the beginning of 1992 found that only 53 per cent had finished their degree by 1999; 18 per cent were still on the books, and the rest had dropped out.69
Much attention was paid to improving practices, especially after research training was included in the third round of Quality Assurance and completions were made part of the Research Quantum. Universities were expected to strengthen selection procedures and induction programs, arrange more regular meetings with supervisors, issue clearer guidelines for the provision of space, facilities, and conference and fieldwork funds, and improve the monitoring of progress and career support.70 In 1990 ANU created a school of graduate studies, which complemented departmental supervision with support services, workshops, seminars and opportunities for professional development. Other universities followed suit, and soon all of them had a Dean of Graduate Studies overseeing policies and practices.
Higher degree research had long been a form of disciplinary apprenticeship and a rite of passage to the academic vocation. At its best it served that purpose well, but the close personal relationship between supervisor and candidate was too variable in its effectiveness to satisfy the new expectations of accountability. A much larger postgraduate body from a greater variety of backgrounds brought different purposes and expectations. Supervision panels and regular progress review meetings were instituted, practices standardised and monitored across the university. Candidates who had once been left to sink or swim were now drawn into the system of research management and trained in its requirements as well as the production of knowledge.
A new word entered the research lexicon during the 1980s, or rather an old word formed from an ancient language: innovation. Literally it meant a novel practice or the act of changing what was established; in the burgeoning literature on innovation it referred to the process whereby new technologies, processes, products and services were taken up and diffused. Joseph Schumpeter, the economist who regarded constant change (he called it creative destruction) as the driving force of capitalism, distinguished between invention and innovation. Only some inventions were brought to market. So, too, many research discoveries with the potential to increase productivity or improve people’s lives remained dormant in academic publications for want of arrangements to translate them into practice. An innovation system was needed to stir research into profitable activity.71
Innovation extended beyond research and development to design, production, marketing and distribution, and involved not just researchers and firms but also networks through which knowledge flowed between market and non-market institutions. It became common to talk of a national innovation system that embraced public and private research, a country’s economic structure and the sources of venture capital, its stock of advanced skills, the institutional rules that governed intellectual property and the conventions that determined the production of knowledge.
The distinctive characteristics of the Australian innovation system at the end of the 1980s were described aptly as ‘a low level of science and technology expenditure, a high level of government involvement in financing and undertaking research, a low level of private sector research and development and an exceptionally high dependence on foreign technology’. Australia was not a major contributor to world research and development, and many of its research-intensive industries were dominated by foreign ownership. Even though the emphasis on innovation systems promulgated by the OECD during the following decade stressed the importance of distinct national paths, the tendency here was to imitate the United States with its exploitation of advanced technologies. Hence the Australian government promoted more private research and development and directed more of the public research into applied science practised closer to product markets. From 1988 the CSIRO was directed to obtain 30 per cent of its income from external earnings, and it was recommended in 1991 that universities should raise 5 per cent of their research income from industry. The Cooperative Research Centre scheme gave them every incentive to do so.72
The weakness of this approach was the assumption that an increased supply of applied research would create a demand for it; to put it differently, there was too much technology push and not enough market pull. Early attempts to develop a national innovation system paid insufficient attention to the willingness or capacity of Australian industry to capture the benefits of Australian research; indeed, the emphasis on a national innovation system ignored the increasingly globalised nature of the knowledge economy.73 Australian researchers were good at finding new medicines, but the daunting cost of taking a discovery from patent through proof of concept, clinical trials, development of a business model and finally into production meant that very few of these medicines remained in the hands of the local pharmaceutical industry. At the other end of the scale, the Cooperative Research Centre scheme made little provision for small enterprises or start-up companies to exploit disruptive technologies. Of all Australian vice-chancellors, Sir Bruce Williams, a former adviser to the OECD and the United Kingdom’s Ministry of Technology as well as a member of the Reserve Bank of Australia, had the greatest knowledge of technological innovation. His concern with the policy adopted in the 1990s was that because government could exert greater leverage on academic research than on business sector research and development, there was a danger that universities would be propelled into ‘types of research that they cannot do very well’.74
Some Australian universities had already created commercial arms that undertook commissioned research. The University of New South Wales was the first, in 1959, under Sir Philip Baxter, who had worked for ICI in England before becoming Vice-Chancellor. Institutes of technology followed his example, and then other universities. These commercial entities were organised as companies, and by 1990 they had a turnover of $150 million, made up principally of income from consultancy services and tailored non-award courses; the sale of intellectual property remained a small part of their business. They relied heavily on the initiative or responsiveness of academics, provided them with professional indemnity and offered financial and accountancy services.75
The increased direction of research brought commercialisation into the very heart of the university. It now included collaborative as well as contract research, spin-off companies and joint ventures as well as licensing arrangements and the assignment of intellectual property. The example of the industrial park established by Stanford University in what became Silicon Valley stimulated a number of universities to create their own research and technology parks, with mixed results.76 These developments brought new pressures. The ownership of intellectual property absorbed universities in protracted arguments over the division of spoils, and confidentiality requirements strained expectations of peer review. There were some striking success stories. Cochlear Limited, a company that holds two-thirds of the world market for hearing implants, began from the research conducted by Graeme Clark while he was professor of otolaryngology at the University of Melbourne, with support from the ARGC and NHMRC. But, as Bruce Williams predicted, universities were not good at capturing the benefits of their discoveries. Even in 2000 returns from commercialisation made up less than 2 per cent of their expenditure on research and development.77
Should they have worked more assiduously to increase returns? In 1991 two American specialists in the study of higher education who came to Australia as Fulbright scholars were struck by the way universities here had taken on the attributes of academic capitalism. By ‘academic capitalism’ Sheila Slaughter and Larry Leslie meant the market behaviour and associated commercial values that institutions and those who worked in them adopted as their core funding declined. The two Americans began their examination of the changes to higher education in four countries in the expectation that the United States had gone furthest, Canada still relied overwhelmingly on public funding and Australia and the United Kingdom lay somewhere between. They soon formed the impression that ‘Australian universities might well be ahead of US universities in implementing market mechanisms’. They tested this hypothesis by examining the research strategies employed at several universities here, especially in research centres, and found that a modest proportion of income from grants and contracts had brought a pervasive entrepreneurial orientation. Had they extended their examination to the recruitment of international students, and the ways universities recast their management around this form of revenue, they might well have observed an even more powerful manifestation of Australia’s embrace of academic capitalism.78
Slaughter and Leslie observed that universities are both profit maximisers and prestige maximisers. Money enabled them to pursue research, whereas prestige accrued from the competitive grants and high-status publications that were traditionally oriented to basic research. They found evidence of discord between these objectives. An accountant working for a university research centre complained: ‘The faculty aren’t trying to make money as they should be. They’re simply striving for excellence.’ The head of a university research company told them his business clients wanted to protect information, the academics to publish it since that was ‘the whole basis of promotion’. Slaughter and Leslie could see a substantial reorientation as researchers became more adept at commercialisation. They still talked of research to advance knowledge and benefit humanity but accepted that it had to pay its way.79
These findings relied heavily on the authors’ concentration on the research sector, where government influence was greater than on teaching, and on what they called ‘techno-science’, where the effects of government policy were most marked. The investigation was also conducted in the early 1990s, when the disruptive effects of changes to research funding were still fresh. Between 1988 and 1996 higher education expenditure on research and development increased from 0.31 to 0.43 per cent of Gross Domestic Product, reaching $2.3 billion in current prices. The Commonwealth still provided 88 per cent of these funds in 1996, with just 5 per cent coming from business. The imputed portion of the operating grant remained the principal source; hence the common complaint that teaching was subsidising research. Direct support through competitive schemes provided just 18 per cent, and the Research Quantum and other infrastructure funds another 10 per cent.80
The kinds of research changed little. Universities spent 38 per cent of their research funds on pure basic research in 1988 and 34 per cent in 1996. Strategic basic research increased from 24 to 25 per cent and applied research from 31 to 35 per cent, but experimental development declined from 7 to 6 per cent. Universities contributed 58.4 per cent of the country’s basic research at the beginning of the period and 60.2 per cent at its end; their share of applied research and development rose slightly from 13.0 to 14.4 per cent.81
The pattern of expenditure does not reveal any consistent shift towards the fields of research identified by national innovation strategies. Medical and health sciences increased their share between 1988 and 1996 from 17 to 21 per cent, and natural science and technology from 26 to 27 per cent, but biological sciences slipped from 14 to 12 per cent and engineering from 14 to 13 per cent. Meanwhile the humanities fell from 12 to 8 per cent and the social sciences rose from 18 to 19 per cent.82 These statistics are affected by the attribution of part of the operating grant, and are therefore partly an artefact of changes in student enrolments. They serve as a reminder, nevertheless, of the persistence of the research endeavour and the limited capacity of government to restrict it.
Of all forms of academic activity, research is particularly resistant to external direction. It can be encouraged, directed and rewarded, but there is no way of predicting what an investigation will discover and no substitute for the curiosity, intelligence and resourcefulness of the investigator. In allowing all members of the Unified National System to be research universities, the government expanded the number of investigators. It also increased the pool of money to support their projects, but the first rule of research is that no matter how much money is provided, there is never enough. The money was therefore rationed through competitive schemes and directed by means of funding formulae, priorities and centres. Universities in turn adopted the same procedures.
None of them worked as intended. The expectation of universities that all staff should engage in research nullified the government’s tenuous distinction between research and scholarship. Its priorities were ill conceived; few centres fulfilled the extravagant claims of their founders; commercialisation proved to be fool’s gold. Perhaps the greatest weakness was the failure to reconsider these ways of directing research. When they failed to achieve the desired result, the response was to strengthen the competition, revise the formulae and increase the selectivity and concentration. In few fields of research were rigour and creativity in such short supply as in research policy.