CHAPTER 7

Finance

A consequence of the abolition of fees in 1973 was that universities and colleges became dependent on a single source of support, for the states as well as the students were relieved of any financial obligation. Such was the largesse of the Whitlam government that few foresaw the consequences, which quickly became apparent with the onset of the steady state. Commonwealth funding fell in real terms during the following decade, from $2185 million in 1976–77 to $2093 million in 1986–87, while student load rose from 235 500 in 1976 to 290 000 in 1986.1 Other sources of income provided little relief. Research grants and contracts were tied to specific projects, just as donations and earnings from trust funds were restricted to designated purposes. Institutions were able to charge fees for non-award courses and other services, but were not permitted to do so for students enrolled for a diploma or degree. They relied on the Commonwealth for 85 per cent of their revenue.2

Until 1988 the Commonwealth Tertiary Education Commission (CTEC) allocated this money on a triennial basis in separate components for operating expenses, equipment, minor works and new buildings. Each component had to be spent accordingly, and permission was needed to carry over unexpended funds from one year to the next—financial stringency intensified this close supervision of expenditure. The Unified National System introduced a rolling triennial system, which annually determined grants for the following three years and thus allowed better forward planning. The operating, equipment and minor works components were combined into a single, one-line operating grant, and funding for capital works was rolled in as well from 1994. There was now a separate stream of government funding for research—which included research grants, fellowships, centres and Commonwealth postgraduate scholarships—although it had to be won on a competitive basis.

The Green and White Papers projected an increase in the annual number of graduates from 88 000 in 1988 to 125 000 at the turn of the century, or a little more than 40 per cent. That required a substantial growth in enrolments as well as an expansion of facilities, and the Higher Education Contribution Scheme (HECS) was devised to offset an increased government outlay. Since the colleges were to be drawn into a Unified National System, it was also necessary to devise a new method of determining grants that would enable them to take on their expanded role. The economist Ross Williams, who was involved in designing this reallocation of Commonwealth support, has remarked on the speed with which the two far-reaching changes to funding arrangements were made. HECS was introduced from the beginning of 1989, and a new allocation mechanism was determined in August 1990.3

Yet HECS provided little assistance with the initial enlargement of higher education since most students opted to defer repayment. Two new sources of university income, fees from international students and domestic postgraduates, would become increasingly fruitful, but not in the early years. The continued reliance on Commonwealth outlays during the establishment of the Unified National System in turn hampered the redistribution of funding. Augmenting support for the colleges to bring them up to the level of the universities would have been much less fractious than giving to one at the expense of the other. With so little extra to distribute, the formula for apportioning institutional grants was hedged with qualifications that satisfied neither the established universities nor the new ones. Policy changes of such magnitude typically have a difficult transition. The combination of ambitious growth targets and financial stringency increased the difficulty of this one.

Funding expansion

Towards the end of 1988, the Minister’s funding statement for the 1989–91 triennium announced the creation of 40 000 new places; in the following year he went further and said there would be 63 000 additional students by 1992. This figure proved far too low. From 329 000 EFTSU in 1988, enrolments surged to more than 440 000 EFTSU in 1992, an increase of 34 per cent. Commonwealth outlays rose during the same period from $2009 million in 1988–89 to $2534 million in 1992–93, or 26 per cent.4

After the prolonged impasse of the steady state, this was a remarkable achievement, although the magnitude of the enrolment growth surprised the government. There was substantial over-enrolment in 1988, but the White Paper’s demographic projections anticipated a fall in the 17–19 age group after 1990 and allowed for neither the rapid rise in Year 12 completions nor the growth of non-school-leaver demand. The result was a much greater over-enrolment of 23 000 students in 1991, which took some time to pass through the system and cramped intakes in the following years. Growth slowed after 1992 to reach 491 000 by 1996 as the government concentrated on consolidating an expansion that had far surpassed its intentions—the goal of 125 000 graduates each year by the turn of the century was achieved in 1993.

The strain on the sector during the early years of the Unified National System was intense. The Minister’s funding statement for the 1989–91 triennium projected an increase of 15 per cent in enrolments and 15 per cent in resources. But not all of the additional income would find its way to the operating grant since new buildings were needed to accommodate the larger number of students and make good the backlog of facilities. CTEC had cut capital works to the bone during the steady state, so that by the mid-1980s they made up less than 2 per cent of government funding. Now they were to rise from 3 per cent in 1988 to 6 per cent by 1991. The ‘clawback’ from universities to the Australian Research Council (ARC) would reach 2.3 per cent of total operating grants by 1991, and another 1 per cent was withheld for the Priorities Reserve Fund. The effect of these allocations was to reduce the operational grant per student by 8 per cent in real terms. This had fallen since 1983 from $10 390 per EFTSU to $9766 by 1988 (in 1990 prices); in 1991 it fell further to $9167.5

The over-enrolment of students in that year brought the problem to a head. It was not uncommon for institutions to exceed their funded load and then bargain with Canberra for additional assistance, but in 1991 they took in far more students than they could accommodate. A third of the respondents to the academic unions’ survey of members said they had abandoned small-group classes, and half claimed to have reduced student assessment tasks in order to cope with the increased numbers. After media reports drew attention to these and other economies, a heated argument arose over who was to blame. Dawkins said the universities had made too many offers, while the Australian Vice-Chancellors’ Committee (AVCC) insisted that the circumstances were beyond their control—and following a trenchant press release that condemned the government’s obduracy, the head of the higher education division of the Department of Employment, Education and Training (DEET) temporarily forbade his officers to deal with the AVCC.6

Nearly all institutions were over-enrolled, some by as much as 23 per cent, and nationally the figure was 6.6 per cent. Part of the excess was caused by improved retention rates and a higher than expected number of re-enrolments, which occurred after offers had been made to commencing students (universities would need to bring forward their re-enrolment timetables). Most was the result of a higher than expected acceptance rate of offers made to new students as opportunities for employment shrank with the onset of an economic recession. The government provided no assistance with these extra students, but did allow that, if in future a university anticipated that re-enrolments would cause a significant excess in the load target, it could reduce its commencing load by 5 per cent.7

Beyond that the government would not go, since the higher than expected number of students in secondary education staying on to Year 12 made it necessary to preserve room for school-leavers. As the pipeline of over-enrolments in 1991 flowed on to subsequent years, universities had to develop far more efficient management systems to balance their overall load and commencing load targets. In 1993 an additional postgraduate load target was introduced, and in the following year the commencing load target was replaced by a school-leaver intake target—an indication of the government’s concern with the drop in the number of these new students from more than 72 300 in 1991 to 65 600 in the following year.8

After 1992 the expansion slowed: student load rose over the following four years by less than 50 000 EFTSU, to 491 000 in 1996. The slowdown was justified by a fall in the level of unmet demand for higher education, which peaked at more than 34 000 in 1992 after over-enrolment reduced the number of new places, and was down to 21 000 by 1994. A joint investigation by DEET and the Higher Education Council (HEC) in that year judged that the backlog of demand had been met and that there was little need for a substantial number of additional places ‘in the medium term’. Nor was the funding needed to create them likely to be provided, for with the onset of recession in the early 1990s the Commonwealth budget came under intense strain. A record deficit of $18 billion (4.1 per cent of GDP) was incurred in 1992–93, and the Keating government ran substantial deficits for the remainder of its term of office.9

A modest growth in Commonwealth outlays on higher education was made possible by the increase in HECS repayments, which provided $933 million in 1996, when the operating grant was $4361 million. This meant that the operating grant per EFTSU improved to $9719, or almost back to the 1988 level, but still well short of that earlier in the 1980s. Income from fee-paying international students and postgraduates brought in $620 million by 1996, although the full cost of their tuition, services and overheads was still not well understood, and at this stage they probably made only a marginal improvement to most institutions’ dollars per EFTSU.10

The other problem looming over the sector in 1996 was a salary claim. The unions had been negotiating since late 1994 for an increase to make up for inflation; 2 per cent was to be found from productivity improvements, and the additional 5.6 per cent they sought would require additional funding. The government had funded previous wage increases, but this time offered only a loan. Both the National Tertiary Education Union (NTEU) and the vice-chancellors rejected that arrangement, and the offer disappeared when the Coalition was elected to office in March 1996.11 That was only one of the blows inflicted by the new government; others included a deep cut in government funding and an increase in student charges. A common measure of the state of higher education was the ratio of students to staff. It was 11.7 in 1983 and rose during the mid-1980s to 12.9 in 1988. The ratio reached 15.3 by 1992 and then stabilised during the last years of the Labor government before deteriorating to 18.7 by the end of the century.12

CTEC had eked out a fixed quantity of government funding during the steady state; the Unified National System went for broke. Two hundred major building projects were undertaken in the first three years, a larger research capacity was created and new students flocked to campuses. The speed of the transformation strained both the provision and the management of resources, and by 1991 there were clear signs of overreach. Thereafter a period of stabilisation allowed the system to proceed according to its own momentum until a new government once more threw it into confusion.

The allocation of funds

Until 1988 there were separate funding systems for universities and colleges in recognition of their different roles. The operating, equipment, minor works and capital components of the Commonwealth grants to universities supported both teaching and research. In determining the operating grant, which was by far the largest component, CTEC used a formula devised back in the 1970s by Peter Karmel to measure the relative costs of different disciplines. Medicine, dentistry and veterinary science made more intensive demands on staff and incurred much greater costs for facilities, equipment and consumables than did disciplines in the humanities and social sciences, while science and engineering lay somewhere in between. But CTEC was not bound by this formula. It made adjustment for the higher fixed costs of smaller universities, the particular needs of newer and regional ones, and other relevant circumstances. Nor did it publish its relative funding formula, partly because it was not applied precisely and partly because Karmel judged that publication would have reduced the flexibility of universities in their internal allocation of resources.13

The funding of colleges worked differently. They were not supported for research, and their operating grant was calculated on the assumption that staff devoted themselves to teaching only, in contrast to that of the universities, which imputed about a third of staff time to research activity. (This included sabbatical leave, whereby academics were freed from teaching duties every seventh year to pursue their own studies.) In the CAE sector, moreover, there was much greater reliance on the state coordinating authorities for distributing funds, so that CTEC lacked the detailed information on loads and costs that would have allowed it to use a funding formula. In any case, the disciplinary categories of the colleges differed from those of the universities.14

When the two sectors were brought together into a Unified National System it was therefore necessary to devise a consistent method of funding them. Peter Karmel provided some preliminary advice in his April 1988 report on educational profiles. He estimated that the colleges were underfunded by 20 per cent against universities for a comparable mix of disciplines. As a third of university academics’ time was supposed to be spent on research, this implied that the college staff already had some capacity to take on research. For them to do so, however, it would be necessary to build up colleges’ research infrastructure, and here Karmel warned against an abrupt reallocation of funds at the expense of established universities. Their funding advantage supported better libraries, computing and other services as well as concentrations of staff with established research records, and it would make no sense to destroy these strengths. A far better method would be to transfer a portion of the universities’ operating grant to the ARC and supplement research grants won by the colleges with an additional amount so that they could build up their own infrastructure. This was the basis of the ‘clawback’, to be examined in a subsequent chapter.15

In determining the first round of funding agreements at the end of 1988, DEET used a modified version of the formula Karmel developed in the 1970s. It was a hasty exercise based on incomplete data and yielded a more compressed range of teaching costs than the universities believed appropriate: humanities, law, economics and business studies were weighted at a base rate of 1; science was weighted at 1.797 and medicine at 2.051.16 Since the limitations of these misleadingly precise calculations were evident, a joint working party of the Higher Education Council and the Department commissioned three independent studies of relative teaching costs. Each analysed the direct and indirect costs of comparable courses taught at a selection of universities and colleges, using different methodologies but producing broadly similar results. They were discussed at a national forum in November 1989 and followed by the working party’s report in the following year.

Guided also by funding models already used in the United Kingdom and New Zealand, the working party recommended five clusters of disciplines at the undergraduate level ranging from 1 for humanities, law and economics to 1.25 for behavioural science and education, 1.45 for languages, 2.2 for science and engineering and 2.95 for medicine, dentistry, agriculture and veterinary science. Postgraduate coursework had just two clusters, weighted 1.45 and 3.35, and research higher degrees also had two, weighted 2 and 5.2. Peter Baldwin thought these relativities too great, so the highest undergraduate cluster was reduced to 2.7 and the higher rate for research higher degrees came down to 4.7.17
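The arithmetic of the model is simple in principle: an institution’s student load in each cluster is multiplied by the cluster weight, and the teaching component of its grant follows the weighted total. The sketch below illustrates that calculation with the undergraduate weights quoted above (after Baldwin’s adjustment); the load profile and the dollar rate per weighted EFTSU are hypothetical figures chosen only to show the mechanics, not historical values.

```python
# Illustrative sketch of the relative funding model's teaching component.
# The cluster weights are those cited in the text (after Baldwin's adjustment);
# the load profile and the dollar rate per weighted EFTSU are hypothetical.

UNDERGRAD_WEIGHTS = {
    "humanities_law_economics": 1.0,            # cluster 1
    "behavioural_science_education": 1.25,      # cluster 2
    "languages_visual_performing_arts": 1.45,   # cluster 3
    "science_engineering": 2.2,                 # cluster 4
    "medicine_dentistry_agriculture_vet": 2.7,  # cluster 5
}

# Hypothetical undergraduate load (EFTSU) for a mid-sized institution.
load = {
    "humanities_law_economics": 4000,
    "behavioural_science_education": 1500,
    "languages_visual_performing_arts": 500,
    "science_engineering": 2500,
    "medicine_dentistry_agriculture_vet": 300,
}

weighted_load = sum(UNDERGRAD_WEIGHTS[c] * eftsu for c, eftsu in load.items())

rate_per_weighted_eftsu = 6500  # dollars; illustrative, not a historical figure

teaching_allocation = weighted_load * rate_per_weighted_eftsu
print(f"Weighted load: {weighted_load:,.0f} EFTSU")
print(f"Notional teaching allocation: ${teaching_allocation:,.0f}")
```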

The older universities pressed hard for higher funding of the more expensive disciplines that they alone taught and also wanted a greater weight for the final honours year of their Science and Arts courses, where advanced seminars and research projects made heavy demands on staff time. They argued at length that the funding model should be based on the ‘real’ cost of teaching a discipline (by which they meant what was needed to do it properly) and not on the existing pattern of expenditure. To this the government replied that since the funding was to be allocated within ‘existing resource levels’, it would have to use ‘historical’ rather than ‘ideal’ costs. The effect was to entrench a set of relativities derived from past practice that stifled innovation.18

The former colleges, on the other hand, believed the spread of weights was excessive; Graeme McCulloch, secretary of the college staff union, alleged that the relative funding formula had been ‘cooked up in consultation with vice-chancellors from the ivy league universities’. The heads of the former institutes of technology were particularly aggrieved by the limited scope of the relative funding exercise. While there was to be a common rate for teaching, no allowance was made for the advantages pre-1987 universities had accrued through their previous years of favoured treatment. The teaching formula was not concerned with past disparities, and Dawkins made it clear that there would be no major transfer of resources across the binary divide. Nor would the government take into account the earnings of established universities from their endowments, for that would lead to ‘a dull grey uniformity’.19

A funding model based on teaching costs was poorly calibrated for the costs of research. The White Paper had signalled a shift to a more competitive system of research funding through the award of grants for specific projects, and to this end the clawback was to transfer part of the universities’ operating grant for distribution by the ARC. Its provision of money to meet the direct costs of a project attracted an additional sum for research infrastructure, but even with that assistance did not meet the full cost and made no provision for the maintenance of other research activity—a point made forcefully by the committee Dawkins appointed to review research policy at the end of 1988 in order to placate the vice-chancellors.20 Accordingly, the relative funding model incorporated a second component. In addition to the allocation of funds for teaching according to an institution’s weighted disciplinary load, there was an additional allocation—the Research Quantum—set initially at 6 per cent of the operational grant. Pending further development, the distribution of the Research Quantum was determined by an institution’s success in attracting national competitive grants from the ARC, the National Health and Medical Research Council (NHMRC) and other Commonwealth agencies. That it excluded funds from industry, the principal source of support for the colleges’ research, increased their chagrin.21
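In outline, the Research Quantum worked by setting aside a pool equal to 6 per cent of operating grants and sharing it in proportion to each institution’s success in winning national competitive grants. The sketch below illustrates that apportionment on those stated terms; the institutions and dollar figures are invented, and the actual determination drew on grants from the ARC, NHMRC and other Commonwealth agencies rather than the single measure used here.

```python
# Simplified apportionment of a Research Quantum pool: 6 per cent of operating
# grants, shared in proportion to national competitive grant income.
# All institutions and figures are hypothetical.

operating_grants = {"Uni A": 180e6, "Uni B": 120e6, "Uni C": 60e6}
competitive_grant_income = {"Uni A": 14e6, "Uni B": 4e6, "Uni C": 0.5e6}

pool = 0.06 * sum(operating_grants.values())          # the Research Quantum pool
total_income = sum(competitive_grant_income.values())

research_quantum = {
    uni: pool * income / total_income
    for uni, income in competitive_grant_income.items()
}

for uni, amount in research_quantum.items():
    print(f"{uni}: ${amount:,.0f}")   # institutions with little grant income receive little
```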

A comparison of institutional operating grants for 1990 with those produced by the relative funding model revealed substantial differences. According to the model, the new Northern Territory University was overfunded by 21 per cent, as was ANU by 17 per cent (even when the amount provided as a block grant to its Institute of Advanced Studies was set aside) and the smaller universities of Tasmania and Murdoch received 12 and 10 per cent more than the relative funding model yielded. The new universities of Southern Queensland and Central Queensland were 22 per cent and 19 per cent underfunded, and other former colleges were disadvantaged by smaller amounts.22

The government’s intention was to bring all institutions within a tolerance band of plus or minus 3 per cent over the following three or four years. This was to be done by assisting underfunded universities with an increased operating grant or a reduction of load and requiring overfunded ones to accept additional load or less funding, although allowance would be made for special circumstances such as size, location and regional role.23 Most overfunded institutions preferred to take on additional students rather than sacrifice income, and the $30 million available for adjustment packages fell far short of the underfunded ones’ requirements. Reallocation of load was hampered by the imbalance of provision between states with high participation and low population growth, such as Victoria, and others such as Queensland and Western Australia, where participation was low and the population growing strongly. In practice there was very little redistribution, and the government relied on growth of the system to make adjustments through the selective allocation of additional funded load.24

When John Dawkins opened a 1989 forum on the relative funding model, he warned that no formula could provide a precise measure of institutional circumstances; the best that could be expected was ‘a band of rough justice’. That would not have caused such agitation had this first calculation of institutional teaching costs been subject to further refinement, but it was decided from the outset that it would be both the first and last determination.25 The former colleges, at this time still teaching mainly undergraduate courses with low discipline weights, could not expect any subsequent recognition of their move into more expensive disciplines and postgraduate studies. They were no happier with the formula for distributing capital funds. New universities benefited during the first years of the Unified National System when the Minister decided which building projects should proceed, but the formula for rolling capital funds into operational grants simply apportioned the $262 million of capital funding for 1994 according to weighted discipline load. The AVCC’s decision to endorse this method of distribution by the narrowest of margins—just seventeen votes to sixteen—indicated the division of interest between its established and new members.26

It was recognised from the outset that the teaching component of the relative funding model gave only a rough approximation of actual costs. Since it assigned a large number of disciplines to just five clusters, there were bound to be differences within them. Visual and performing arts were more expensive than computing science within cluster 3, just as veterinary science was more expensive than medicine within cluster 5. The expectation was that such variations would not cause problems in large institutions teaching a large number of disciplines because the higher cost of some disciplines within each cluster would be compensated by the lower cost of others. The wide tolerance band of plus or minus 3 per cent in applying the model gave further flexibility. It followed that institutions would be ill advised to adopt just five levels in their own distribution of funds. Peter Baldwin explained when the model was announced that it was ‘designed for use at the system-wide level and does not provide a mechanism for the internal allocation of institutional resources’. That advice was repeated often over the following years as institutions disregarded it. At the end of 1992 Baldwin felt it necessary to warn that ‘the relative funding model is not a sophisticated model and was not designed for application at an individual faculty or departmental level’.27

Universities had always made allowances for different disciplinary costs. Some used a formula, most relied on custom. But as soon as DEET published the relative funding model, it quickly found its way into the deliberations of budget committees. Once it was possible to calculate a faculty’s ‘earnings’, the dean was able to argue it was entitled to receive a commensurate amount. It did not matter that this argument rested on a misapprehension—the formula was used to make a one-off adjustment of institutional grants and was quickly overlaid by further changes—for the availability of these price signals had a pervasive effect on institutional behaviour. University administrators spent much time during the adjustment period scrutinising DEET’s definitions of discipline codes so that they could wring every last dollar out of their teaching profile. Having done so, they were able to measure the notional income of faculties and departments, and adjust their budgets accordingly. Even when they did not follow the model, they described variations from it as ‘cross-subsidies’.

In early 1992 David Flint wrote to the AVCC in his capacity as convenor of the Committee of the Australian Law Deans to complain of the consequences of law having been designated a cluster 1 discipline. Several universities had already adopted this weighting, which he said made it impossible to provide an adequate legal education. Flint, Dean of Law at the University of Technology, Sydney, was a forceful advocate, but his protest was unavailing. Others followed him, equally indignant on behalf of the humanities, foreign-language studies and other disciplines that suffered funding cuts as a result of this reification of DEET’s expedient device.28

The Department could hardly absolve itself of responsibility for such behaviour. Its subsequent determination of institutional grants made no further reference to the formula—it simply took the previous year’s allocation as the basis for determining the next—but it continued to apply the relative funding formula when it distributed additional load, and if a university wished to alter its teaching profile, the rearrangement of places had to conform to the same formula—so that 2.7 places in an Arts course had to be sacrificed for an additional undergraduate place in medicine. It is little wonder that universities responded as they did. After the government made institutional competition for funds a dynamic of the Unified National System, internal competition was bound to follow, and the relative funding model accordingly became a management tool.29
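The 2.7-to-1 exchange follows directly from the cluster weights: a change of profile had to leave weighted load unchanged, so the ratio of the two weights fixed the rate at which places were traded. The fragment below is a minimal illustration of that constraint, using the weights cited earlier; the function and its names are mine rather than the Department’s.

```python
# Weighted-load-neutral exchange of funded places, as implied by the
# relative funding model; cluster weights are those cited in the text.

CLUSTER_WEIGHT = {"arts": 1.0, "medicine": 2.7}

def places_forgone(gained: str, forgone: str, extra_places: float) -> float:
    """Places in the forgone discipline that must be surrendered so that
    total weighted load is unchanged by the places gained."""
    return extra_places * CLUSTER_WEIGHT[gained] / CLUSTER_WEIGHT[forgone]

print(places_forgone("medicine", "arts", 1))  # 2.7 Arts places per medical place
```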

Performance-based funding—and three years of rewards for quality

A relative funding model based on student load assumed that all institutions used their funds with equal efficiency and effectiveness. Since Dawkins was convinced that they did not, the Green and White Papers foreshadowed the introduction of measures to assess performance and provide incentives for improvement. At the most basic level, the Commonwealth would fund institutions for outputs rather than inputs through comparison of unit costs for graduates in various courses. Beyond that, it would create measures of quality, using indicators such as student demand and employment outcomes, surveys of students and employer satisfaction, measures of staff performance and evidence of achieving equity goals. The development of such indicators required much better information than was available in 1987, especially as the statistics for universities and CAEs were not consistent, so a first task was the development of ‘a comprehensive, nationally consistent statistical base’. And if comparison of performance was to be valid, there had to be a more equal distribution of resources. Hence the load-based funding model was seen as a necessary first step towards funding institutions according to performance.30

Vin Massaro spotted the reference to performance indicators in the Green Paper and persuaded the AVCC to take the initiative. In May 1988 it established a working party chaired by Michael Taylor, Deputy Vice-Chancellor of the University of Sydney, and accepted the request of the Australian Committee of Directors and Principals in Advanced Education (ACDP) to participate. The working party’s final report in December 1988 identified more than thirty possible indicators, although it warned of their limitations and said a feasibility study was needed to test their appropriateness. ‘Something resembling a Cargo Cult seems to have grown up around the notion of performance indicators’, the working party observed; at best, they should be used as part of the ‘raw material’ that would inform expert assessment of institutional performance.31

Dawkins accepted the advice of the working party and in February 1989 commissioned Russell Linke, who was now working at the University of Wollongong, to define its performance indicators in operational terms and then conduct an empirical study of their feasibility. Linke had been a member of the AVCC–ACDP working party, and Taylor as well as other participants in that study were appointed to a ‘research group’ to work with him—this was an early example of the government recognising the limits of its operational understanding of higher education and turning to the sector for guidance. Linke and his colleagues found deficiencies in the scope and precision of the current data and advised on how it should be improved. They proposed that DEET collect data for a comprehensive list of twenty-seven indicators and publish it annually.

At the same time they sought to dispel the belief that academic activity was amenable to summary measurement. The report argued that indicators needed to be relevant and reliable, and to reflect as accurately as possible the purpose of higher education. Performance was highly context dependent, and there was a danger that a restricted range of system-wide indicators would divert universities from their distinctive missions. The measures they examined and recommended for use corresponded closely to those suggested in the earlier report. They encompassed institutional context (staff and student numbers, student demand, the ratio of academic to support staff, the level of recurrent income, etc.), ‘teaching and learning’ (progress and completion rates, completion times, graduate employment rates, surveys of graduate experience and the like), ‘research and professional services’ (the number and value of grants, publications and paid consultancy activity) and participation and social equity (covering both staff and students).32

The report recommended that each university identify the indicators appropriate to its particular mission and use them in a process of regular self-evaluation—but it was replete with warnings about making quantitative indicators a substitute for qualitative judgement. It followed that ‘indicators of this kind should not be inserted into formula funding processes to determine the overall operating grants to institutions, where their impact would almost certainly exceed their capacity for valid and reliable assessment’. If universities were to use indicators for self-evaluation, they needed to yield real benefit, and a strategy that gave them incentive to do so was ‘more likely to be successful than any punitive regime’. As with the charter of academic freedom, Linke sought to dispel the government’s assumption that an appropriate instrument lay readily to hand.33

Peter Baldwin made no further effort to incorporate performance measures into the determination of institutional grants. From time to time his successors revived the idea: in November 1993, for example, Kim Beazley suggested a list of twelve criteria that might be used (most of them replicating indicators listed by Linke).34 The most sustained consideration came in a 1996 report commissioned by the National Board for Employment, Education and Training (NBEET). By this time the practice had found its way into the public funding of higher education in a number of OECD countries. Some applied it to research, some to teaching. In Australia the only performance-based component of recurrent funding was the Research Quantum, which by this time had fallen to 5 per cent of the institutional grant.

Don Anderson, the principal author of the report, gave an incisive analysis of the consequences. Since universities obtained almost all their funds from student load, there was an inbuilt pressure for growth—more students were the only source of more money. There was no countervailing incentive to serve them, improve the quality of their experience or indeed to maintain educational standards. Of twenty-one universities that responded to his survey, most used research performance to distribute the Quantum but allocated the rest of their institutional grant on the basis of load: only four considered teaching performance and three equity.35

Anderson was by no means impressed by the performance indicators that determined the Research Quantum (which had expanded from grant income to take in publications and research higher degree completions); he thought the publication count difficult to verify and likely to diminish quality (and a later study suggested it did).36 More particularly, he drew attention to the perverse effect of seeking to alter behaviour through the use of financial incentives based on performance indicators—that participants would bend their efforts to meeting the specified measure of performance regardless of progress towards the underlying objective. As he put it, ‘Universities are complex organisational systems capable of distorting or deflecting to their advantage interventions designed to steer them to particular ends. Only indicators which are refractory to distortion and very close to desired objectives are likely to produce responses that are consistently within the desired range.’

Teaching was also much harder to measure than research since so much depended on the composition of the student body. How was the teaching performance of a prestigious metropolitan university with high entrance standards that took in students from privileged backgrounds to be compared with one catering to disadvantaged students in a region where employment opportunities were restricted? If the government wanted to introduce performance-based funding for teaching, Anderson suggested it first examine the effects of the Research Quantum, especially on teaching.37

When Peter Baldwin received Russell Linke’s report, his mind had already turned to another way of evaluating and rewarding performance. He was aware of allegations that the raising of colleges to university status and the much greater cohort of students entering higher education were compromising standards. He was also confronted by instances where fee-paying international students were short-changed. The worst abuses occurred among private providers of English-language courses, but adverse publicity was damaging Australia’s reputation in countries where it recruited university students, so some way of validating the quality of higher education was needed. Baldwin was also concerned by the deterioration of the staff–student ratio, which was exacerbated by over-enrolment in 1991. By linking his request for additional funds to a rigorous system of quality assurance, he was able to obtain the support of the Prime Minister, and the prospect of financial relief secured the agreement of a group of vice-chancellors with whom he canvassed his proposal. In a major policy statement at the end of 1991, Baldwin was able to announce the provision of an additional $70 million annually that would be distributed on the basis of universities’ quality arrangements.38

The HEC consulted widely after Baldwin asked it to advise how this quality assurance program should operate. On its advice an independent Committee for Quality Assurance in Higher Education (CQAHE) was established to conduct annual audits and determine allocation of the available funds. Brian Wilson of the University of Queensland was made the chair, with Ian Chubb, Deputy Vice-Chancellor of Monash and chair of the HEC, as his deputy; other members were drawn from higher education and industry. Universities submitted a portfolio ahead of a one-day visit by a panel (consisting of two CQAHE members, two more academic officers and a university administrator, who acted as secretary), which then prepared a report that formed the basis of the Committee’s funding recommendation.39

This was a method of evaluation quite different from the practices to which Australian universities were accustomed. They conducted periodic reviews of their departments and courses; CTEC had initiated a series of national reviews of major disciplines, and the AVCC established an Academic Standards Panel, which concentrated on the honours years of more specialised disciplines. Those were exercises in peer assessment of academic standards, whereas quality assurance was concerned with how a university conducted its affairs, the procedures it followed and the ways it upheld its standards. Australia’s quality assurance also differed from that of other countries. It did not simply assess an institution’s system of quality management but rewarded performance with additional funds. More than that, it ranked the performance against that of other institutions and published detailed reports on all of them.40

The first round in 1993 was complicated by a federal election. For the past two years the Labor Party had trailed in public opinion polls, and there was a general expectation of a change of government. Quality assurance went into abeyance during the election campaign, and after the re-election of the Keating ministry in March 1993, Kim Beazley assumed responsibility for higher education. Peter Baldwin had been keen to distribute the money widely, but Beazley wanted to be more selective and reward outcomes. The CQAHE was not able to issue guidelines until the middle of the year, which gave institutions just two months to prepare their portfolios. And by the time the committee was considering its verdicts, Simon Crean had replaced Beazley in a Cabinet reshuffle. He was more sympathetic to the newer universities and wanted them to share in the benefits. The goalposts kept shifting.41

In keeping with Baldwin’s original intention, the first quality round examined the full range of university activity: higher education teaching, research and community service. The panels found much wet paint and observed that quality assurance lagged some way behind industry standards. Seeking to reconcile the wishes of successive ministers, the CQAHE decided to rank all universities in six groups. Those in Group 1 received 3 per cent of their operating grant; Groups 2 to 5 were awarded 2.5, 2, 1.5 and 1 per cent, and only eight universities in Group 6 were denied additional funding.42 It came as little surprise that the six universities in Group 1 were large and well established, although Sydney and Monash were affronted by their relegation to Group 2 in the company of Wollongong. ‘Sydney: Dropped from the Honours Class’ was the headline of a feature article in the Sydney Morning Herald when the results were published, and it quoted a former member of the university saying ‘such a public shaming is unprecedented’. So affronted was the Senate that it subjected the Vice-Chancellor to additional humiliation by taking over preparation of the university’s submission in the following round.43

The other pre-1987 universities were placed in Groups 3 and 4, with the exception of James Cook, Murdoch, New England and Newcastle, which languished in Group 5. More worrying was the ranking of the former colleges: all were placed in Group 5 or Group 6, with the exception of RMIT (Group 3) and Queensland University of Technology and University of Technology, Sydney (Group 4). The post-1987 universities had feared such a result, especially after Beazley directed the CQAHE to consider ‘excellence of outcomes’, for they knew the evaluations would affect their reputations. The Vice-Chancellor of Edith Cowan believed its placement in Group 6 set the university back several years. When Mike Rann, the former South Australian minister for education, visited Singapore in 1994, he was appalled to discover that agents recruiting students to Australian universities were using the quality outcomes to disparage the lesser-ranked ones. A measure designed to safeguard the standing of the country’s universities was having the opposite effect.44

That was not the only unhelpful signal given by the first round of quality assessment. The inverse relationship between quality rankings and equity performance attracted particular attention, and critics alleged the universities that did best displayed a conspicuous lack of attention to advancing equity. Peter Karmel gave a typically incisive summary of the adverse effects of this well-intended scheme. First, its utility was limited by the academic community’s lack of confidence in the procedure and outcomes. Second, considerable resources were tied up in the work of the CQAHE and university responses. And third, there were the serious reputational consequences for those universities labelled as lower quality.45

The CQAHE took care in the following rounds to overcome this damaging impression. Its audit of teaching in 1994 emphasised the progress that had been made and reduced the ranking to just three groups; universities in Group 3 received half the dollars per EFTSU of those in Group 1. The 1995 audit went further. It used a matrix arrangement with four dimensions—research management, outcomes and improvement, along with community service—and bestowed praise on the widespread adoption of quality management. With prizes for everyone, there was now general support within the AVCC for further rounds, but Simon Crean was not interested and this unusual hybrid of quality assurance and performance evaluation came to an end.46

By then it had probably achieved its objectives. Universities spent much time in documenting their processes and preparing their portfolios. The quality rounds were a catalyst for the adoption of strategic planning, with systematic monitoring and evaluation of performance. CQAHE’s reports habituated academics to the language and values of quality assurance, so that by 1995 Brian Wilson was able to claim that ‘the words “client”, “customer” and “stakeholder” are no longer shocking’. The efficacy of the new procedures is less clear. Vin Massaro, who observed them as the registrar of Flinders, said that it was difficult for a panel to discern the difference between the ‘gloss and the reality’ in a single-day visit, and that universities spent as much time devising structure and procedures to impress panel members as they did in bringing about genuine improvement. Sample groups of staff, students and external stakeholders, who were coached assiduously in their testimony to the panel, were likely to feel the same. Most universities prepared their submission as a management rather than collegial exercise—some even employed international advisers—and in her study of the efficacy of quality assurance Ingrid Moses was adamant that academics did not respond to such concepts as client, customer and stakeholder, let alone inputs and outputs.47

Born of good intentions, the Australian experiment with quality assurance had unintended consequences. It was meant to provide financial relief, but the distribution of monetary rewards turned it into a competitive exercise, while the inclusion of performance evaluation led to rankings. A guiding principle of quality assurance is that an organisation’s arrangements should be fit for purpose and serve its particular objectives; those who designed its application to universities affirmed the need to maintain diversity. But the procedures adopted by the CQAHE, especially the generic observations made in its system-wide annual reports, had the opposite effect. Since all institutions aspired to the financial and reputational benefits of a positive assessment, they sought to conform to the Committee’s commendations of best practice. As all were assessed for performance, the stronger universities with their inherited advantages were bound to do better. The result was less diversity and more differentiation.

International fees

In November 1988, when John Dawkins opened a conference in Canberra on the export of education, he pointed out that the number of overseas students studying in Australia had doubled over the previous two years. He said there were 22 000 enrolled in colleges and universities, ‘many of them on a full fee basis’, and he expected the growth to continue. ‘While I’m not suggesting that we will replace riding on a sheep’s back to balancing on a mortar board, I think that our education sector can play a very important part in our overall trade performance.’48

The changes Dawkins introduced in 1985 as Minister for Trade, which allowed institutions to impose fees for overseas students, are seen as marking a transition in Australian provision ‘from aid to trade’ that would make this country one of the world’s largest exporters in the burgeoning trade in higher education.49 The 9000 overseas students in 1985 made up 2.7 per cent of total enrolments, a smaller proportion than in Britain, France or Germany. Little more than two decades later, there were 300 000 of them enrolled in Australian universities—220 000 studying here and 80 000 on overseas campuses—and they constituted 20 per cent of all enrolments, a higher proportion than in any other OECD country. Education was lauded as the country’s third largest export industry (wool no longer even in the top ten), although that claim rested on unreliable estimates of student spending and was inflated by including their earnings while here. Nevertheless, the effects were apparent in the towering apartment blocks that arose alongside city campuses and the ancillary businesses that sprang up to service these students. Even in 1996, the fees they paid were crucial to the sector’s financial viability.50

The changes that came into effect in 1986 allowed universities and colleges to recruit fee-paying overseas students, but the early response was weak. Established universities, with their prior involvement in the Colombo Plan, continued to draw on government-funded or assisted students from South-East Asia—for at this stage trade was no more than an addition to aid—while more enterprising CAEs, TAFE institutes and private providers moved first. There were just 3595 full-fee-paying overseas students in higher education in 1988 against 14 693 who were sponsored or paid part-cost fees. The largest market arose outside higher education, in business colleges and English-language courses.51

Additional changes announced at the end of 1988 required universities and colleges to charge fees—and set them at a level that met the full cost, including overheads. Further intakes of subsidised students ceased from the beginning of 1990, leaving the government’s Australian International Development Assistance Bureau to conduct the aid program. The commercial provision of educational services was lightly regulated: students could enter Australia as long as they could show they had been accepted into an accredited institution and had paid the required fee. By design, the government allowed providers to determine entry standards. Having established a network of Australian Education Centres to promote the trade, it put this service out to tender at the end of 1989 and transferred the centres to the International Development Program (IDP). As its name suggests, the IDP had its origins in the AVCC’s promotion of international links and was then turned into a company with a more commercial orientation and a board made up of vice-chancellors, college principals, and representatives of the Commonwealth Departments of Education and Foreign Affairs and Trade.52

Because overseas students provided additional revenue at a time of acute financial strain, universities now moved quickly into the export market. They had the advantage of surging demand in the countries to the north of Australia, for they were more accessible geographically than the English-language higher education systems of Britain and the United States and, at least initially, they were cheaper. They were also enterprising, using recruitment agents, forging links with feeder institutions and creating their own foundation courses to prepare newcomers for university studies. Fee-paying enrolments, drawn principally from Malaysia, Singapore and Hong Kong, rose dramatically, from 3595 in 1988 to 8465 in 1989 and 16 805 in 1990.

The dangers of unrestrained commercialism became evident in private institutions that offered English Language Intensive Courses for Overseas Students (ELICOS), which grew fastest and had 25 000 fee-paying students by 1990. Since the government had stopped testing the bona fides of applicants for student visas and allowed entrants to work in paid employment for up to twenty hours a week, these courses provided a backdoor entry for cheap labour rackets, with some providers colluding in the manipulation of attendance records. The prospect of a wave of refugees from China following the Tiananmen massacre in June 1989 brought a crackdown on entry standards, course attendance and overstays, with a consequent drop in revenue that bankrupted many ELICOS providers and commercial business colleges. This was a salutary lesson in the dangers of a largely unregulated market, and the government responded in 1991 with mandatory registration of providers and protection of student fees. Henceforth all institutions were required to conform to the requirements of the Commonwealth Register of Institutions and Courses for Overseas Students (CRICOS).53

Universities avoided such scandals but were by no means immune from criticism. Their use of private agents who were paid a commission for every student they recruited tarnished the reputation of the sector; some of the IDP’s Australian Education Centres likewise remunerated local staff on a commission basis. Language proficiency requirements varied, as did academic entry standards. Adherence to the code of ethical practice the AVCC developed for recruitment and support of overseas students was uneven, and there were already claims from some academics that they were expected to ensure that their fee-paying students did not fail.54

Kim Beazley’s 1992 statement, International Education Through the 1990s, signalled a ‘shift away from commercialism to a new professionalism’. Just as international students (that term now replaced overseas students) enriched the experience of Australian students, so international education was to assist the internationalisation of Australia as an outward-looking economy and society, and support the government’s policy of closer engagement with the Asia-Pacific region. Beazley called for more exchanges of staff and students, more research links and partnerships with universities in the region.55

Universities did pursue these strategies. Most adopted plans to internationalise the curriculum and provide cross-cultural training for staff. There was a proliferation of agreements between Australian and overseas universities, and an increase in research partnerships. But the movement continued to be largely in one direction. By 1996 there were 52 899 fee-paying international students in Australian universities, and they made up 8.4 per cent of all enrolments, whereas very few Australian students pursued an international education. Those who did study overseas typically went as exchange students for a year of their course and mainly to Europe. The emphasis remained on recruitment, with a marked expansion of non-academic staff engaged in promotion, marketing and servicing international students.56

The overwhelming majority still came from South-East Asia. The numbers from China and India remained negligible in 1996, and the largest non-Asian source country, the United States, sent just a thousand degree students—although there was a thriving trade in single-semester ‘study abroad’ students from that country. Of international students pursuing Australian qualifications, half were enrolled in business studies, a quarter in science or engineering and a tenth in the humanities and social sciences. More than 90 per cent undertook coursework since those interested in research higher degrees were likely to look first to more prestigious universities elsewhere.57

As the international market developed, it became apparent that the factors determining student choice were (in descending order) country, course, institution and city, an ordering that indicated a hierarchy of reputation and prestige. Surveys of international students who came to Australia in the early 1990s found that considerations of cost and quality were balanced against proximity, a congenial and safe environment, and the company of friends and relatives—word of mouth was a significant influence. Australia had the advantage that all members of the Unified National System were large and comprehensive, and offered courses similar to those of the United States and Britain at a cheaper price. It positioned itself in the market as a high-volume, medium-quality provider—not unlike the wine industry—and achieved rapid growth, albeit on relatively narrow lines.58

Institutions with substantial numbers of international students before 1986, such as the University of New South Wales, Monash, RMIT and Curtin, made the early running. They were quick to appoint a Deputy Vice-Chancellor (International), establish an international office and develop offshore programs with partner institutions; by 1996 they had moved into international campuses. Some new universities also built up numbers, but many older ones held back—David Penington at Melbourne was not alone in worrying that an indiscriminate pursuit of the export market would erode quality. In 1996, 18.9 per cent of RMIT’s enrolment was international, followed by Curtin (18.5 per cent), the University of Southern Queensland (16.4 per cent), Wollongong (15.4 per cent) and the University of New South Wales (14.5 per cent). International student fees contributed 16.1 per cent of RMIT’s operating revenues, 16.5 per cent of Curtin’s. Adelaide, Melbourne, Sydney and the Universities of Queensland and Western Australia were all under the average international load of 8.4 per cent and revenue contribution of 6.6 per cent.59

Domestic fees

Dawkins announced in 1987 that universities could charge fees for domestic students undertaking postgraduate courses to improve their professional qualifications. Some developed new courses while others converted existing ones, but this new market was poorly understood and many set fees well below cost. Initially institutions could include fee-payers within their funded load, so that any charge greater than HECS brought an additional return; but because these students were in employment and able to claim the cost of tuition as a tax deduction, their effective cost was less than that borne by students who paid only HECS. Accordingly, DEET imposed a minimum charge of 185 per cent of HECS from 1991 and took 85 per cent of the HECS amount (equivalent to an upfront HECS payment). The minimum charge was scrapped in the following year and institutions were allowed to keep the full amount for fee-paying students outside their funded load, but as a consequence these enrolments no longer attracted any Commonwealth support. The restriction of fees to the improvement of professional qualifications was gradually relaxed until by 1994 universities could charge for any postgraduate course except teaching and nursing.60
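The effect of the 1991 rule can be seen with a little arithmetic. On the reading that both percentages refer to the annual HECS charge, with the 85 per cent corresponding to the discounted upfront payment, an institution charging the minimum fee retained exactly the full HECS amount. The figures below are illustrative only; HECS began at $1800 a year in 1989 and was indexed thereafter.

```python
# Sketch of the 1991 minimum-fee rule for fee-paying domestic postgraduates,
# on the reading that both percentages are of the annual HECS charge.
# The HECS figure is indicative only, not the 1991 indexed amount.

hecs_charge = 1800                       # dollars per year, illustrative
minimum_fee = 1.85 * hecs_charge         # the lowest fee DEET would allow
clawback = 0.85 * hecs_charge            # retained by DEET: the upfront-HECS equivalent

kept_by_university = minimum_fee - clawback
print(f"Minimum fee:        ${minimum_fee:,.0f}")         # $3,330
print(f"Retained by DEET:   ${clawback:,.0f}")            # $1,530
print(f"Kept by university: ${kept_by_university:,.0f}")  # $1,800, the full HECS charge
```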

Many took the opportunity to reclassify existing courses. At the ANU, for example, there was a graduate diploma in legal practice that allowed students completing their LLB to qualify for professional practice. In 1994 the University imposed a fee of $5000 for this eight-month course and gave notice that it would be made fully self-funding. As part of a National Day of Action called by the National Union of Students in its campaign against fees, protesters occupied the Chancellery for eight days and secured concessions. One reason for their anger was that this charge was imposed after the students had embarked on their degrees in the expectation they could proceed to employment through HECS arrangements; now, unexpectedly, they were required to find a substantial sum to complete their training. HECS allowed for deferred payment, whereas coursework fees had to be paid in advance.61

The government’s frequent changes to the guidelines created further confusion and hindered forward planning. The fee-paying domestic market grew to 13 507 EFTSU by 1996, 20 per cent of all postgraduate coursework. By then fees had climbed, and leading business schools were able to charge up to $45 000 for an executive MBA program; but that was exceptional, and in other courses sizeable cohorts were needed to make a program successful at a price the market would bear. Two-thirds of all enrolments were in business studies and law. Postgraduate coursework made an appreciable contribution to the budgets of universities such as Swinburne (5.7 per cent of operating revenue), Macquarie (5.6 per cent) and the University of Technology, Sydney (3.5 per cent), but national income from such fee-paying courses amounted to just $90 million in 1996, 1.1 per cent of operating revenue. The sum earned from continuing education (whereby members of the public undertook undergraduate subjects on a non-award basis) was nearly as great.62

So while universities became more adept at identifying and exploiting openings in the domestic market, postgraduate fee income afforded little relief from their financial constraints. It was therefore hardly surprising that vice-chancellors were attracted to the possibility of charging undergraduate fees. The idea was first canvassed in a policy document the AVCC issued for the federal election of March 1990, although some members expressed strong objections. It was pursued more vigorously after the over-enrolment of the following year. If the government was unable to provide places for all eligible students, the AVCC argued, it was ‘absurd’ that they should be denied the opportunity to enrol on the same fee-paying basis as overseas students.63

The Industry Commission took up the same argument. Its 1991 report, The Export of Educational Services, compared the ‘dynamic, market-oriented export sector’ with a ‘highly regulated, largely government-funded domestic sector’ that was inelastic and lacked effective price signals. Since domestic places were rationed, the expectation that overseas students meet entry standards at least as high as those for local ones was limiting the size of the export industry, while the ‘flawed’ arrangements for funding domestic students shut out many qualified applicants. The Commission canvassed various ways of opening up the domestic market to fee-payers and sought a further reference to examine them. The government ruled out any change on the grounds that it would disadvantage those unable to buy a place and could even displace HECS students from courses in high demand.64

There were, however, modifications to the HECS scheme. The rates of repayment were increased in 1990 from 1, 2 and 3 per cent of income at the three repayment thresholds to 2, 3 and 4 per cent, and in 1993 the thresholds were reduced (by linking them to average weekly earnings instead of the consumer price index) and the rates increased again to 3, 4 and 5 per cent. These changes were justified by the need to accelerate repayment of the student contribution in order to support growth. The HECS charge was indexed for inflation, although, because of the fall in per capita funding through the operating grant, it rose from the original estimate of 20 per cent of the cost of tuition to 29 per cent by 1993. Beyond that the government would not go, ruling out any increase in the HECS charge or the addition of an interest charge to HECS debts. As Peter Baldwin explained in rejecting the advice of the Industry Commission, the Labor government believed ‘there is a limit to the appropriate role of market mechanisms in higher education before its public good is compromised’.65
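
The post-1993 repayment rule can be set out as a simple schedule; this is a sketch only, with the dollar thresholds, which were adjusted each year, left symbolic as $T_1 < T_2 < T_3$:

\[
R(y) =
\begin{cases}
0 & \text{if } y < T_1 \\
0.03\,y & \text{if } T_1 \le y < T_2 \\
0.04\,y & \text{if } T_2 \le y < T_3 \\
0.05\,y & \text{if } y \ge T_3
\end{cases}
\]

where $y$ is the graduate’s annual income and $R(y)$ the amount collected through the tax system in that year, never exceeding the outstanding HECS debt.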

A higher education market?

When Dawkins took charge of higher education, it relied overwhelmingly on Commonwealth funding. By 1996 Commonwealth grants constituted only 56.7 per cent of operating revenue. Students contributed 11.6 per cent through HECS, and another 13.4 per cent came from overseas students, fee-paying domestic postgraduates, continuing education and other fees and charges. The balance was made up of additional research grants and contracts, state government payments and donations, bequests and investment income. There was clearly a broadening of the funding base, albeit one that left the Commonwealth, through its budgetary allocations and its collection of HECS, in control of universities’ fortunes.66

Staff costs made up the largest item of expenditure (34.4 per cent for academic staff and 29.1 per cent for non-academic staff), and the differences within the Unified National System were indicative of the varying fortunes of its members. Salaries and related costs amounted to 70 per cent of expenses at Charles Sturt and 63 per cent at Sydney; Melbourne spent 60 per cent on staff, La Trobe 70 per cent; Adelaide 63 per cent and the University of South Australia 73 per cent. All universities suffered a deterioration in their student–staff ratio, but it was greatest in those that devoted the largest share of their funds to staff salaries: 20 at Charles Sturt against 13 at Sydney, 18 at La Trobe against 15 at Melbourne, 18 at the University of South Australia against 14 at Adelaide. The old universities had more of the staff-intensive courses and complained that they were underfunded. The new universities had smaller libraries and spent less on them. Such were the consequences of a Unified National System.67

It was becoming common to talk of the financial arrangements emerging over this period as establishing a higher education market, one that opened up universities to competition, gave them incentives to become more efficient and required them to respond to consumer demand. Both then and now, informed commentators observed that the Unified National System fell well short of a full market. A British analyst has suggested that a market model of higher education has six characteristics.68

First, it requires that institutions have a high level of autonomy to determine their programs, awards, prices, admissions, student numbers and staff arrangements. Australian universities had autonomous, self-governing status and some control over their programs, awards and admission policies, but no control over either the number of their students or the price the overwhelming majority of them paid, and their staffing policies were constrained by an industrial award.

Second, a competitive market requires low barriers to entry and many competing suppliers, including private institutions that extend student choice and promote product and process innovation. Higher education had very high barriers to entry, as the unchanging size of the Unified National System indicated. There were hardly any private degree-granting institutions: at first only Avondale College, a tiny religious institution north of Sydney that taught theology, education, business and nursing, and Marcus Oldham Farm Management College near Melbourne. Bond University, founded in 1987, commenced teaching in 1989; the Fremantle-based Notre Dame was established in the same year; and a short-lived William E. Simon University opened a business school in 1991—but capital costs were high and few students unable to enter the public system could afford the fees of these private ventures.

Third, in a market fees cover all or a significant proportion of costs. But the proportion of fee-paying students in the Unified National System remained small, and HECS payments bore no relationship to cost: the indexed charge of $1800 per annum covered only a small proportion of the cost of teaching medicine but a large part of the cost of teaching law. Price signals to consumers were therefore, and by design, very weak. On the other hand, the prices the government, as purchaser of educational services from universities, set through the relative funding model created rigid and uniform price signals that inhibited product differentiation.

Fourth, a market assumes that students make a rational choice based on information about price, quality and availability, and that such information plays an important part in their choice of program and provider. We shall see in chapter 8 that substantial efforts were made to create information about product quality, although the failed attempt to establish performance indicators demonstrated the difficulty of judging a product as complex as educational services. Higher education was aptly described as a ‘post-experience good’, the outcomes of which might not become apparent for many years and were difficult to attribute to any particular educational experience. The more powerful considerations in consumer choice were prestige and reputation; as Simon Marginson observed, at the lower end of the Unified National System higher education was becoming an economic market but at the upper end it was a ‘positional good’.69

Fifth, market regulation facilitates competition while providing basic consumer protection through the provision of information and attention to consumer complaints. This was an apt description of the approach taken to overseas education until the problems of 1990 and 1991 brought closer supervision and protection of standards. As the Industry Commission observed, domestic higher education was far more tightly regulated, constraining competition that might threaten standards.

And sixth, quality is ultimately determined in a market by what consumers will pay for, so that entry scores, employment outcomes and measures of esteem are the principal determinants of student choice. Such guides to consumers gained greater potency in the Unified National System, but the three rounds of quality assurance from 1993 to 1995 operated on a quite different basis. They were conducted by academic officers who concentrated on the internal processes of universities and promoted a strengthened framework of academic self-management.

Those who appraised the advent of the Unified National System were sceptical of Dawkins’ claim that he stood back to allow the market to operate. ‘The market for higher education in Australia is basically a planned market’, insisted two economists who wanted to go further. Their assumption that the state was stifling the market persisted. Max Corden, an eminent economist who worked in the United States during the 1990s, was baffled by what he found on his return. He had become accustomed to the freedom and plenitude of that country’s leading research universities and expected that Australia would have learned from the collapse of the Soviet command economy, only to discover the same misconceptions governing the Unified National System. He described this stultifying regime as ‘Moscow on the Molonglo’. An educational economist less enamoured of deregulation noted how the Minister’s claim that institutions were free to manage their own resources was belied by the way his design of the Unified National System shaped the outcomes: ‘Dawkins is turning feigned innocence into an active art form’. And a perceptive American scholar suggested: ‘The Commonwealth wants universities to be more responsive to market forces. But rather than unleashing competitive market forces, it is trying to orchestrate the results of a market process.’70

Among the outcomes the Commonwealth sought to secure was growth, directed particularly to fields it judged essential to a more dynamic economy. It achieved the increase in graduates, and by specifying the disciplines of additional funded places it lifted enrolments in those fields. It wanted a mixed funding system to replace sole reliance on government funding while maintaining access; HECS did that. It sought to create an export sector in higher education and succeeded. It promoted a more corporate style of institutional management with greater attention to quality assurance, and universities responded. But it was less successful in its overriding goal of stimulating competition between universities, for they competed for only a small part of their income—in 1996 tuition fees, payments from industry and research funds amounted to only 20 cents in every dollar of revenue. This was not a market in the sense that Adam Smith understood that term; it was more aptly described as ‘a government-steered quasi market’, one steered by a visible hand. If this seemed paradoxical to some observers, it was a paradox that would endure.71