Since the Second World War the economics profession has grown enormously. There have been rises both in the number of economics graduates and in the number obtaining postgraduate degrees. In part this has reflected a rise in the number of people entering higher education, and in part a general expansion in the social sciences. Demand for the rising supply of graduates, at both first-degree and Ph.D. levels, has come not just from academia but increasingly from business, government and international organizations. Economists have been employed as technical experts on a scale unknown before the war. With this have come changes in the way the subject has been conceived.
One reason why the Second World War was, in many countries, a watershed in the growth of the profession was that this was when economists first became firmly established in government. In the United States, in 1940 Lauchlin Currie became economic adviser to the President – the first economist to be employed full-time at such a high level. The role of economists at the heart of the US government was institutionalized with the establishment, in 1946, of the Council of Economic Advisers. The Council's exact scope and effectiveness varied according to the economic climate and the attitudes of its chairman, but its existence indicated that economists had acquired a new role. The list of economists serving on the Council or associated with it includes some of those whose academic work shaped the post-war discipline: Robert Solow (1924–), James Tobin (1918–) and Joseph Stiglitz (1943–). Similar developments occurred in Britain with the establishment, in 1941, of the Economic Section in the War Cabinet Secretariat. After the war, however, the Economic Section and its successor, the Government Economic Service, remained small (around twenty members) until 1964, but by 1970 the numbers employed had risen tenfold. In both countries there was also a large increase in the number of statisticians as governments became increasingly involved in the production of national accounts and economic statistics.
Economists were also employed in international organizations. There was a precedent for this in that the League of Nations and the International Labour Organization (ILO) had both employed economists. The League of Nations had sponsored economic research by Haberler and Tinbergen on the business cycle. After 1945, however, the number of such organizations increased dramatically, and with it the employment of economists. In addition to the ILO (established before the war) there were the United Nations, which had regional commissions, the International Monetary Fund (IMF), the World Bank (originally the International Bank for Reconstruction and Development) and the General Agreement on Tariffs and Trade (GATT). These were later followed by the Organization for Economic Cooperation and Development (OECD), originally the Organization for European Economic Cooperation (OEEC), and the United Nations Conference on Trade and Development (UNCTAD).
These organizations were largely concerned with practical policy questions, and economists were not always influential. Nonetheless, even though the organizations' primary goals were practical rather than scholarly, economists based in them undertook important economic research, including theoretical research, and could make an impact on economic thinking. One example was Jacques Polak (1914–), who at the IMF in the 1950s did influential work on exchange rates and the role of money in determining a country's balance of payments. Another was Raúl Prebisch (see pp. 302–3), who at the UN's Economic Commission for Latin America developed a theory about the relationships between industrial and developing countries.
In its early years the World Bank was concerned more with establishing its credibility as a sound banking institution than with applying economic analysis, with the result that, as in most other international organizations, economists were marginalized. This situation did not change until the 1960s, under Robert McNamara (1916–), when between 1965 and 1969 the number of economists employed rose from 20 to 120. McNamara also encouraged the idea that, because the World Bank's loans would always be small relative to any country's total investment, the dissemination of ideas was important. As a result the importance attached to economic research increased, and by the early 1990s the World Bank employed around 800 economists, many doing research comparable with that done in universities. Nowhere else was there such a large concentration of economists. Given that these were all working on issues related in some way to development, they had a noticeable influence.
These changes in the economics profession were closely linked to the spread of Keynesian ideas. The relationship is, however, not a simple one. Keynes's General Theory provided an enormous stimulus to the idea that governments could, and should, take responsibility for controlling the level of economic activity. It was also of great importance to the development of national-income statistics. Interest-rate policy and changes in government spending and taxation could be used to keep unemployment low. In the 1940s the United States and Britain both introduced clear commitments to full employment. However, it is important not to exaggerate the influence of Keynesian ideas on these developments. Roosevelt's New Deal, which began four years before Keynes's book was published, owed much to Rexford Tugwell (1891–1979), an advocate of economic planning. The concept of ‘American planning' was widely discussed in policy-making circles during the 1930s as something different from the socialist planning found in the Soviet Union or Nazi Germany. Equally important, in both the United States and Britain the Second World War showed that economic planning could be used to achieve national goals. Economists played an important role in the war effort, and arguably made a significant contribution to the Allied victory. In addition, a significant number of economists (or people who subsequently entered economics) spent the war working as statisticians. Although they worked on technical problems, such as quality control in munitions production, making the best use of limited shipping resources, or even the design of gunsights, many of the techniques they developed and the attitudes they acquired influenced the discipline when the war was over.
A further factor was that, although Keynesian economics swept through the universities, governments were more resistant. Britain introduced a Budget organized along Keynesian lines in 1941, and the concept of the inflationary gap – described by Keynes in How to Pay for the War (1940) – was used to calculate how much could be spent without causing inflation, and hence how much needed to be taken out of the economy by taxation or compulsory saving in order to avoid inflation. However, it is arguable that Keynesian ideas were not fully accepted in the Treasury until 1947. In the United States it was only in the 1960s, under the Kennedy administration, that Keynesian full-employment policies were systematically applied. In much of continental Europe (notably France and Germany) Keynesian ideas never dominated the policy agenda.
Macroeconomic planning of the type that governments tried to use during the post-war decades was made possible by the revolution that took place in national accounting and the provision of statistics during the inter-war period and the Second World War (see pp. 240–45). The use that could be made of national-income analysis was clearly demonstrated by wartime experiences in Britain and the United States. In Britain, the estimates of national income produced by Meade and Stone were used to calculate the inflationary gap. In the United States, Kuznets and Nathan used national income to show that Roosevelt's ‘Victory Program’, in which he promised vast increases in military production in 1942–3, was achievable. (It was achieved.) After the attack on Pearl Harbor, when the military dramatically increased its demand for hardware, Kuznets and Nathan (at the War Production Board) continued to apply these methods. This time, however, goals had to be revised down, not up. Gilbert, in charge of national-income accounts, focused on providing rapidly available information on the state of the war economy.
The work of Kuznets and Nathan has been described as ‘one of the great technical triumphs in the history of the economics discipline’.1 They set targets that turned out to be feasible at a time when military procurement rose from 4 per cent to 48 per cent of US national income in four years. Not only was this an invaluable contribution to the war effort, it also provided a clear indication of what could be achieved using national accounting as a tool for economic planning. It amounted to turning military procurement into a science: if too little were demanded, war would be prolonged unnecessarily; if too much were demanded, costs would rise without any more being produced.
Keynesian economics and national-income accounting came together in econometric models. During the 1960s, as electronic (mainframe) computers became more widely available, these models grew in both size and sophistication compared with the earlier models of Tinbergen (see p. 249) and Klein (see p. 251). For example, in 1964 Klein produced a model of the United States based on quarterly data, comprising thirty-seven equations and estimated using more advanced statistical techniques than had been employed in his earlier work. The larger size of the model was the result of a much more detailed modelling of variables such as consumption (broken down into durable goods, non-durables and services) and investment (where Klein took account of inventories and new orders). The key development, however, was the Brookings model, first published in 1965. This started with around 200 variables, which later increased to over 400, and provided a much more detailed analysis of the economy than smaller models could provide. For example, it had separate equations for automobile sales and for spending on food and drink. Housing was distinguished from nonresidential construction, and several industries were analysed. Equally important, it was the result of a collaborative research effort, involving economists from different universities and other institutions. This was followed by a series of other models on a similar scale during the 1960s and 1970s. Unlike the earlier models, several of the new models were produced by commercial organizations. As this happened, the emphasis shifted away from exploring new techniques and developing new concepts towards keeping the models up to date so that they could provide business with the forecasts that were being demanded. The hope was that, by using an increasingly detailed model, estimated by ever more sophisticated statistical techniques, more accurate forecasts would be produced. 
Though there were national differences, similar developments occurred in other countries.
Though there were exceptions, these models were generally Keynesian in their broad structure: aggregate demand for goods and services was modelled in great detail, being broken down into various categories following the national accounts. These accounts adopted the Keynesian categories of consumption, investment, government spending on goods and services, exports, and imports. Each of these was then subdivided into a more detailed classification. This core, in which national income was determined by the level of aggregate demand, was supplemented by other equations to determine variables such as productive capacity, prices, wages and interest rates.
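The Keynesian core of such models can be summarized by the national-accounts identity (a standard textbook formulation, not the equations of any particular model):

```latex
% National income determined by aggregate demand:
Y = C + I + G + (X - M)
% where each component is modelled behaviourally, e.g. a consumption
% function of the form
C = a + bY, \qquad 0 < b < 1
```

In the large econometric models each component on the right-hand side was in turn broken down into the detailed sub-categories described above.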
A particularly important equation was the Phillips curve. Its author, A. W. Phillips (1914–75), was an engineer who turned to economics at LSE and was responsible for the ‘Phillips machine’, in which coloured water was pumped through a system of transparent tanks in such a way that flows of water represented flows of income in the Keynesian system. This was ‘hydraulic Keynesianism' in the most literal sense of the term: the metaphor of a circular flow of income was translated into real flows of water. Phillips's curve, published in 1958, showed a negative relationship between wage inflation and the unemployment rate – high unemployment was associated with low wage inflation, and vice versa. Because unemployment could not fall below zero, however high inflation might be, and because wages fell by little even when unemployment rose to 20 per cent during the Great Depression, the result was a curve rather than a straight line.
Phillips's curve was an empirical relationship that he found in British data. It was, however, soon given a theoretical interpretation by Lipsey, who in 1960 also provided an interpretation of the curve's distinctive shape. His explanation of the curve was based on the idea that if supply of any good (including labour) exceeds demand the price will fall, and if demand is greater than supply the price will rise. This means that there will be a negative relationship between wage inflation and the gap between supply and demand for labour. Unemployment, when adjusted for so-called ‘frictional unemployment' (unemployment that arises because workers are different from each other and have to be matched with the right job before they can be employed), was a measure of the difference between demand for and supply of labour.
In the same year, Samuelson and Solow found a similar relationship for the United States. They also argued that the Phillips curve could provide a framework within which to think about economic policy. Governments faced a trade-off between inflation and unemployment, but could use monetary policy and changes in government spending and taxation to achieve the point on the curve that they preferred. Some governments might choose to have low unemployment at the cost of a high inflation rate, whereas others might prefer lower inflation at the cost of higher unemployment.
Its relevance for policy-making was one reason why economists took up the idea of the Phillips curve with such enthusiasm. There were, however, two further reasons. The first was that it provided a satisfactory way to ‘close' the macroeconomic models that were in use at the time. The IS–LM model (see pp. 233–4) had become the standard model of how the levels of output and employment were determined, but it did not explain the price level. The Phillips curve provided the missing link, completing the model. In so far as they were constructed along Keynesian lines, the same was true of the large econometric forecasting models: when augmented with a Phillips curve, they could be used to forecast prices – something that clearly needed to be forecast. The second reason was that during the 1960s, as more and more economists had access to mainframe computers, estimating the Phillips curve provided an ideal agenda for econometric research. It was soon found that the original formulation of the Phillips curve did not fit the data very well, and numerous attempts were made to improve it by adding new variables and modifying the form of the equation.
The 1960s saw the high tide of Keynesian economics. In the United States, under President Kennedy, Keynesian policies were used to move the economy towards full employment by the end of the decade. However, this coincided with the escalation of the war in Vietnam and an enormous rise in military expenditure. In the rest of the world, too, the late 1960s and early 1970s were a period of rapid expansion, and inflation began to rise rapidly. The collapse of the Bretton Woods system, dating from 1944, which had fixed exchange rates for the previous quarter-century, meant that countries could expand without worrying about the effect it would have on their balance of payments. An important feature of this boom was a rise in commodity prices. In 1973 the Organization of Petroleum Exporting Countries contributed to this rise by successfully reaching an agreement to cut supplies of crude oil in order to raise its price. The Yom Kippur War, between Israel and the Arab states, disrupted oil supplies. The outcome was that oil prices rose by 66 per cent in October 1973 before doubling again in January 1974, and there was an acute shortage of oil. Furthermore, because oil revenues rose more rapidly than oil exporters could spend them, there was a sudden shortage of demand in oil-importing countries, which found themselves with unprecedented balance-of-payments deficits. The world was plunged into recession.
The novel feature of this depression was that inflation and unemployment rose simultaneously. The Phillips curve ‘broke down' (the negative relationship between inflation and unemployment disappeared), and Keynesian theory no longer provided an adequate framework on which policy-making could be based. Rising unemployment implied that spending should be increased, but high inflation required that it be reduced. A further consequence was that, as the decade went on, it became clear that the large-scale econometric models that were used for forecasting were not performing well. Something had gone seriously wrong with the way in which economists were analysing current problems. It was under these circumstances that the profession took a more serious interest in monetarism as expounded by Milton Friedman (1912–).
Starting with a widely read article in 1956, Friedman had been trying to revive interest in the quantity theory of money. This theory argued that the main factor explaining inflation was increases in the quantity of money (the stock of currency in circulation plus the stock of bank deposits). This ran counter to the Keynesian consensus of the time, which emphasized fiscal rather than monetary policy. Friedman sought to prove his case through extensive empirical work on the relationship between money, prices and interest rates, culminating in A Monetary History of the United States, 1867–1960 (1963), written jointly with Anna J. Schwartz (1915–). He argued, in particular, that the money supply did not respond passively to other developments in the economy, and that changes in the money supply exerted a powerful effect on the economy. In the short run a rise in the money supply would raise output, but eventually output would return to its original level and the only effect would be on the price level. However, it was not possible to use this relationship as the basis for controlling the business cycle, because the effects of monetary changes were felt only after a long and unpredictable lag. If a central bank were to raise the money supply, the effects might be felt a year, or perhaps two years, later. The conclusion Friedman drew was that the aim of policy should be to prevent money from being a source of disturbance, and the way to do this was to ensure that the stock of money grew at a constant, known rate.
Against the background of 1968–73, when many governments had allowed the money supply to increase, Friedman's analysis of inflation was persuasive. Rapid monetary expansion around 1971 had been followed, about two years later, by an equally rapid rise in inflation. (Inflation in 1973 was clearly linked to the oil price rises of that year, though monetarists could argue that, were it not for monetary expansion, prices would not have risen so much.) During the 1970s, therefore, government after government broke with Keynesianism and implemented targets for the growth of the money supply. In some countries, such as Britain, this process was assisted by pressure from the IMF, which had for some years been working on the links between money and the balance of payments.
Overturning the Keynesian consensus, however, required much more than this. Three developments were particularly important: Friedman's expectations-augmented Phillips curve; the failure of Keynesian forecasting models; and rational expectations. The first of these – Friedman's alternative to the conventional theory of inflation – was proposed in his presidential address to the American Economic Association in 1967. His argument was that the conventional Phillips curve was incorrectly specified. What mattered to people negotiating over wages was not the money wage rate but the real wage rate – the wage adjusted for the purchasing power of money. This meant that, when bargaining over wages, people would take account of expected inflation. If people expected inflation to be 5 per cent, they would require wages to rise by 5 per cent more than if they expected the inflation rate to be zero. The result was that, if the inflation rate increased, the Phillips curve would shift upward by the same amount. This implied that there would be no stable trade-off between inflation and unemployment.
Friedman claimed that there was a single unemployment rate – the natural rate of unemployment – that was consistent with a constant inflation rate. He argued that if a government tried to peg unemployment at a level other than the natural rate, the inflation rate would rise or fall indefinitely. Low unemployment could not be bought at the price of a high inflation rate – only at the price of an ever-accelerating inflation rate, which must, at some point, become unsustainable. Governments had to accept that, though they might be able to influence unemployment for a short period (before people realized what was happening to inflation), they could not do this for long. Eventually unemployment would have to return to the natural rate. This completely undermined the basis for Keynesian demand-management policy.
It is interesting to note that the authors of the original Phillips-curve theory – Phillips, Lipsey, Samuelson and Solow – had all been well aware that wage increases would depend on expected inflation. Their economic theory told them this very clearly. However, in the late 1960s there was no such relationship in the data. Inflation was low and had changed little, with the result that there was no detectable relationship between expectations and wage increases. They thus dropped price inflation from their equations. By the early 1970s, however, after inflation rates had risen for a sustained period, econometric studies began to reveal a significant effect of expected inflation on wages, and by the mid 1970s the estimated effect was as strong as Friedman had predicted – a one-for-one response of wages to expected inflation. This provided empirical support for Friedman's position. From the late 1970s, therefore, economists began to accept that in the long run the Phillips curve must be vertical – that there was no trade-off between inflation and unemployment.
The theory of the expectations-augmented Phillips curve reinforced Friedman's earlier arguments over the quantity theory. If governments could not control unemployment and faced the danger of accelerating inflation, there was a strong case for using monetary policy to control the one variable they could control, namely the rate of inflation. This doctrine came to be known as ‘monetarism’, a term coined by one of its supporters, Karl Brunner (1916–89). Though this is simply a doctrine about the relationship between money and inflation, many of its supporters, such as Friedman, combined it with more general support for free markets and non-intervention. ‘Monetarism' therefore came to be associated, especially in the minds of non-economists, with measures such as privatization, deregulation, income-tax cuts and reductions in social-welfare provision. The meaning of the term became even looser where, as under Margaret Thatcher's government in Britain in the 1980s, attempts were made to implement so-called ‘monetarist' policies using methods (namely cuts in government spending) that were far removed from those advocated by Friedman. By this stage the term had become almost meaningless.
In the 1970s, in the wake of the first oil crisis, macroeconometric forecasting models began to forecast very badly. Attempts were made to repair them, introducing new equations and redesigning existing ones. However, such attempts were not very successful. It became clear that, despite the enormous resources that had been put into them, these models did not perform significantly better than much simpler ones. An explanation of why this was so was provided by Robert E. Lucas Jr (1937–) in 1976. The essential argument in what has come to be called the ‘Lucas critique' is that the behaviour of the private sector depends on people's expectations of what the government is going to do. For example, consumption patterns will depend on the tax and social-security policies that consumers expect to face. This means that a consumption function estimated under one tax regime will no longer work when tax policy changes. Thus, even if forecasting models offered accurate accounts of the way the economy operated when they were built in the 1960s, they were bound to break down when policy changed during the 1970s. Lucas concluded that a different type of model was required.
In a series of papers starting in 1972, Lucas argued that macroeconomic models ought to be based on the assumption that individuals were completely rational and that they took advantage of all opportunities open to them. He interpreted this to imply that all markets must be modelled as being in equilibrium, with supply equal to demand. If supply were greater than demand, for example, some suppliers would be unable to sell all the goods they wanted to sell. They would thus have an incentive to undercut their competitors, causing prices to fall, so bringing the market into equilibrium. To assume that markets were not in equilibrium, therefore, was to assume that people were not being fully rational. Similarly, he argued that if people were fully rational, their expectations would take account of all the information that was available to them. Here Lucas added the novel twist that modellers should assume that agents in their model know the true structure of the model. There are several ways in which this assumption can be justified, the most convincing of which is the argument that, if they do not do this, agents will make mistakes and change their behaviour. The only possible equilibrium, therefore, is one where people know the true model of the economy.
These two assumptions – known as ‘continuous market clearing' and ‘rational expectations' – have dramatic implications. They undermine the idea, basic to Keynesian economics, that people are unemployed because they cannot find work. Instead it is assumed that, if people would accept a lower wage rate, they would find work – that they have ‘chosen' to be unemployed, in that they have decided that the wage they would obtain from working is not enough to compensate them for the leisure they would lose. Fluctuations in output and employment arise because unanticipated shocks cause people to make mistakes in their estimates of inflation. It follows from this that systematic changes to government policy (such as following a rule that says expand the economy when unemployment is high and contract when unemployment gets low) will have no effect. The effects of such a rule will be predictable and hence will not affect output. The private sector will discount the policy changes in advance.
The business cycle presents a major challenge to such a theory. Though precise changes in output cannot be predicted, the economy generally follows a rough cyclical pattern of boom and slump, with the cycle lasting several years. In the 1970s Lucas tried to explain this as the result of monetary shocks. These would raise or lower demand, causing people to make mistakes that would cause output to fluctuate around its long-term trend. Much effort was put into measuring these shocks and explaining how they might produce fluctuations similar to those observed in the real world. Eventually, however, Lucas's explanation was abandoned in favour of one which explained the cycle in terms of ‘real' shocks – primarily shocks to technology (new inventions and so on). The result was the ‘real business cycle' theory first proposed by Finn Kydland (1943–) and Edward Prescott (1940–). This was based on the same assumptions as Lucas's theory – notably continuous market clearing and rational expectations – but differed in its assumptions about the source of shocks to the system and used a new set of econometric techniques (so-called ‘calibration' methods).
Though many economists remained sceptical about the extreme policy conclusions reached by what came to be called the ‘new classical macroeconomics’, the main thrust of the new classical argument – that economic models should assume fully rational behaviour – came to be widely accepted. Keynesians, who in the 1970s had been exploring models where markets were generally out of equilibrium and traders faced rationing, changed their research strategy. They started to search for explanations of unemployment that did not violate the assumption of rationality. They built models using assumptions such as asymmetric information (where firms cannot tell how productive a worker will be until after he or she has been hired) or imperfect competition (where firms or unions have power to influence the prices at which they buy or sell). These models were based on the main new-classical assumptions, but produced Keynesian conclusions.
The main reason why the new classical macroeconomics had such a big impact was that it was in many ways a natural development from what had been happening in microeconomics since the 1930s. There were two elements to this. The first was ever greater mathematical rigour in the analysis of problems. Enough simplifying assumptions were made to permit rigorous mathematical techniques to be applied to whatever problem was being analysed. The second was the modelling of individual behaviour in terms of optimization – assuming that firms maximized profits and individuals maximized utility. In such a world, everything rested, in the last resort, on technology and individual tastes. One result of this was that the distinction between microeconomics, dealing with the behaviour of individual firms and households, and macroeconomics, dealing with the economy as a whole, was broken down.
A field that exhibits certain parallels with macroeconomics is development economics. This emerged in its modern form after the Second World War. The United States – then clearly the dominant Western power – was anti-colonialist, and from the 1940s many colonies began to achieve independence, receiving a political voice through the United Nations. As a result the ‘colonial economics' of the inter-war period, with its stress on the development of resources by colonial powers, was clearly out of date. Attention had also been focused on the economics of underdeveloped countries during the war. Paul Rosenstein-Rodan (1902–85) had tackled the theory of underdevelopment, focusing on south-eastern Europe. The statistical work of Colin Clark and Simon Kuznets revealed, for the first time, the extent of income differences between rich and poor countries. Finally, governments in North America and western Europe were taking an active interest in measures that might be taken to promote growth (and capitalism) in the rest of the world, partly in response to competition with the Soviet Union. Various agencies associated with the United Nations had a commitment to economic development from the start, and when European reconstruction was completed, the World Bank turned its attention wholly to development. The Organization for European Economic Cooperation (OEEC) became the Organization for Economic Cooperation and Development (OECD).
There were strong links between Keynesian economics and early theorizing on problems of development. Keynesian economics was based on the presumption that economies could get stuck in situations of mass unemployment or underemployment (where workers have jobs but are not fully employed) from which they could not escape unaided. Underdeveloped countries were similarly thought to have become stuck in situations from which they needed assistance to escape. (The term ‘underdeveloped countries' is used here as it was the one used at the time. Since then, a series of euphemisms for poor countries has been used: ‘underdeveloped countries’, ‘less developed countries’, ‘developing countries’, ‘emergent nations’, ‘the Third World' and, most recently, ‘the South’.)
There were several theories about why this was the case. One of the most common focused on the difference between economy-wide growth and growth in a single sector of the economy. If a single industry (or a single business) were to expand, it would soon come up against barriers such as a lack of demand for its products and shortages of skilled labour. In contrast, if it were possible to engineer an expansion of the whole economy, each industry would create demand for other industries' products and would contribute to the growth of a pool of skilled labour on which all industries could draw. Such thinking underlay the theories of Rosenstein-Rodan, economic adviser to the World Bank in its early years, and Ragnar Nurkse (1907–59), an economist at the League of Nations who, after the war, became an advocate of the need for balanced growth.
Not all explanations of underdevelopment were of this type. At the UN Economic Commission for Latin America, Raúl Prebisch (1901–86) explained the contrast between rich and poor countries as being the outcome of unequal interaction between a ‘core’ of industrial countries, exporting mainly industrial goods, and a ‘periphery’ of poor countries, whose main exports were primary commodities. Because workers in industrial countries had strong bargaining power, productivity gains led to rising real wages. In contrast, workers in underdeveloped countries did not have such bargaining power and so were unable to translate productivity gains into wage rises. Instead, wages stayed the same and prices fell. This difference led to primary commodities becoming ever cheaper in relation to industrial goods. The terms on which trade took place thus became more and more favourable to industrial countries, and it became more difficult for countries in the periphery to escape from poverty. Prebisch drew the conclusion that development required state intervention to develop industries (protected by tariff barriers) that would compete with goods currently being imported – a strategy of ‘import substitution’.
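Prebisch's mechanism can be stated more compactly. As an illustrative sketch (the notation is mine, not Prebisch's), define the net barter terms of trade of the periphery in year $t$ as

```latex
% Net barter terms of trade of the periphery (illustrative notation)
T_t = \frac{P^{x}_{t}}{P^{m}_{t}}
```

where $P^{x}_{t}$ is an index of the periphery's export (primary-commodity) prices and $P^{m}_{t}$ an index of its import (industrial-goods) prices. On Prebisch's account, productivity gains in the periphery lower $P^{x}_{t}$, while wage bargaining in the core prevents $P^{m}_{t}$ from falling, so $T_t$ declines over time: each unit of exports buys steadily fewer imports.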
Other economists produced theories of ‘dualistic’ development. Arthur Lewis (1915–91), for example, distinguished between a modern sector in which firms maximized profits and used mechanized production methods and a ‘traditional’ sector in which family relationships ensured that everyone was employed on the land, even if their presence did not raise output. Economies that were split between sectors in this way were characterized by surplus labour in the traditional sector. Economic development involved the growth of the modern sector. Labour moved out of a sector in which its productivity was zero into one where it was productive.
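The logic of surplus labour can be put in a single equation (an illustration, not Lewis's own notation). If traditional-sector output is $Q = f(L)$, where $L$ is the number of workers on the land, surplus labour means that beyond some employment level $L^{*}$ the marginal product of labour is zero:

```latex
f'(L) = 0 \quad \text{for } L > L^{*}
```

Withdrawing one worker from a traditional sector where $L > L^{*}$ therefore leaves $f(L)$ unchanged, while the same worker adds a positive marginal product in the modern sector. Total output rises purely through the reallocation of labour, which is why the growth of the modern sector constituted development in this framework.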
Few of these theories went unchallenged. ‘Big-push’ balanced-growth theories, for example, were vigorously challenged by Albert Hirschman (1915–), who argued that development required disequilibrium – unbalanced growth. Expansion of a single industry would create opportunities for other industries and would promote the development of new activities. Prebisch's theories were also challenged: there was a prolonged debate over whether the statistical evidence supported the claim of a falling trend in commodity prices. Dualistic theories were vulnerable to the charge that it was in practice very difficult to identify sectors that were as different as the theories required.
The common feature of these theories is that they were ‘structural’ theories. They attributed the problem of underdevelopment either to the structure of the economies themselves or to the structure of the world economy. These structural features meant that the market mechanism would, on its own, be insufficient to ensure development. Planning and state intervention of some kind were a necessity. This fitted in with the Keynesian perspective in two ways. The first was that different types of theory were seen as being needed for different problems. Just as macroeconomics was needed as a subject distinct from microeconomics in order to tackle problems of unemployment, so development economics was needed to deal with problems specific to underdeveloped countries. The second was that it was believed that markets could not be left alone – that government intervention was necessary if market economies were to operate in a beneficial way.
In the 1970s, however, this way of thinking about development fell out of favour. Attempts to plan development – whether they involved import substitution, export promotion, balanced or unbalanced growth, or the creation of disequilibria – were not particularly successful. It also became increasingly clear that ‘developing’ countries were far from homogeneous – sub-Saharan Africa had problems that bore little if any relationship to those faced by South-East Asia or Latin America. It had become apparent that economic growth did not automatically reduce poverty. The result might simply be the emergence of an affluent modern sector amid poverty that was as great as before, or even greater. There was also an ideological shift against planning and in favour of solutions that placed greater emphasis on markets. The success stories of economic development were seen as arising from free-market economies such as those of Singapore, Taiwan and South Korea (even though these had strong and authoritarian governments that intervened actively in industry). The assumption, central to many structural theories of development, that people in developing countries behaved in some way differently from people in developed countries became harder to sustain. The result was an increasing tendency to apply to problems of developing countries the same analytical techniques as were being used to analyse problems of developed countries. Everyone, whether rich or poor, was assumed to behave according to the precepts of rational behaviour.
There was therefore a significant change in the way in which development was tackled in the 1970s. Grand theories, often based on Keynesian macroeconomics, increasingly gave way to microeconomic theories in which prices played a much greater role. In 1969 Ian Little and James Mirrlees (1936–) produced for the OECD a manual on project evaluation that presented techniques that were widely used. It was argued that projects should be evaluated not on the basis of market prices, which might be seriously distorted, but on so-called ‘shadow prices’ that reflected the constraints facing developing countries. In a similar vein, the concept of effective protection, first developed in the 1960s, came into more widespread use. Economists also focused more on the concept of poverty, seeking better ways to measure it. ‘Basic needs’ indices, taking account of factors such as nutrition levels, mortality and literacy rates, became more prominent. Economic growth, though still important, was no longer the sole criterion by which development was measured. The theoretical tools used were, as in macroeconomics, increasingly those of contemporary microeconomics. For example, in the 1970s development economists took up models of risk and incomplete information. In the 1980s, again following macroeconomics, these were extended to include imperfect competition and the latest developments in growth and trade theory. Parallel changes took place both in academia and in international organizations, though there was no uniformity, even among the latter. For example, in the 1970s the OECD and UNIDO (the UN Industrial Development Organization) took up Little–Mirrlees methods of project appraisal and cost-benefit analysis, but the World Bank did not.
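The difference shadow prices can make is easy to see in a stylized calculation (the numbers here are invented for illustration, not drawn from the Little–Mirrlees manual). A project is worth undertaking if its net present value, with benefits and costs valued at shadow rather than market prices, is positive:

```latex
NPV = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+\rho)^{t}}
```

where $B_t$ and $C_t$ are benefits and costs in year $t$ valued at shadow prices and $\rho$ is the discount rate. Suppose a project yields output worth 100 a year at world prices and employs labour costing 80 a year at the market wage, so that it barely covers its costs. If that labour is drawn from a pool of surplus workers whose shadow wage is only 40, the project's true annual net benefit is 60 rather than 20, and a project that looks marginal at market prices may be well worth undertaking.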
One of the main developments during the 1980s was the increasing prominence of the World Bank in setting the agenda for development. In 1980 it abandoned its earlier policy of lending only to finance specific projects and introduced ‘structural-adjustment lending’. This was lending designed to help countries get over medium-term balance-of-payments problems without impeding growth. Loans were made on condition that the borrowing countries implemented a programme of reform, including measures such as allowing exchange rates and interest rates to be determined by world markets, reducing the size of the public sector, deregulating markets, and removing controls on investment. This was based on the so-called ‘Washington consensus’ – the idea that development required free markets and a trade-oriented development strategy. The debt crisis of the early 1980s worsened the situation for many developing countries, and the World Bank's insistence that lending be accompanied by measures to liberalize trade and capital flows and open up domestic markets became a major issue. Critics of the World Bank argued that structural-adjustment policies served to place the burden of adjustment on the poor in developing countries, for the result would frequently be unemployment and cuts in public services. Supporters focused on the need for such reforms if developing countries' problems were to be solved.
The context of development economics changed even more dramatically with the fall of the Soviet Empire in 1989–91. Economists – including both academics and those in international organizations – turned on a large scale to problems of ‘transition’ and ‘emerging markets’. The establishment of market economies in eastern Europe and the former Soviet Union had clear parallels with the situation of ‘traditional’ developing countries facing structural adjustment. It was believed that, in the long run, the establishment of a market economy would raise living standards, but the short-term effects were high unemployment and extreme poverty alongside extreme affluence.
After the Second World War, economics became a much more technical subject, and mathematical techniques were systematically applied to all its branches. This was not a neutral development, but was accompanied by a transformation of the subject's content as theories were refined in such a way that they could be treated using the available mathematical tools. The meaning attached even to such basic terms as ‘competition’, ‘markets’ and ‘unemployment’ changed. These developments were something that could happen only in an academic environment, for many theories were developed that had only tenuous links, if any, with real-world problems. Comparisons with ‘basic’ or ‘blue-sky’ research (not aimed at any specific use) in science and medicine were used to justify such inquiries.
At the same time as economics became more technical, it also became more international. (Cause and effect are hard to tie down, but there were many causes of internationalization other than the spread of mathematical techniques.) While there are still many economists who can be identified with a single country, there are many who cannot. It became common for an economist to be born in one country, to study in another country (or in two other countries), and to spend his or her career moving between institutions in a variety of other countries. Communication networks have also become international. The result is that the nationality of economic ideas has become harder than ever to pin down – there is a real sense in which it has become a meaningless concept. Economic ideas have become essentially international. Even where schools have retained national labels (such as ‘Austrian’ economics) they have become international.
The country at the centre of this process was the United States. Universities, even in countries with long-established academic institutions, such as Germany and Britain, have increasingly modelled their graduate teaching on American practice. American textbooks have been widely used in all countries. American criteria for academic advancement, emphasizing the publication of articles in learned journals, have become widespread. In addition, because of the sheer size of the US academic system, American economics has increasingly dominated the pages even of European academic journals. Americans clearly dominate the list of Nobel Prize winners, and have been responsible for the most influential new ideas in the subject. The process therefore seems to be one of Americanization rather than internationalization. However, against this has to be set the fact that the ideas on which the current consensus is based have significant European roots: mathematical economics in German mathematics of the 1920s; econometrics in Tinbergen's work in the Netherlands; and macroeconomics, through Keynes, in Cambridge, England. In addition, one of the reasons for the apparent American dominance has been the migration of economists from Europe and elsewhere in the first half of the twentieth century (see p. 207). Many of the key players in the transformation of the subject came from German-speaking countries or eastern Europe. If economics has been Americanized, there is a sense in which this is because the American academic system has been so large, so wealthy and so open to international influences.
This, however, is only one side of the story of economics becoming more technical. The other is the increased involvement of economists in government, international organizations and business. Economists have come to be seen as technical experts whose advice is essential to decision-making – a process greatly stimulated by the Second World War. This has gone beyond simply forecasting, though that remains important. Especially in the United States, where the process has perhaps gone further than elsewhere, economists are regularly used in tasks such as designing the rules by which industries are to be regulated or the procedures by which franchises are to be sold. During the 1990s they were heavily involved in designing measures to protect the environment. In some fields, ideas were developed in academia and then applied by economists working in government or business, as one might expect. However, this simple relationship is not always found. Macroeconomics and development are two fields where it is hard to draw a clear line between research done in academia and research done in government, central banks and international organizations. Research in these fields has been dominated by policy problems, and there has been continual interaction between economists in universities and in other organizations, with many staff moving back and forth between different types of institution. There has therefore been a convergence between, for example, the ways in which central banks and academic economists think about monetary policy, and in ways of tackling economic development.
The academic environment, dominated by the United States, in which economic ideas were developed in the second half of the twentieth century is very important. The way in which economic thought developed during this period cannot be understood apart from it. However, the role of economists as policy advisers should not be neglected, especially in particular fields.