In 1944, in An American Dilemma, Gunnar Myrdal had forecast that in order to advance the situation of Negroes, the U.S. courts would need to take up the cause. In the years between the mid-1950s and the mid-1970s, that is exactly what happened, but then a reaction set in. President Richard Nixon and Vice President Spiro Agnew asked whether the U.S. Supreme Court was not (a) taking decisions that were more properly the work of government, making political decisions masquerading as legal ones, and (b) flagrantly disregarding the views of the ‘silent majority,’ sowing social tension by always putting the minorities first.
President Nixon and Vice President Agnew left office in disgrace, and so their personal involvement in these debates was compromised. But the issues were real enough. They were addressed, in detail, by Ronald Dworkin, a professor at New York University’s law school, in Taking Rights Seriously, which was published in 1977.1 Dworkin’s book was an examination of the way law evolves and itself an example of that evolution. It combined thinking about law, moral philosophy, linguistic philosophy, politics, and political economy, and took into account recent developments in civil rights, women’s liberation, homosexual emancipation, and the theories and arguments of Ludwig Wittgenstein, Herbert Marcuse, Willard van Orman Quine, and even R. D. Laing. But Dworkin’s main aim was to clarify certain legal concepts in the wake of the civil rights movement, Rawls’s theory of justice, and Berlin’s notions of negative and positive freedom. In doing so, Dworkin offered a defence of civil disobedience and a legal justification for reverse discrimination, and argued, most fundamentally, that there is no right to general liberty, when by liberty is meant licence. Instead, Dworkin argued, the most basic right (insofar as the phrase makes sense) is that of the individual against the state, and this is best understood as the right to be treated as an equal with everyone else. For Dworkin, in other words, equality before the law came ahead of everything else, and above all other meanings of freedom.
Besides considering the great social/legal problems of the day, Dworkin grounded his work in the all-important question of how, in a democracy, the rights of the majority, the minorities, and the state can be maintained. As in the earlier exchange between Rawls and Nozick, he rejected utilitarian notions (that such-and-such a law would benefit the greatest number) in favour of the ideal (that fairness be seen as the greatest common good). He was suspicious of Isaiah Berlin’s notion of ‘negative’ freedom as the basic form.2 Berlin, it will be recalled, defined negative freedom as the right to be left alone, unconstrained, whereas positive freedom was the right to be appreciated as whatever kind of person one wished to be appreciated as. Dworkin thought that provided one had equality before the law, this distinction of Berlin’s turned out to be false, and that therefore, in a sense, law preceded politics. (This was reminiscent of Friedrich von Hayek’s view that the ‘spontaneous’ way that man has worked out his system of laws precedes any political party.) On Dworkin’s analysis, equality before the law precluded a general right to property, which Hayek and Berlin thought was a sine qua non of freedom. Dworkin arrived at his view because, as the title of his book suggests, he thought that rights are serious in a modern society, and that without taking rights seriously, the law cannot be serious.3 (His book was also a specific reply to Vice President Agnew, who in a speech had argued that ‘rights are divisive,’ that liberals’ concern for individuals’ rights ‘was a headwind blowing in the face of the ship of state,’ not so very different from President Nixon’s comments about the silent majority.)
As Dworkin put it at the end of his central chapter, ‘If we want our laws and legal institutions to provide the ground rules within which these [social and political] issues will be contested then these ground rules must not be the conqueror’s law that the dominant class imposes on the weaker, as Marx supposed the law of a capitalist society must be. The bulk of the law – that part which defines and implements social, economic, and foreign policy – cannot be neutral. It must state, in its greatest part, the majority’s view of the common good. The institution of rights is therefore crucial, because it represents the majority’s promise to the minorities that their dignity and equality will be respected…. The Government will not re-establish respect for law without giving the law some claim to respect. It cannot do that if it neglects the one feature that distinguishes law from ordered brutality. If the Government does not take rights seriously, then it does not take the law seriously either.’4 Dworkin’s conclusion that in the modern age, the post-1960s age, the right to be treated equally – by government – was a prerequisite of all freedoms, was congenial to most liberals.
An alternative view of the legacy of the 1960s and 1970s, the freedom and the equalities the period had produced, and a very different picture of the laws that had been passed, came from two conservative economists in Chicago. Milton and Rose Friedman took equality before the law for granted but thought that freedom could only be guaranteed if there were economic freedom, if men and women were ‘free to choose’ – the title of their 1980 book – the way they earned their living, the price they paid for the goods they wished to buy, and the wage they were willing to pay anyone who worked for them.5 Milton Friedman had advanced very similar views two decades before, in Capitalism and Freedom, published in 1962, and considered in chapter 30 (see page 519). He and his wife returned to the subject, they said, because they were anxious that during the interim, ‘big government’ had grown to the point where its mushrooming legal infrastructure, much of it concerned with ‘rights,’ seriously interfered with people’s lives, because unemployment and inflation were growing to unacceptable levels in the West, and because they felt that, as they put it, ‘the tide [was] turning,’ that people were growing tired and sceptical of the ‘liberal’ approach to economics and government in the West, and looking for a new direction.6
Free to Choose, as its authors were at pains to point out, was a much more practical and concrete book than Capitalism and Freedom. The Friedmans had specific targets and specific villains in their view of the world. They began by reanalysing the 1929 stock market crash and the ensuing depression. Their aim was to counter the view that these two events amounted to the collapse of capitalism, and that the capitalist system was responsible for the failure of so many banks and the longest-running depression the world has ever known. They argued that there had been mismanagement of particular banks, above all the Bank of the United States, which closed its doors on 11 December 1930, the largest financial institution ever to fail in the history of the U.S. Although a rescue plan had been drawn up for this bank, anti-Semitism on the part of ‘leading members of the banking community’ in New York was at least partly responsible for the plan not being put into effect. The Bank of the United States was run by Jews, servicing mainly the Jewish community, and the rescue plan envisaged it merging with another Jewish bank. But, according to Friedman (himself Jewish), this could not be stomached ‘in an industry that, more than almost any other, has been the preserve of the well-born and the well placed.’7 This sociological – rather than economic – failure was followed by others: by Britain abandoning the gold standard in 1931, by mismanagement of the Federal Reserve System’s response to the various crises, and by the interregnum between Herbert Hoover’s presidency and Franklin Roosevelt’s in 1933, when neither man would take any action in the economic sphere for a period of three months. On the Friedman analysis, therefore, the great crash and the depression were more the result of technical mismanagement than anything fundamental to capitalism per se.
The crash/depression was important, however, because it was followed so soon by world war, when the intellectual climate changed: people saw – or thought they could see – that cooperation worked, rather than competition; the idea of a welfare state caught on in wartime and set the tone for government between 1945 and, say, 1980. But, and this was the main point of the Friedmans’ book, ‘New Deal liberalism,’ as they called it, and Keynesianism, didn’t work (though they were relatively easy on Keynes: even President Nixon had declared, ‘We are all Keynesians now’). They looked at schools, at the unions, at consumer protection, and at inflation, and found that in all cases free-market capitalism not only produced a more efficient society but created greater freedom, greater equality, and more public benefit overall: ‘Nowhere is the gap between rich and poor wider, nowhere are the rich richer and the poor poorer, than in those societies that do not permit the free market to operate. That is true of mediaeval societies like Europe, India before independence, and much of modern South America, where inherited status determines position. It is equally true of centrally planned societies, like Russia or China or India since independence, where access to government determines position. It is true even where central planning was introduced, as in all three of these countries, in the name of equality.’8 Even in the Western democracies, the Friedmans said, echoing something first observed by Irving Kristol, a ‘new class’ has arisen – government bureaucrats or academics whose research is supported by government funds, who are privileged but preach equality. ‘They remind us very much of the old, if unfair, saw about the Quakers: “They came to the New World to do good, and ended up doing well.” ’9
The Friedmans gave many instances of how capitalism promotes freedom, equality, and the wider spread of benefits. In attacking the unions they did not confine themselves to the ‘labour’ unions but focused on middle-class unions as well, such as the doctors, and quoted the case of the introduction of ‘paramedics’ in one district of California. This had been vigorously opposed by doctors – ostensibly because only properly trained medical personnel could cope, but really because they wanted to limit entry to the profession, to keep up their salaries. In fact, the number of people surviving cardiac arrest rose in the first six months after the introduction of paramedics from 1 percent to 23 percent. In the case of consumer rights, the Friedmans claimed that in America there was far too much government legislation interfering with the free market, one result being a ‘drug lag’: the United States had dropped behind countries like Great Britain in the introduction of new drugs – they specifically referred to beta-blockers. The introduction of new drugs to the market, for example, had fallen, they said, by about 50 percent since 1962, mainly because the cost of testing their effects on the consumer had risen disproportionately. The Friedmans considered that government response to exposés like Rachel Carson’s had been too enthusiastic; ‘all the movements of the past two decades – the consumer movement, the ecology movement, the back-to-the-land movement, the hippie movement, the organic-food movement, the protect-the-wilderness movement, the zero-population-growth movement, the “small is beautiful” movement, the anti-nuclear movement – have had one thing in common. All have been anti-growth. They have been opposed to new developments, to industrial innovation, to the increased use of natural resources.’10 It was time, the Friedmans argued, to shout that enough was enough, that the forces for control, for ‘rights,’ had gone too far.
At the end of their book, however, the Friedmans said they thought a change was coming, that many people wanted ‘big government’ rolled back. In particular, they pointed to the election of Margaret Thatcher in Britain in 1979, on a promise ‘to haul back the frontiers of the state,’ and to the revolt in America against the government monopoly of the postal service. They ended by calling for an amendment to the U.S. Constitution, for what would in effect be an Economic Bill of Rights that would force the government to limit federal spending.
Why this change in public mood? The main reason, alluded to in an earlier chapter, was that following the oil crisis in 1973–74, the long stagnation in the living standards of the West produced a growing dissatisfaction. As the economist Paul Krugman of MIT described it, the ‘magic’ of the Western economies, their ever-higher standards of living, went away in 1973. It took time for these trends to emerge, but as they did, certain academic economists, notably Martin Feldstein of Harvard, began to document the negative effects of taxation and government expenditure on investment and savings.11 Friedman actually predicted that there would come a time of stagnation – zero growth – combined with inflation, which according to the Keynesian orthodoxy of the day couldn’t happen. Paul Samuelson gave this phenomenon its name, ‘stagflation,’ but it was Friedman, rightly, who received the Nobel Prize for the insight. Where Friedman and Feldstein led, others soon followed, and by the late 1970s there emerged a hard core of ‘supply-side’ economists who rejected Keynesianism and believed that a sharp reduction in taxation, meaning that more money would be ‘supplied’ to the economy, would produce such a surge in growth that there was no need to worry about expenditure. These ideas were behind the election of Margaret Thatcher in the United Kingdom in 1979, and of Ronald Reagan as president of the United States a year later. In the United States the Reagan years were marked by massive budget deficits, which were still being paid for in the 1990s, but also by a striking rally on Wall Street, which faltered between 1987 and 1992 but then recovered.
In Britain, in addition to a similar rise in the stock market, there was also an important series of policy initiatives, known as privatisation, in which mainly public utilities were returned to private hands.12 In social, economic, and political terms, privatisation was a huge success, transforming ungainly and outdated businesses into modern, efficient corporations where, in some cases at least, real costs to the consumer fell. The idea of privatisation was widely exported – to Western Europe, Eastern Europe, Asia, and Africa.
Nevertheless, despite all that was happening on the stock markets, the growth performance of the major Western economies remained unimpressive, certainly in comparison with pre-1973 levels. At the same time there was a major jump in the inequalities of wealth distribution. In the 1980s growth and inequality were the two main theoretical issues that concerned economists, much more so than politicians, Western politicians anyway.
Traditionally, three reasons are given for the slowdown in growth after the oil crisis. The first is technological. MIT’s Robert Solow was the first economist to show exactly how this worked (he won a Nobel Prize in 1987). In his view, productivity growth comes from technological innovation – what is known in economics now as the Solow Residual.13 Many technological breakthroughs matured in World War II, and in the period of peace and stability that followed, these innovations came to fruition as products. However, all of these high-tech goods – the jet, television, washing machines, long-playing records, portable radios, the car – once they had achieved saturation, and once they had developed to a certain point of sophistication, could no longer generate innovation worth the name, and by around 1970 the advances in technology were slowing down. Paul Krugman, in his history of economics, underlines this point with a reference to the Boeing 747, which first came into service in 1969 and in 2000 was still the backbone of many airline fleets. The second reason for the slowdown in growth was sociological. In the 1960s the baby-boom generation reached maturity. During that same decade many of the assumptions of capitalism came under attack, and several commentators observed a decline thereafter in educational standards. As Krugman wrote, ‘The expansion of the underclass has been a significant drag on US growth…. There is a plausible case to be made that social problems – the loss of economic drive among the children of the middle class, the declining standards of education, the rise of the underclass – played a significant role in the productivity slowdown. This story is very different from the technological explanation; yet it has in common with that story a fatalistic feel…. [It] would seem to suggest that we should learn to live with slow productivity growth, not demand that the government somehow turn it around.’14 The third explanation is political.
This is the Friedman argument that government policies were responsible for the slow growth and that only a reduction in taxes and a rolling back of regulations would free up the forces needed for growth to recur. Of these three, the last, because it was overtly political, was the most amenable to change. The Thatcher government and the Reagan administration both sought to follow monetarist and supply-side policies. Feldstein was himself taken into the Reagan White House.
Ironically, however, again as Paul Krugman makes clear, 1980 was actually the high point of conservative economics, and since then ideas have moved on once more, concentrating on the more fundamental forces behind growth and inequality.15 The two dominant centres of economic thinking, certainly in the United States, have been Chicago and Cambridge, Massachusetts – home to Harvard and MIT. Whereas Chicago was associated primarily with conservative economics, Cambridge, in the form of Feldstein, Galbraith, Samuelson, Solow, Krugman, and Sen (now at Cambridge, England), embraced both worldviews.
After his discovery of the ‘residual’ named after him, Robert Solow’s interest in understanding growth, its relation to welfare, work, and unemployment, is perhaps the best example of what currently concerns theoretical economists involved with macroeconomics (as opposed to the economics of specific, closed systems). The ideas of Solow and others, fashioned in the 1950s and 1960s, coalesced into Old Growth Theory.16 This said essentially that growth was fuelled by technological innovation, that no one could predict when such innovation would arise, and that the gain produced would be temporary, in the sense that there would be a rise in prosperity but it would level off after a while. This idea was refined by Kenneth Arrow at Stanford, who showed that there was a further gain to be made – of about 30 percent – because workers learned on the job: they became more skilled, enabling tasks to be completed faster and with fewer workers. This meant that prosperity lasted longer, but even here diminishing returns applied, and growth levelled off.17
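The growth-accounting arithmetic behind Solow’s residual can be sketched in a few lines. All the figures below, including the capital share, are invented for illustration; they are not historical data:

```python
# Growth accounting in the spirit of Solow: the part of output growth
# not explained by growth in the capital and labour inputs is the
# 'residual', attributed to technological progress.

def solow_residual(output_growth, capital_growth, labour_growth, capital_share=0.3):
    """Residual = output growth minus the share-weighted growth of inputs."""
    return (output_growth
            - capital_share * capital_growth
            - (1 - capital_share) * labour_growth)

# Illustrative figures: 4% output growth, 3% capital growth, 1% labour growth.
residual = solow_residual(0.04, 0.03, 0.01)
print(round(residual, 4))  # 0.024 -> 2.4 points of growth left to 'technology'
```

On this arithmetic, when innovation dries up the residual shrinks towards zero and measured growth falls back to whatever the inputs alone can deliver, which is the levelling-off the Old Growth theorists described.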
New Growth Theory, which emerged in the 1980s, pioneered by Robert Lucas at Chicago but added to by Solow himself, argued that on the contrary, substantial investment by government and private initiative can ensure sustained growth because, apart from anything else, it results in a more educated and better motivated workforce, who realise the importance of innovation.18 This idea was remarkable for two reasons. In the first place Lucas came from conservative Chicago, yet was making a case for more government intervention and expenditure. Second, it marked the coming together of sociology, social psychology, and economics: a final recognition of David Riesman’s argument in The Lonely Crowd, which had shown that ‘other-directed’ people loved innovation. It is too soon to say whether New Growth Theory will turn out to be right.19 The explosion of computer technology and biotechnology in the 1990s, the ease with which new ideas have been accepted, certainly suggests that it might do. Which makes it all the more curious that Margaret Thatcher railed so much against the universities while she was in power. Universities are one of the main ways governments can help fuel technological innovation and therefore stimulate growth.
Milton and Rose Friedman, and the Chicago school in general, based their theories on what they called the key insight of the Scotsman Adam Smith, ‘the father of modern economics,’ who wrote The Wealth of Nations in 1776. ‘Adam Smith’s key insight was that both parties to an exchange can benefit and that, so long as cooperation is strictly voluntary, no exchange will take place unless both parties do benefit.’20 Free-market economics, therefore, not only work: they have an ethical base.
There was, however, a rival strand of economic thinking that did not share the Friedmans’ faith in the open market system. There was little space in Free to Choose for a consideration of poverty, which the Friedmans thought in any case would be drastically reduced if their system were allowed full rein. But many other economists were worried about economic inequality, the more so after John Rawls and Ronald Dworkin had written their books. The man who came to represent these other economists was Amartya Sen, an Indian academic trained at Oxford and Cambridge. In a prolific series of papers and books Sen, who later held joint appointments at Harvard and Cambridge, attempted to move economics away from what he saw as the narrow interests of the Friedmans and the monetarists. One area he promoted was ‘welfare economics,’ in effect economics that looked beyond the operation of the market to scrutinise the institution of poverty and the concept of ‘need.’ Many of Sen’s articles were highly technical mathematical exercises, as he attempted to measure poverty and different types of need. A classic Sen problem, for example, would be trying to calculate who was worse off: someone with more income but with a chronic health problem, for which treatment had to be regularly sought and paid for, or someone with less income but better health.
Sen’s first achievement was the development of various technical measures which enabled governments to calculate how many poor people there were within their jurisdiction, and what exactly the level of need was in various categories. These were no mean accomplishments, but he himself called them ‘engineering problems,’ with ‘nuts and bolts’ solutions. Here too economics and sociology came together. Of wider relevance were two other ideas that contributed equally to his winning the Nobel Prize for Economics in 1998. The first of these was his marriage of economics and ethics. Sen took as a starting point an apparent paradox he had observed: many people who were not poor were nevertheless interested in the problem of poverty, and its removal, not because they thought it was more efficient to remove it, but because it was wrong. In other words, individuals often behaved ethically, without putting their own self-interest first. This, he noted, went against not only the ideas of economists like the Friedmans but also those of some evolutionary thinkers, like Edward O. Wilson and Richard Dawkins. In his book On Ethics and Economics (1987), Sen quoted the well-known Prisoners’ Dilemma game, which Dawkins also made so much of in The Selfish Gene. Sen noted that, while cooperation might be preferable in the evolutionary context, in the industrial or commercial setting the selfish strategy is theoretically what pays any single person, seen from that person’s point of view. In practice, however, various cooperative strategies are invariably adopted, because people have notions of other people’s rights, as well as their own; they have a sense of community, which they want to continue. In other words, people do have a general ethical view of life that is not purely selfish. He thought these findings had implications for the economic organisation of society, taxation structure, financial assistance to the poor, and the recognition of social needs.21
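The logic of the Prisoners’ Dilemma that Sen invokes can be laid out in a few lines. The payoff numbers below are the conventional textbook ones, chosen purely for illustration:

```python
# A minimal Prisoners' Dilemma payoff table (illustrative payoffs).
# Each entry gives (payoff to A, payoff to B); higher is better.
PAYOFFS = {
    ('cooperate', 'cooperate'): (3, 3),
    ('cooperate', 'defect'):    (0, 5),
    ('defect',    'cooperate'): (5, 0),
    ('defect',    'defect'):    (1, 1),
}

def best_reply(opponent_move):
    """A's selfish best reply, holding the opponent's move fixed."""
    return max(('cooperate', 'defect'),
               key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

# Defection pays the individual whatever the other player does...
assert best_reply('cooperate') == 'defect'
assert best_reply('defect') == 'defect'
# ...yet mutual defection (1, 1) leaves both players worse off
# than mutual cooperation (3, 3).
```

This is exactly Sen’s point: the selfish strategy is individually dominant on paper, yet real communities routinely settle on the cooperative outcome, because people weigh others’ claims alongside their own.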
But the work of Sen’s that really caught the world’s imagination was Poverty and Famines, his 1981 report for the World Employment Programme of the International Labour Organisation, written when he was professor of political economy at Oxford and a Fellow of All Souls.22 The subtitle of Sen’s book was ‘An Essay on Entitlement and Deprivation,’ which brings us back to Dworkin’s concept of rights. In his report, Sen examined four major famines – the Great Bengal Famine in 1943, when about 1.5 million people starved to death; the Ethiopian famines of 1972–74 (more than 100,000 deaths); the 1973 drought and famine in the Sahel (100,000 dead); and the 1974 flood and famine in Bangladesh (figures vary from 26,000 to 100,000 dead). His most important finding was that, in each case, in the areas most affected, there was no significant decline in the availability of food (FAD, for ‘food availability decline’ in the jargon); in fact, in many cases, and in many of the regions where famine was occurring, food production, and food production per capita, actually rose (e.g., in Ethiopia, barley, maize, and sorghum production was above normal in six out of fourteen provinces).23 Instead, what Sen found typically happened in a famine was that a natural disaster, like a flood or a drought, (a) made people think there would be a shortage of food, and (b) at the same time affected the ability of certain sectors of the population – peasants, labourers, agricultural workers – to earn money. Possessors of food hoard what they have, and so the price rises at the very time large segments of the population suffer a substantial fall in income. Either the floods mean there is no work to be had on the land, or drought causes the poor to be evicted from where they are living, because they can’t grow enough to earn enough to pay the rent. But the chief factor is, as Sen phrases it, a fall in ‘entitlement’: they have less and less to exchange for food. 
It is a failure of the market system, which operates on what people think is happening, or soon will happen. But, objectively, in terms of the aggregate food availability, the market is wrong. Sen’s analysis was startling, partly because, as he himself said, it was counterintuitive, going against the grain of common sense, but also because it showed how the market could make a bad situation worse. Apart from helping governments understand in a practical way how famines develop, and therefore might be avoided or the effects mitigated, his empirical results highlighted some special limitations of the free-market philosophy and its ethical base. Famines might be a special case, but they affect a lot of people.
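Sen’s entitlement mechanism reduces to a simple piece of arithmetic: a household’s command over food is its income divided by the price of food, whatever the aggregate supply. A toy sketch, with every figure invented for illustration:

```python
# Toy illustration of Sen's 'exchange entitlement': what a household can
# actually obtain is income / price, regardless of how much food exists
# in aggregate. All numbers are invented.

def food_entitlement(daily_income, price_per_kg):
    """Kilograms of food a household can command per day."""
    return daily_income / price_per_kg

# Before the crisis: a labourer earning 10 units a day, grain at 2 units/kg.
before = food_entitlement(10, 2)   # 5.0 kg/day

# The disaster: work dries up (income halves) while hoarding and panic
# double the price -- even though the aggregate food supply is unchanged.
after = food_entitlement(5, 4)     # 1.25 kg/day

print(before, after)  # entitlement falls by 75% with no fall in food availability
```

The sketch shows why food-availability statistics can look normal while famine spreads: it is the collapse of the ratio, not of the harvest, that starves people.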
In his economic history of the last quarter of the century, Peddling Prosperity, the MIT economist Paul Krugman charts the rise of right-wing economics and then describes its declining influence in the 1980s, devoting the last third of his book to the revival of Keynesianism (albeit in new clothes) in the late 1980s and 1990s.24 Krugman’s account described the failure of such right-wing doctrines as ‘business-cycle’ theory, and the drag on the U.S. economy brought about by the huge budget deficits, the result of Ronald Reagan’s various monetarist policies. He similarly took to task the ideas of more recent, more liberal economic thinkers such as Lester Thurow, in The Zero-Sum Society (1980), and notions of ‘strategic trade’ put forward by the Canadian economist James Brander and his Australian coauthor, Barbara Spencer. Strategic trade views countries as similar to companies – corporations – who seek to ‘place’ their economy in a strategic position vis-à-vis other economies. This view held sway in the Clinton White House, at least for a while – until Larry Summers became Treasury secretary in 1999 – but it was misplaced, argues Krugman: countries are not companies and do not necessarily need to compete to survive and prosper. Such apparently intelligent thinking is in any case doomed to failure because, as most research in the 1980s and 1990s showed, people behave not in a perfectly rational way, as classical economists always claimed, but in a ‘near-rational’ way, thinking mainly of the short term and using only such information as comes their way easily. For Krugman, this recent insight is an advance because it means that individual decisions, each one taken sensibly, can have disastrous collective consequences (in short, this is why recessions occur).
Krugman therefore allies himself with the new Keynesians who believe that some government intervention in macroeconomic matters is essential to exert an influence on invention, inflation, unemployment, and international trade. But Krugman’s conclusion, in the mid-1990s, was that the two main economic problems still outstanding were slow growth and productivity on the one hand, and rising poverty on the other: ‘Everything else is either of secondary importance, or a non-issue.’25 This brings us to a familiar name: J. K. Galbraith.
Among professional economists, J. K. Galbraith is sometimes disparaged as much less influential with his colleagues than with the general public. That is to do him a disservice. In his many books he has used his ‘insider’ status as a trained economist to make some uncomfortable observations on the changing nature of society and the role economics plays in that change. Though Galbraith was born in 1908, this characteristic showed no sign of flagging in the last decade of the century, and in 1992 he published The Culture of Contentment and, four years later, The Good Society: The Human Agenda, his eighteenth and twentieth books. (There was another, Name Dropping, a memoir, in 1999.)
The title The Culture of Contentment is a deliberate misnomer. Galbraith is using irony here, irony little short of sarcasm.26 What he really means is the culture of smugness. His argument is that until the mid-1970s, round about the oil crisis, the Western democracies accepted the idea of a mixed economy, and with that went economic and social progress. Since then, however, a prominent class has emerged, materially comfortable and even very rich, which, far from trying to help the less fortunate, has developed a whole infrastructure – political and intellectual – to marginalise and even demonise them. Aspects of this include tax reductions for the better off and welfare cuts for the worse off, small, ‘manageable wars’ to maintain the unifying force of a common enemy, the idea of ‘unmitigated laissez-faire as the embodiment of freedom,’ and a desire for a cutback in government. The more important collective end result of all this, Galbraith says, is a blindness and a deafness among the ‘contented’ to the growing problems of society. While they are content to spend, or have spent in their name, trillions of dollars to defeat relatively minor enemy figures (Gaddafi, Noriega, Milosevic), they are extremely unwilling to spend money on the underclass nearer home. In a startling paragraph, he quotes figures to show that ‘the number of Americans living below the poverty line increased by 28 per cent in just ten years, from 24.5 million in 1978 to 32 million in 1988. By then, nearly one in five children was born in poverty in the United States, more than twice as high a proportion as in Canada or Germany.’27
Galbraith reserves special ire for Charles Murray. Murray, a Bradley Fellow at the American Enterprise Institute, a right-wing think tank in Washington, D.C., produced a controversial but well-documented book in 1984 called Losing Ground.28 This examined American social policy from 1950 to 1980 and took the position that in the 1950s the situation of blacks in America was in fact fast improving; that many of the statistics said to show they were discriminated against actually showed no such thing, only that they were poor; that a minority of blacks pulled ahead of the rest as the 1960s and 1970s passed, while the bulk remained behind; and that by and large the social initiatives of the Great Society not only failed but made things worse, because they were in essence fake – offering fake incentives, fake curricula in schools, fake diplomas in colleges – and changed nothing. Murray allied himself with what he called ‘the popular wisdom,’ rather than the wisdom of intellectuals or social scientists.
This popular wisdom had three core premises: people respond to incentives and disincentives – sticks and carrots work; people are not inherently hardworking or moral – in the absence of countervailing influences, people will avoid work and be amoral; and people must be held responsible for their actions – whether they are responsible in some ultimate philosophical or biochemical sense cannot be the issue if society is to function.29 His charts, showing for instance that black entry into the labour force was increasing steadily in the 1955–80 period, that black wages were rising, and that the entry of black children into schools was increasing, went against the grain of the prevailing (expert) wisdom of the time, as did his analysis of illegitimate births, which showed that there was partly an element of ‘poor’ behaviour in the figures and partly an element of ‘race.’30 But overall his message was that the situation in America in the 1950s, though not perfect, was fast improving and ought to have been left alone, to get even better, whereas the Great Society intervention had actually made things worse.
For Galbraith, Murray’s aim was clear: he wanted to get the poor off the federal budget and tax system and ‘off the consciences of the comfortable.’31 He confirmed this theme in The Good Society (1996). Galbraith could never be an ‘angry’ writer; he is almost Chekhovian in his restraint. But in The Good Society, no less than in The Culture of Contentment, his contempt for his opponents is there, albeit in polite disguise. The significance of The Good Society, and what links many of the ideas considered in this chapter, is that it is a book by an economist in which economics is presented as the servant of the people, not the engine.32 Galbraith’s agenda for the good society is unashamedly left of centre; he regards the right-wing orthodoxies, or would-be orthodoxies, of 1975–90, say, as a mere dead end, a blind alley. It is now time to get back to the real agenda, which is to re-create the high-growth, low-unemployment, low-inflation societies of the post-World War II era, not for the sake of it, but because they were more civilised times, producing social and moral progress before a mini-Dark Age of selfishness, greed, and sanctimony.33 It is far from certain that Galbraith was listened to as much as he would have liked, or as much as he would have been earlier. Poverty, particularly poverty in the United States, remained a ‘hidden’ issue in the last years of the century, seemingly incapable of moving or shaking the contented classes.
The issue of race was more complicated. It was hardly an ‘invisible’ matter, and at a certain level – in the media, among professional politicians, in literature – the advances made by blacks and other minorities were there for all to see. And yet in a mass society the media are very inconsistent in the picture they paint; the more profound truths are often revealed in less compelling and less entertaining forms, in particular through statistics. It is in this context that Andrew Hacker’s Two Nations: Black and White, Separate, Hostile, Unequal, published in 1992, was so shattering.34 It returns us not only to the beginning of this chapter, and the discussion of rights, but to the civil rights movement, to Gunnar Myrdal, Charles Johnson, and W. E. B. Du Bois. Hacker’s message was that some things in America have not changed.
A professor of political science at Queens College in New York City, Andrew Hacker probably understands the U.S. Census figures better than anyone outside the government, and he lets the figures lead his argument. He has been analysing America’s social and racial statistics for a number of years, and is no firebrand but a reserved, even astringent academic, not given to hyperbole or rhetorical flourishes. He publishes his startling (and stark) conclusions mainly in the New York Review of Books, but Two Nations was more chilling than anything in the Review. His argument was so shocking that Hacker and his editors apparently felt the need to cushion his central chapters with several ‘softer’ introductory chapters, which put his figures into context, exploring and seeking to explain racism and the fact of being black in an anecdotal way, to prepare the reader for what was to come. The argument was in two parts. The figures showed not only that America was still deeply divided, after decades – a century – of effort, but that in many ways the situation had deteriorated since Myrdal’s day, despite what had been achieved by the civil rights movement. Dip into Hacker’s book at almost any page, and his results are disturbing.
In other words, the situation in 1993 was, relatively speaking, no better than in 1950.35
‘The real problem in our time,’ wrote Hacker, ‘is that more and more black infants are being born to mothers who are immature and poor. Compared with white women – most of whom are older and more comfortably off – black women are twice as likely to have anemic conditions during pregnancy, twice as likely to have no prenatal care, and twice as likely to give birth to low-weight babies. Twice as many of their children develop serious health problems, including asthma, deafness, retardation, and learning disabilities, as well as conditions stemming from their own use of drugs and alcohol during pregnancy.’36 ‘Measured in economic terms, the last two decades have not been auspicious ones for Americans of any race. Between 1970 and 1992, the median income for white families, computed in constant dollars, rose from $34,773 to $38,909, an increase of 11.9 percent. During this time black family income in fact went down a few dollars, from $21,330 to $21,161. In relative terms, black incomes fell from $613 to $544 for each $1,000 received by whites.’37
Hacker devotes a long chapter to crime, but his figures on school desegregation were more startling. In the early 1990s, 63.2 percent of all black children, nearly two out of three, were still in segregated schools. In some states the percentage of blacks in segregated schools was as high as 84 percent. Hacker’s conclusion was sombre: ‘In allocating responsibility, the response should be clear. It is white America that has made being black so disconsolate an estate. Legal slavery may be in the past, but segregation and subordination have been allowed to persist. Even today, America imposes a stigma on every black child at birth…. A huge racial chasm remains, and there are few signs that the coming century will see it closed. A century and a quarter after slavery, white America continues to ask of its black citizens an extra patience and perseverance that whites have never required of themselves. So the question for white Americans is essentially moral: is it right to impose on members of an entire race a lesser start in life and then to expect from them a degree of resolution that has never been demanded from your own race?’38
The oil crisis of 1973–74 surely proved Friedrich von Hayek and Milton Friedman right in at least one respect. Economic freedom, even if not the most basic of freedoms, as Ronald Dworkin argued, is still pretty fundamental. Since the oil crisis, and the economic transformation it ignited, many areas of life in the West – politics, psychology, moral philosophy, and sociology – have been refashioned. The works of Galbraith, Sen, and Hacker, or more accurately the failure of these works to stimulate, say, the kind of popular (as opposed to academic) debate that Michael Harrington’s The Other America provoked in the early 1960s, is perhaps the defining element of the current public mood. Individualism and individuality are now so prized that they have tipped over into selfishness. The middle classes are too busy doing well to do good.39