CHAPTER ONE THE GOLDEN AGE OF CAPITALISM

In 1944, the great and the good met in Bretton Woods, New Hampshire, to discuss rebuilding the world economy in the wake of the bloodiest war in history.1 The American delegation, led by Harry Dexter White, had been sent to ensure that the reins of the global economy were handed from the UK to the US in an orderly fashion. The British delegation, led by the famed economist J.M. Keynes, had been sent to retain as much power as conceivably possible without angering the UK’s main creditor, the US, which had emerged as the new global hegemon in the wake of the destruction of Europe. White, a little-known Treasury apparatchik, was a “short and stocky… self-made man from the wrong side of the tracks”. Other delegates recall that he was shy and reserved, though this may have had something to do with the fact that he spent much of the conference in hushed meetings with the delegates from the Soviet Union. Years later, he was accused of being a Russian spy, which he denied before dying from a heart attack. Keynes couldn’t have been more different — a tall, intellectual member of the British establishment, who unabashedly touted his achievements and promoted his own ideas. They were the “odd couple of international economics”.

The conference itself was, by all accounts, a raucous affair. Its wheels were greased with alcohol and fine food — in the small hours of the morning, delegates could be found drunk and cavorting with the “pretty girls” sourced from all over the US. Keynes predicted that the end of the conference would come alongside “acute alcohol poisoning”. The hotel boasted top-of-the-range facilities, including “boot and gun rooms, a furrier and card rooms for the wives, a bowling alley for the kids, a billiard room for the evening”, as well as a preponderance of bars, restaurants and “beautiful women”. The more extravagant, the better — the splendour and superiority of the American way was to be shown at every turn.

It is somewhat ironic that the decadent crowd at Bretton Woods came up with an agreement that would hold back the re-emergence of the gilded age of the inter-war years. Bretton Woods was meant to prevent the outbreak of not only another world war, but also another Wall Street Crash. Keynes argued forcefully that doing so would require reining in what he called the “rentier class”: those who made their money from lending and speculation, rather than the production, sale and distribution of commodities.2 In the late nineteenth and early twentieth centuries, rentiers had become extremely powerful on the back of the rising profits associated with the industrial revolution and increasing trade within the world’s constellation of empires. In the absence of controls on capital mobility, these profits traversed the global economy seeking out the highest returns. Much of this capital was invested in US stock markets, pushing up stock prices and inflating a bubble that eventually popped in 1929.

What the Great Depression started was finished by the Second World War, which saw billions of dollars’ worth of destruction, and increases in taxation to finance states’ war efforts.3 As a result, financial capital emerged from the first half of the twentieth century on the back foot, which made reining in the parasitic rentier class easier. Whilst the negotiators at Bretton Woods were undoubtedly concerned with securing the profitability of their domestic banking industry — not least the emerging power of Wall Street — just one banker was invited to the summit by the US delegation.4

Between the eating, the drinking, and the flirting, delegates at the conference hammered out an historic agreement for a set of institutions that would govern the global economy during the golden age of capitalism. The world’s currencies would be pegged to the dollar at a pre-determined level, supervised by the Federal Reserve, and the dollar would be pegged to gold. Capital controls were implemented to prevent financiers from the kind of currency speculation that could cause wild swings in exchange rates. The system of exchange-rate pegging and controls on capital mobility served to hem in those powerful pools of capital that had wreaked such havoc in the global economy in the period before 1929. Bretton Woods was a significant step forward in reining in the rentier class.

But Keynes didn’t get everything he wanted. He was hindered in his battle against international finance by the formidable Dexter White, backed up by the full force of US imperial power. White wished to retain the US dollar as the centre of the international monetary system, whilst Keynes wanted it replaced with a new international currency — the bancor. White emerged victorious, and the US gained the “exorbitant privilege” of controlling the world’s reserve currency.5 In other words, as well as constraining international finance, Bretton Woods also institutionalised American imperialism.6

The Bretton Woods conference marked the dawning of a new era for the global economy. Europe set about the long processes of post-war reconstruction and decolonisation, and the multinational corporations of the world’s newest superpower profited handsomely.7 Trade flows increased after the years of autarky during the war, and a new age of globalisation began. Whilst Bretton Woods provided the international framework for this economic renewal, it was at the level of national economic policy that the transition from pre-war laissez-faire economics was most evident. Keynes was, once again, at the centre of these developments.

In the inter-war period, Keynes had mounted a challenge to the economics profession by developing a theory of economic demand that challenged the central tenet of classical economics — Say’s law, the idea that supply creates its own demand.8 According to Jean-Baptiste Say — a Napoleonic-era French economist — prices in a free market will rise and fall to ensure that the market “clears”, leaving no goods or services unsold once everyone has had the chance to bid. If the market fails to clear — i.e. if businesses have products to sell but no one wants to buy them — it is because something is getting in the way of the price mechanism, like taxes or regulation. The law applied to workers as well as commodities, which reinforced the idea that there could be no such thing as involuntary unemployment. If a worker was unable to find a job, it was because he was setting his wage expectations too high.

This ideology was, of course, at odds with the experiences of those who had lived through the Great Depression. But the classical economists would retort that their field was a science, which paid no heed to the sensibilities of working people. Keynes was able to prove them wrong. His great innovation was to introduce the idea of uncertainty into economic models. When people are uncertain about the future, they may behave in ways that seem irrational — for example, saving when they will receive little return for doing so, or spending far above what they can afford. This is because in the context of uncertainty, people prefer to hold liquid (easy-to-sell) assets — and they tend to prefer to hold the most liquid asset of all: cash. Liquidity preference means that, the higher the levels of uncertainty, the more people save rather than spend.

This kind of uncertainty marks businesses’ behaviour even more than consumers’, and affects their investment decisions. If businesses’ confidence about the future turns, then they are likely to stop investing. These lower levels of investment will result in lower revenues for suppliers, who may have to lay people off, who will reduce their spending, leading to a fall in economic activity. This kind of self-reinforcing cycle of expectations is what gives rise to the business cycle: the ups and downs of the economy through time. It also shows why, over the short term, Say’s law doesn’t hold — if businesses lack confidence in future economic growth, they may choose not to invest even if they can afford to do so. And as Keynes famously stated, “in the long run we are all dead”.

But Keynes didn’t stop with this theoretical innovation; he also offered solutions to policymakers. Say’s law implies that taxes and regulation distort the normal functioning of the market, and that it is best for everyone when state economic policy is as unobtrusive as possible. But Keynesian economics provides a role for the state as an influencer of expectations, and a backstop for demand. If, for example, business confidence drops and investment falls, the state can anticipate the multiplier effect this will cause by increasing its own spending or by cutting interest rates, making borrowing cheaper. If, on the other hand, businesses are investing too much, leading to inflation, the state can cut spending or raise interest rates to mute the upward swing of the business cycle. Managing the business cycle also requires reining in the influence of finance, because lending and investment are also pro-cyclical: they rise during the good times and fall during the bad times. If the role of government is to lessen the ups and downs of the business cycle, it must properly regulate finance, which so often exacerbates them.
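The multiplier effect invoked in this paragraph can be sketched as a stylised equation. The notation is illustrative rather than Keynes’s own: suppose each extra pound of income leads to a further c pounds of spending, with c between zero and one.

```latex
% A stylised sketch of the multiplier (illustrative notation, not from the text).
% An initial injection of state spending G is re-spent round after round,
% so total demand rises by the sum of a geometric series:
\Delta Y \;=\; G\,\bigl(1 + c + c^{2} + c^{3} + \cdots\bigr) \;=\; \frac{G}{1 - c}
% For example, if c = 0.8, each pound of extra state spending ultimately
% generates 1/(1 - 0.8) = 5 pounds of demand.
```

The same arithmetic runs in reverse: a fall in private investment shrinks total demand by a multiple of the initial drop, which is the self-reinforcing spiral that counter-cyclical policy is meant to interrupt.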

This kind of Keynesian economic management had a significant influence on economic policy in the post-war period. The destruction of the war, the increasing size of the state, and the arrival of Bretton Woods led to something of a rebalancing in the power of labour relative to capital within the states of the global North.9 The rising political power of domestic labour movements led to the widespread take-up of Keynes’ ideas, which were, after all, aimed at preventing recessions and unemployment. States and unions often developed close relationships with one another via emerging mass parties representing labour, and many had a centralised collective bargaining process. Taxes on the wealthy and on corporations were high — underpinned by low levels of capital mobility — and societies became much more equal. During this time, many Keynesians believed that they had finally succeeded in taming the excesses of a capitalist system that had caused so much destruction in the preceding decades, which is why this period was termed the golden age of capitalism, following the gilded age of the pre-war years.

In the UK, this period saw the emergence of a new type of political economy, often referred to as the post-war or Keynesian consensus.10 Following the wartime coalition, Labour roundly defeated the Conservatives in the 1945 election and Clement Attlee became prime minister. The new Labour government seized on Keynesianism which had, up to that point, had a limited impact on economic policy: Keynes’ ideas had revolutionised economics, but it took a change in power relations for them to revolutionise the real world. Over the course of the next several decades, inequality fell, wages rose in line with productivity, living standards for the majority rose and both the labour movement and the state apparatus became more powerful relative to capital. The welfare state developed, providing a safety net when the business cycle turned, as well as increasing the social wage and therefore workers’ bargaining power. And whilst the City grew, and retained its strong influence over government, the rentier class — landlords, speculators, and financiers — was much more constrained than it had been before.

The post-war consensus could be enforced because the workers, who stood to benefit from Keynesian management of the economy, had emerged from the war more powerful than ever before, and they organised to make it happen. In this way, the rebalancing of power from capital to labour that came about as a result of the war was institutionalised in the post-war social and economic framework implemented in the 1940s.

How Does Change Happen?

This understanding of historical change — as driven by power relations, institutions, and crisis — is based on one reading of Marx’s analysis of history. One reading, because it is a topic upon which Marxists continue to disagree. In particular, there is some disagreement between those who believe Marx prioritised economic structures in his analysis of historical development, and those who believe he prioritised agency. In other words, these groups have different answers to the question: “what matters most when it comes to historical change – economic and technological conditions, or how people respond to these conditions?”

On the first view, technological change leads to changes in people’s working conditions, and this leads to changes in the balance of power within society, and therefore in people’s ideas. For example, the advent of mass production made it easier for workers to share political ideas and to organise to resist their exploitation, facilitating the emergence of unions. In this case, the political change naturally follows from the technological change in a way that can appear inevitable. Economic and technological conditions – what Marx referred to as the economic base – determine the balance of power in capitalist societies, and those with the power set about building institutions that reinforce their ideas – what he referred to as the superstructure. The powerful use their control over education, the media, and the law to propagate their narratives, which determine how people make sense of the world. This is how the system remains stable from day to day. But it is all underpinned by an asymmetry of material power – by who has control over force and resources. Taken to extremes, those who view history in this way may claim that human agency doesn’t matter at all – history progresses due to changes in technology, not human decisions.

Others respond that human beings aren’t robots: we have the capacity for free thought, debate and to make sense of the world in our own ways. They claim that the superstructure has power in its own right – institutions can shape the development of capitalism, they can make it harsher or kinder, more extractive or less exploitative. And institutions can be shaped by battles that take place in the realm of ideas. These people can often be found arguing that, if a policy is convincing enough, and if we lobby hard enough, we will be able to implement it and change the way capitalism works. For them, it is human action that drives history, not the other way around. For example, the development of social democracy wasn’t just based on changes in technology that made it easier for workers to organise. It was workers who won limits on the working week, sick pay, and eventually even the creation of the welfare state itself; and they did so by organising.

The determinism of the structuralists jars with the utopianism of those who view human agency as the driving force of history, and this tension has dominated debates on the left — and indeed in the social sciences more broadly — for generations. Marx’s own method for dealing with these questions – also the method used in this book – was based on the idea of the dialectic, in which what appear at first as opposing forces merge to determine the direction of historical change. The economic base — the technological basis of production — interacts with the superstructure — ideas, culture, and institutions — to determine what happens and when. Under this view, the nature of technology and the economy provides the overarching context in which human action takes place — these things shape people’s incentives and behaviours in ways that make certain outcomes more likely than others. But they do not determine human action. People, their capacity to organise themselves, and the ideas they hold, still have the capacity to drive and shape history in ways that cannot be determined through an analysis of their economic conditions alone. Men make their own history, but they do not make it as they please.

The relationship between structure and agency becomes particularly important during moments of structural crisis, which naturally emerge in capitalist systems due to their inherent contradictions.11 Capitalism is subject to contradictions that stop it from working properly — from workers not earning enough to purchase the goods capitalists are producing, to the emergence of financial crises driven by investment booms, to the environmental crises associated with the injudicious extraction and use of the planet’s scarce resources. These contradictions are contained by political institutions designed by the powerful to make the system more stable — like the welfare state or financial and environmental regulation. But these institutions do not stop the contradictions from emerging, they only mute their impact. As capitalism develops, its contradictions escalate until they explode in a moment of crisis. These extended periods of crisis are critical in determining how change happens. Moments of crisis are moments when institutions, norms, and discourses break down — it becomes harder for our political, economic, and social systems to function, and much more difficult for people to make sense of the world. Divisions emerge amongst the people with the power, which leave them vulnerable to all sorts of attacks — most revolutions have taken place during moments of crisis. The structural flaws of capitalism lead to crises, and crises are times when agency matters more: it is primarily during these moments that ideas and the movements that champion them can influence the course of history.

And this is exactly what happened in the post-war period. The destruction of the war had changed the balance of power between capital and labour and created an institutional crisis of which the latter could take advantage. Working people used this moment of crisis to organise and institutionalise a new settlement — one that would benefit them. And for a long time, this framework worked. But it could not last forever. As the twentieth century progressed, capital began to strain against the leash that had been placed on it, and the compromise between labour, capital, and the state began to break down. Social democracy, just like any capitalist economic model, was subject to its own inherent contradictions. And its collapse paved the way for something new entirely.

The Rise of Global Finance

On 28 June 1955, G.I. Williamson, the Chief Foreign Manager of the Midland Bank, was called into the Bank of England to discuss what appeared to be some unusual dealings in the foreign exchange markets.12 Midland Bank had been engaging in an activity that, up until 1955, no UK bank had dared to try. It had been taking deposits denominated in US dollars and paying out interest to the holders of these deposits — an activity formerly restricted to US banks regulated by the Federal Reserve. The Bank of England’s “gentlemanly” approach to regulation at the time is well-documented. Bankers were frequently invited to Threadneedle Street — an old, imposing building, in which alumni of Eton, Oxford, and Cambridge were likely to have felt quite comfortable — for a cup of tea and a chat. Occasionally stern words were exchanged, but rarely would any real discord disturb what has been described as the “dream-like” state of the City of London in the golden era of capitalism.

The discussions between Williamson and Cyril Hamilton, a Bank official, were no different. Hamilton summarised the meeting in a memo reassuring his higher-ups that “nothing out of the ordinary had taken place” at Midland and that its foreign exchange activities had been undertaken in the “normal course of business”. In any case, Hamilton reported that “Williamson appreciates that a light warning has been shown”. Quite why a light warning would have been required for proceedings undertaken in the normal course of business was not specified. Perhaps Hamilton had a faint inkling that Midland’s activities represented an entirely new phenomenon that the Bank of England was not quite equipped to manage. It is, however, highly unlikely that he realised he had just given the go-ahead for an innovation that, within two decades, would have transformed global finance.

The new market in dollars outside of the US, and therefore outside of the jurisdiction of the Federal Reserve, was called the “Eurodollar market”. Usually, when you hold a foreign currency, you can either spend it in a foreign country, deposit it in a foreign bank, or invest it in foreign assets — a British bank wouldn’t generally allow you to deposit euros in your bank account. The Eurodollar markets changed all this by allowing banks to take and pay interest on foreign currency deposits. The term “Eurodollar” is something of a misnomer, since the deposits need not be held anywhere in Europe, but it stuck, and today the prefix “Euro-” is used for any currency held outside its home country; for example, “Euroyen” are Japanese yen held outside Japan. The implications of this system weren’t truly visible until the Eurodollar markets took off in the 1970s. Socialist and newly-wealthy oil-producing states that wanted to hold dollar deposits without depositing them in US banks were able to put their dollars in London instead. London’s Eurodollar markets grew substantially as a result.

The Eurodollar markets undermined Bretton Woods by creating a global system of unregulated capital flows.13 Those investors holding dollars — pretty much everyone, given the use of the dollar as the global reserve currency — could now deposit them into the City of London. These dollars would then be free to float around the global economy at will, unhindered by the strict regulation then imposed on US banks by the Federal Reserve. Billions of dollars had ended up in the unregulated Eurodollar markets by the 1970s, undermining Keynes’ determination to curb the hot money of the rentier class. This gave financiers in the City an almost bottomless pit of dollar reserves to play with. After decades of retrenchment for the former financial centre of the largest empire in the world, the Eurodollar markets gave the City of London a new lease of life.

But the growth of the Eurodollar markets wasn’t the only threat to Bretton Woods that emerged in the 1970s. The increase in international trade that took place in the post-war period benefitted some countries more than others. US corporations, backed by the most powerful state in the world, grew substantially. Many were drafted by the US government to help rebuild Europe, becoming some of the first modern multinational corporations in the process. Between 1955 and 1965, US corporations increased their subsidiaries in Europe threefold.14 As the reconstruction effort took off, they were joined by German and Japanese multinationals, such that by the 1970s there were more, and larger, multinational corporations than ever before.

The growth of the multinational corporation meant that billions of pounds’ worth of capital was flowing around the world within corporations. Toyota, General Electric, and Volkswagen couldn’t afford to keep their subsidiaries across the globe insulated from one another — money had to be moved, even if that meant undermining the monetary architecture of the international economy. Technological change also facilitated direct transfers of capital between different parts of the world. All this meant that, despite the continued existence of capital controls, capital mobility had increased substantially by the 1970s. The combination of the emergence of the Eurodollar markets and the rise of the multinational corporation was beginning to place serious strain on Bretton Woods.

But it was the US government — not the banks — that dealt the final blow to the system that it had helped to create. With the dollar as the reserve currency, the US had gained the “exorbitant privilege” of being able to produce dollars to finance its spending.15 Because everyone needed dollars, the US could spend as much as it liked without the threat of hyperinflation. The gold peg was supposed to rein in this behaviour: if investors started to think that there were more dollars in circulation than gold to back them up, they might turn up at Fort Knox demanding the weight of their dollars in gold. But this didn’t stop the Americans from printing billions of dollars to fund a wasteful and destructive war in Vietnam. Combined with dollars leaking out of the US via its growing current account deficit, the global economy was facing a dollar glut by the 1970s. Realising that there were far too many dollars in circulation to keep up the pretence, in 1971 Nixon announced that dollars would no longer be convertible to gold. Bretton Woods was finally over.

Many expected a sharp devaluation of the dollar at this point, but this didn’t happen. In fact, the dollar — strong as ever — continued to be used as the global reserve currency, even in the absence of any link with gold. Finally, the real foundation of Bretton Woods had been exposed: American imperial power. The gold peg established at Bretton Woods was not the source of the dollar’s value; the source of its value was a collective agreement that dollars would be used as the default global currency, much as English had by that point become the default global language. Freed from the need even to pretend it was covering its increased spending with ever-greater gold reserves, the power of the US Treasury was finally unleashed, with consequences that would not be felt for three and a half decades.

The end of Bretton Woods represented a profound transformation in the international monetary system. Absent any link with gold or any other commodity, money became nothing more than a promise, created by fiat by the state issuing it. The value of a currency would now be determined by the forces of supply and demand. Rather than having to limit the amount of money they were creating in order to maintain a currency peg, states would be able to create as much money as they liked, accounting only for the threat of inflation. Private banks were also now free to create currency on their behalf in the form of credit, constrained only by domestic regulation. The collapse of Bretton Woods represented the final step away from a system of commodity money, which has been the norm for most of human history, and towards fiat and credit money, which now dominate all other forms of money. The implications of this change would be far more profound than anyone could have seen at the time.16

With the demise of Bretton Woods, capital was finally released from its cage. Many countries continued to maintain capital controls and strict financial regulation. But the glut of dollars that had emerged at the international level needed somewhere to go. Meanwhile, the capital that had been stored up within states like the UK under Bretton Woods was desperate to be released into the global economy. It pushed and strained against the continued existence of capital controls, finding ever more ingenious ways of getting around the system. Finance capital had returned with a vengeance, and it sought to remove all obstacles to its continued growth. But it would take a national crisis for the remnants of the post-war order finally to fall.

The Political Consequences of Social Democracy

Just as Bretton Woods was collapsing, the social democratic model was starting to show signs of strain.17 Bretton Woods created a global economy, with global corporations, global supply chains, and global competition. Eventually, the system became a victim of its own success. Some companies — notably the US multinationals — thrived, but many others found it harder and harder to compete with the rising industries located in Germany and Japan. UK corporations in particular found themselves struggling to benefit from the new wave of globalisation, partly because sterling was pegged to the dollar at too high a level, making British exports more expensive to international consumers.18 These firms struggled to cope with increasing international competition, and by the end of the 1960s, their profits had been seriously reduced. By the 1970s, the UK was referred to as the sick man of Europe. From 1973, after an attempt at a European peg was abandoned, sterling continuously fell against the dollar until, in 1976, it fell below $2 for the first time.19

In this context, one might have thought that the end of Bretton Woods would be good for British capitalists. Freed from the overvalued exchange rate, manufacturers would now finally be able to compete internationally once again. But decades of stagnation cannot be undone overnight. Britain’s manufacturers found that, even with a lower exchange rate, they could not compete with the new multinationals on either quality or cost. The first oil price spike in 1973 drove an increase in inflation, which exceeded 20% within two years, peaking at 27% in the year to August 1975. In the absence of strong unions, rising inflation driven by rising costs might not have been such a systemic problem. Under other circumstances, bosses would have laid off workers or reduced pay to cut costs. But with the post-war consensus still firmly in place, unions pressed for pay rises that kept pace with inflation. Able to bargain with and make demands on the state, the unions refused to back down.

Nevertheless, as cost pressures mounted, unemployment rose. The state flitted between increasing spending to alleviate unemployment and cutting it to reduce inflation. The oil price spike had created a catch-22 situation that Keynesian policymakers were not equipped to deal with: stagflation — the combination of unemployment and inflation. This was not supposed to happen. Keynesian economics was based on the idea of the Phillips Curve. In the 1960s, economists drew on the work of William Phillips to posit an inverse relationship between inflation and unemployment. According to the models they built, when unemployment was high, inflation was low, and vice versa, implying that states should tolerate moderate levels of inflation in order to promote full employment.20 Governments were supposed to boost spending and reduce interest rates until full employment was reached, at which point they should start to reduce spending and raise interest rates in order to bring down inflation. Effecting this balancing act between inflation and unemployment was seen as the main aim of economic policy throughout the post-war period.
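The Phillips Curve relationship described above can be written as a simple schematic equation. This is a stylised textbook form, not Phillips’s original specification, and the symbols are illustrative:

```latex
% A stylised Phillips Curve: inflation (pi) falls as unemployment (u) rises.
\pi \;=\; \alpha \;-\; \beta u, \qquad \alpha,\ \beta > 0
% Policymakers read this as a menu: accept a little more inflation,
% get a little less unemployment (and vice versa).
% Stagflation, with high inflation AND high unemployment at once,
% lies off this curve entirely, which is why the experience of the
% 1970s broke the framework.
```

On this schematic view, every point the economy could occupy involves a trade-off between the two evils; the 1970s produced combinations of inflation and unemployment that the curve said should not exist.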

But by the 1970s, social democratic management of the economy was failing to bring down either unemployment or inflation — the latter of which was driven by political developments halfway around the world. Increases and decreases in interest rates had done nothing other than create a “stop–go” economy that fluctuated from one set of extremes to another. In this uncharted territory nobody knew what to do. By the time unemployment reached 4% in the early 1970s, it was clear that the state was trying to resolve the issue by tacitly withdrawing its promise to protect full employment in an effort to bring down inflation. But such a strategy posed an existential threat to the UK’s trade unions: the withdrawal of the state’s commitment to full employment would mean losing a powerful ally in their fight against the bosses. They could not afford to go down without a fight — not to mention, their members required jobs and pay increases in line with inflation to be able to survive. Industrial action escalated, especially in the industries with the most powerful unions – particularly the miners, whose power stemmed from their control over the nation’s energy supply.

Economic turmoil created a political crisis. On the one hand, by the mid-1970s, the Conservatives had roundly failed to turn years of strikes, energy shortages, and stagflation into an electoral advantage. Ted Heath went to the nation and asked them to decide “who governs this country? Us or the miners?”. On the other hand, the Labour government elected in 1974 proved equally unable to end the stalemate. Pursuing a more conciliatory approach, Harold Wilson raised the miners’ wages and attempted to implement a “social contract” between capital and labour, involving a voluntary incomes policy in which the government negotiated pay increases with the unions. But the second oil price spike — which came three years after the UK had sought an emergency loan from the International Monetary Fund — was the nail in the coffin of the social contract. In 1979, with inflation spiralling once again, the unions pushed for a return to free collective bargaining.

The winter of 1978–79 was the coldest since 1962–63, and the combination of industrial action, economic stagnation, and energy shortages led to its being termed the “Winter of Discontent”. A sense of crisis hung in the air. In January 1979, Prime Minister James Callaghan, attending a summit in Guadeloupe, was asked by a journalist about “the mounting chaos in the country”. He responded that he didn’t think others would agree with the journalist’s assessment that the country was in chaos. The following day, the Sun famously ran with the headline: “Crisis? What Crisis?”. By 1979, Britain was at a crossroads: the unions would not back down, and the social democratic state could not afford to confront them. What had happened to the golden age of capitalism?

Looking back, it is quite clear that the 1970s were a turning point for the post-war consensus. Businesses could not afford to keep tolerating unions’ demands for pay increases in the context of rising international competition and high inflation. But unions could not afford to stop demanding jobs and pay increases in line with inflation. These problems were structural — they were inherent to the way the system functioned. As economic actors pursued their own interests — businesses trying to increase profits, workers trying to increase wages — acute strains emerged that threatened to bring the British economy to the brink of collapse. The contradictions inherent in the social democratic growth model had finally come to the fore, and there were only two potential solutions to the crisis: a victory for the workers, or for capital. Much depended on where the loyalties of the state would lie.

Michał Kalecki — a Polish economist who theorised demand management at the same time as, and some have said before, Keynes himself — had foreseen such problems decades earlier.21 After reaching his conclusions about the capacity of the state to control demand in the economy, he argued that such policies couldn’t work for long because there were “political aspects” of full employment policy that rendered it inherently unstable. The state’s commitment to promote full employment undermined the thing that made capitalism work: the threat of the sack. A policy of full employment would remove the “reserve army” that capitalists relied on to ensure a steady stream of cheap labour. Without desperate workers to exploit, profits would dry up.

The powerful state that had emerged from the Second World War had committed a second sin: it was no longer afraid of the capitalists’ threats to withdraw investment. When the government invests heavily in the economy, and especially when certain industries are nationalised, it becomes much harder for businesses and investors to withdraw their capital when the state does something they don’t like — the option of “capital strike” is removed. Over the long term, the combination of these factors encourages owners of capital to oppose policies that promote full employment, even if those policies also boost consumption and therefore support capitalists’ profits.

Kalecki’s argument is not that social democracy is economically unsustainable, but that it is politically untenable: at some point, a political crisis moment will be reached. He explains:

[U]nder a regime of full employment, the “sack” would cease to play its role as a disciplinary measure. The social position of the boss would be undermined, and the self-assurance and class-consciousness of the working class would grow. Strikes for wage increases and improvements in conditions of work would create political tension. It is true that profits would be higher under a regime of full employment than they are on the average under laissez-faire... But “discipline in the factories” and “political stability” are more appreciated than profits by business leaders.

This is what appears to have happened in the 1960s and 1970s. With high wages, low unemployment, and moderate levels of inflation, the power of the UK’s unions grew. The distributional tension over profits between the bosses and the workers was muted during the early years due to the investment and aid being sent by the US and the increase in global trade facilitated by Bretton Woods. But when things started getting tough — when inflation increased and competition from abroad began to erode profits — these tensions exploded onto the national stage. It was at this point that the political contradictions of social democracy became apparent, as the battle between capital and labour finally became zero-sum.

With profits under pressure, only one thing determined who got the gains from growth: who had the power. Thanks to rising capital mobility and the breakdown of Bretton Woods, the balance of power between capital and labour had changed by the 1970s. Capitalists could threaten to up and leave if they didn’t like the business environment — and though capital controls were still in place, many were finding ingenious ways to move their money anyway. With state support for the labour movement weakening, workers, meanwhile, found themselves facing up to bosses without powerful political allies.

These pressures steadily wore away at the post-war consensus, until they erupted during the crisis of the 1970s. But the old model would not completely collapse until a new one emerged in its place. The political tumult created by the erosion of British social democracy — echoed by the retreat of social democratic movements over much of the global North — provided a long-awaited opportunity for those who had been marginalised during the post-war boom to shape what came next. The left seemed out of answers, but the right saw that their moment had finally arrived.

Never Let a Serious Crisis Go to Waste

After asking voters “who governs this country?” and being told “not you”, the humiliated former prime minister Ted Heath was forced into a contest for the leadership of the Conservative Party in 1975. Despite losing the twin elections of 1974, Heath retained the support of much of the Conservative establishment and press. He was expected to win. Instead, he was ousted by a young upstart running on a radical new economic programme that would eventually come to be known as neoliberalism: the theory that human wellbeing is best advanced by liberating the entrepreneurial spirit through free markets, private property rights, and free trade, all supported by a strong state.22 Her name was Margaret Thatcher.

Thatcher’s radical, neoliberal economic agenda had been forged decades earlier in the Swiss village of Mont Pèlerin.23 In 1947, a group of economists from all over the world met there to develop a new programme that would begin the fightback against the “Marxist and Keynesian planning sweeping the globe”. It was an austere, intellectual affair, in stark contrast to the bawdy conference that had taken place across the Atlantic three years previously. The Mont Pelerin Society — or the MPS, as the group would name itself — knew that it was politically and intellectually isolated. The credibility of pre-war laissez-faire liberalism had crashed with Wall Street in 1929. The war that followed had empowered states to levels never previously seen in history, and those states had used their power to constrain the activities of the international financiers who were sponsoring the event.

The MPS objected to any state intervention that stood in the way of free markets. Its members were deeply offended by the creation of the National Health Service and the introduction of a social safety net. The rise of the unions and the role of the state in supporting collective bargaining were equally significant affronts to neoliberal ideology. But perhaps the most egregious aspect of the post-war consensus was the continued existence of capital controls. Allowing the state to determine where an individual could put their money was seen by some as a threat to human liberty, and by others simply as a barrier to profitability. This alliance — between ideologues desperate to create a world free of totalitarianism where private enterprise thrived, and opportunists who wanted to undermine a system that was preventing them from making money — marked the Mont Pelerin Society from day one.

This ambiguity is important to understanding how neoliberalism eventually rose to prominence. It is both an internally coherent intellectual framework and an ideology used to promote the power of the owners of capital in general, and finance capital in particular.24 The work of Hayek, von Mises, and others constituted a serious intellectual enterprise grounded in a particular set of values: namely, a commitment to human freedom, defined as control over one’s property.25 The fact that this framework justified shrinking the state, removing capital controls, and reducing taxes is what led several prominent international financiers to cover a large portion of the costs of the first meeting. One can see a parallel in the development of Keynesianism and the Labour Party’s adoption of that ideology. On the one hand, Keynes sought to “save capitalism” from its own contradictions; on the other, the Labour Party sought an ideology and set of policies that would allow it to maintain a compromise between workers, capitalists and the state. In this sense, neoliberalism was no more a conspiratorial plot to take over the global economy than was Keynesianism. Intellectuals will always seek out the powerful to sponsor their ideas, and the powerful will always seek out ideas to justify their interests.

The elite that gathered at Mont Pèlerin decided, then and there, to devote their time, money, and intellectual resources to bringing down the system of state capitalism which they saw as paving the way to totalitarianism. Their political manifesto — the “Statement of Aims” — included commitments to promote the free initiative and functioning of the market, to prevent encroachments on private property rights, and to establish states and international institutions that would uphold these ideals. The Statement of Aims also claimed that “[t]he group does not aspire to conduct propaganda”. Yet they hatched a plan to translate these principles into an economic policy agenda that would undermine the social democratic consensus all around the world. Their ideas would be thrust into the mainstream through a network of academics, politicians and think tanks who could spread the word about this newer and better way of looking at economics. They had their work cut out for them. The Keynesian political compromise had seen living standards rise, inequality fall, and a strong bargain emerge between organised labour and the nation-state. Arguing for the abolition of the welfare state made the neoliberals look like dangerous radicals not worth taking seriously. For decades, Hayek and his acolytes were left shouting from the sidelines, derided by academics and politicians alike.

But perhaps the social democrats were too complacent. What looks like unparalleled stability can quickly implode under the dynamic, unconstrained forces of global capitalism. The crisis of the 1970s proved that social democracy was no different to any other capitalist system: it contained its own inherent contradictions that would eventually prove its undoing. Those neoliberals following in the wake of the MPS were as shocked by the collapse of the post-war consensus as anyone else. They had spent decades working at the global level, trying to unpick the regulations that underpinned Bretton Woods, but the national social democratic settlement looked stable in comparison. The Seventies changed everything. With the US state having dealt the final blow to Bretton Woods, the neoliberals felt emboldened. They knew that this spelled the beginning of the end for capital controls. Rising capital mobility would stand them in good stead in their battle with the nation state — capital mobility, after all, gives those who own it veto power. Don’t want to pay your taxes? Move your money abroad.

The neoliberals focused their efforts on the British state — the historic centre of global finance, where the golden age of capitalism already seemed to be drawing to a close amid the acute crisis of social democracy. The think tanks created in the wake of Mont Pèlerin — the Institute of Economic Affairs and the Adam Smith Institute — churned out neoliberal propaganda at an impressive rate. They engaged with any politician who was willing to talk to them — and one proved much more open than any other. Neoliberal economists and lobbyists were quick to latch onto Thatcher’s campaign for the leadership of the Conservative Party.26 When she won, they were equally quick to work with her to shape an electoral agenda that would change the course of British history.

Thatcher’s campaign hinged on three promises: to take on the unions, shrink the state, and create a nation of homeowners. Her electoral promises were couched in populist terms: the Conservatives would “restore the health of our economic and social life”, “restore incentives so that hard work pays”, and “support family life by helping people to become home-owners”. This talk of restoration allowed Thatcher to frame radical economic policies in the language of traditional conservatism, drawing on people’s fond memories of the post-war consensus. Her attacks portrayed Labour as the party of scroungers, living off the hard work of others, and of thugs, holding the country to ransom. She sought to appeal to traditional Labour voters by claiming that her economic policies would restore full employment, using the famous “Labour isn’t working” posters to drive the message home. Labour, she claimed, was the party of fringe extremists seeking to bring down British democracy and replace it with Soviet-style totalitarian rule. The Conservatives were the true party of working people — they would lower your taxes and inflation, while securing you a job and a home. It was a powerful message, and polling shows that Thatcher’s victory came on the back of the switched allegiances of many low-income voters.

This populist rhetoric was, of course, the thin end of the neoliberal wedge. Thatcher knew that there was little public support for the most important elements of the neoliberal agenda, so she hid her commitments to privatisation and deregulation in the small print. In fact, even those policies that Thatcher did advertise — from going to battle with the unions to reducing the size of the state — were no more popular amongst voters in 1979 than they had been in 1974.27 The lesson of Thatcher’s period in opposition is the importance of extended crises in eroding support for the status quo. Even if they weren’t particularly keen on privatisation, people were sick to death of the constant disruption associated with industrial disputes, with the high levels of inflation and unemployment, and with the state’s apparent inability to deal with any of these issues. Many people voted for Thatcher in 1979 because she appeared to be one of the few politicians who was able to make sense of what was going on and provide workable solutions. Even if you didn’t like the Thatcherite agenda, after the Winter of Discontent you might have thought it was worth a try. Milton Friedman — one of the founders of the Mont Pelerin Society — knew this better than anyone. Looking back on the neoliberal victories of the 1980s, he wrote:

Only a crisis — actual or perceived — produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the politically inevitable.

The neoliberals’ aim wasn’t simply to get Thatcher elected. It was to use the moment of crisis provided by the breakdown of the post-war consensus to institutionalise a new model for the British economy — one that increased the power of capital, just as the Keynesian consensus had institutionalised the power of labour. In this sense, the neoliberals had a view of change just as dialectical as that of any Marxist. The contradictions of social democracy would be exposed by a crisis that would bring the economy grinding to a halt. During such a crisis, people and politicians would search for ideas that might provide them with a way out. By building a narrative, developing an electoral coalition, and gaining control of the state, the neoliberals could use the crisis moment to build a new set of institutions that would give them and their backers the kind of lasting power that social democracy had denied them.

This is what the Thatcherite agenda was all about. Neoliberal economists, think tankers and financiers convinced Thatcher — who didn’t need much convincing to begin with — that free markets required a strong state.28 The only way to deal with the communist threat — at home and abroad — was to aggressively take on the power of the labour movement and release the dynamic forces of market competition that would promote efficiency, profitability, and social justice — and restore the owners of capital to their rightful, unchallenged position as the most powerful group in society. Thatcher and her acolytes knew that they had five years to build such a model, but that once it had been built, it would be just as irreversible as the NHS.

The first thing they did was deal with the only group capable of challenging their hegemony: the unions. Thatcher spent years, and a great deal of political capital, waging war on the UK’s labour movement. The next job was to empower capital in its place. Thatcher knew that this required supporting the interests of the burgeoning international capitalist class, rather than seeking an alliance with an ailing national capitalist class focused on mining and manufacturing. The natural allies of such a grouping could be found just down the road from Westminster, in the City of London.

On its own, this victory of capital over labour would not have lasted very long. What the neoliberals needed was an electoral alliance that would render their new system structurally stable. The clue to how this was created can be found in the electoral agenda of the 1979 Conservative government: a small state and property ownership. In place of the alliance between the national capitalist class and the labour movement that governed the post-war consensus, Thatcher would build an alliance between the international capitalist class centred in the City of London and middle earners in the south of England. She secured the support of middle earners by turning them into mini-capitalists through the extension of property ownership and the privatisation of their pension funds. In doing so, she transformed British politics and unleashed a new growth model that lasted over thirty-five years, before collapsing in the biggest financial crisis since 1929.