12.3 The Econobubble: The Inherent Error in the Age of the Minotaur

In 1997 Robert Merton and Myron Scholes shared the Nobel Prize in Economics for developing ‘a pioneering formula for the valuation of stock options. Their methodology’, trumpeted the Nobel committee, ‘has paved the way for economic valuations in many areas. It has also generated new types of financial instruments and facilitated more efficient risk management in society’. If only the hapless Nobel committee knew that in a few short months the lauded ‘pioneering formula’ would cause a spectacular multi-billion dollar debacle, the collapse of LTCM (in which Merton and Scholes had invested all their kudos) and, naturally, a bail-out from the reliably kind US taxpayers.

Box 12.8 Taming risk? Henri Poincaré's timely warning

Henri Poincaré (1854-1912) was a mathematician, physicist and philosopher with a talent not only for solving difficult analytical problems but also for understanding the workings and limitations of human reasoning. His basic premise was that, when faced with a problem, the mind begins with random combinations of possible answers, often generated unconsciously, before a definite, rational process of validation begins by which the solution is finally arrived at. In his schema, chance plays an important but incomplete role in understanding. However, it is futile, he thought, to try to impose upon chance the rules of certainty. In Chapter 4 of his 1914 masterpiece Science and Method, Henri Poincaré writes:

How can we venture to speak of the laws of chance? Is not chance the antithesis of all law?... Probability is the opposite of certainty; it is thus what we are ignorant of, and consequently it would seem to be what we cannot calculate. There is here at least an apparent contradiction, and one on which much has already been written. To begin with, what is chance? The ancients distinguished between the phenomena which seemed to obey harmonious laws, established once for all, and those that they attributed to chance, which were those that could not be predicted

because they were not subject to any law. In each domain the precise laws did not decide everything, they only marked the limits within which chance was allowed to move. In this conception, the word ‘chance’ had a precise, objective meaning; what was chance for one was also chance for the other and even for the gods. But this conception is not ours. We have become complete determinists, and even those who wish to reserve the right of human free will at least allow determinism to reign undisputed in the inorganic world... Chance, then, must be something more than the name we give to our ignorance. Among the phenomena whose causes we are ignorant of, we must distinguish between fortuitous phenomena, about which the calculation of probabilities will give us provisional information, and those that are not fortuitous, about which we can say nothing, so long as we have not determined the laws that govern them. And as regards the fortuitous phenomena themselves, it is clear that the information that the calculation of probabilities supplies will not cease to be true when the phenomena are better known (our emphasis).

[Poincaré (1908 [1914]), pp. 64-6]

Note the distinction Poincaré draws: (a) fortuitous phenomena, about which the calculus of probabilities supplies provisional information; and (b) non-fortuitous phenomena, about which nothing can be said until the laws that govern them have been determined. It is to these two classes that we refer below.

In 1900, Louis Bachelier (1870-1946), one of Poincaré's more able doctoral students, submitted a thesis entitled The Theory of Speculation (Bachelier 1900, 2006). In it, the young student applied stochastic processes to offer a model by which to price options taken out on the French state's bonds. In this sense, Bachelier was an early precursor of financial engineers like Black, Merton and Scholes. Poincaré passed the thesis and made some polite noises about the young man's mathematical skills but, nevertheless, was quite categorical that the class of phenomena involved in pricing options lies well outside the scope of the probability calculus. The reason? That they fall under (b) above, the class of phenomena about which we can say nothing precise. Or, in the language of our interpretation of Keynes (see Chapter 7): we are damned if we know!

It was not that Poincaré believed that humanity lies beyond the realm in which probability calculus can prove useful. For instance, in the same book he writes:

The manager of a life insurance company does not know when each of the assured will die, but he relies upon the calculation of probabilities and on the law of large numbers, and he does not make a mistake, since he is able to pay dividends to his shareholders. These dividends would not vanish if a very far-sighted and very indiscreet doctor came, when once the policies were signed, and gave the manager information on the chances of life of the assured. The doctor would dissipate the ignorance of the manager, but he would have no effect upon the dividends, which are evidently not a result of that ignorance.

[Poincaré (1908 [1914]), p. 66]

However, Poincaré understood well that pricing options or second-guessing the stock exchange was a wholly different problem to that of estimating life expectancy. Whereas the latter fell under (a) above, the former was a class (b) phenomenon; one whose 'laws' are not only unknown but also unknowable.

In effect, Poincaré intuitively understood that which, in this book, we refer to as economics' Inherent Error. Bachelier did not. Nor did Black, Merton, Scholes and the large army of financial engineers who priced the flood of derivatives in the run-up to the Crash of 2008.

The Nobel committee, and the whole financial sector that embraced this highly motivated madness, should have known better. At least since 1914 bright mathematicians (see Box 12.8) understood that pricing options by means of pristine formulae is fool's gold; that although the private rewards from developing mathematical pricing models are tremendous, reality will sooner or later come knocking. Inevitably, floods of tears will wash away all the gains and, with the markets in a state of shock, governments will be forced to step in to mop up.

The annals of financial economics will refer to the collapse of LTCM as the result of an exogenous shock that the ‘theory’ had not predicted: the fiscal crisis in Russia that caused the Moscow government to default on its debts and an avalanche of bad debts to flatten the pristine equations of Merton, Scholes, et al. The question is: why did these models not allow room for the possibility of a crisis occurring somewhere in the world? How on earth can one describe a fiscal crisis as exogenous to the capitalist system? Was it caused by a meteorite from space?

Our answer was given back in Chapter 9 (see, in particular, Box 9.10). The pricing models bestowed upon us by the wizards of financial engineering left the possibility of systemic crisis out of their equations because, over a period that began in 1950 and ended with the demise of the Global Plan, mainstream economics fully adopted a particularly strong meta-axiom by which economists habitually closed their models (meta-axiom E). Economists who did not adopt it were expunged from the profession with an efficiency that Stalin would have marvelled at. Meanwhile, the economists who did espouse the offending meta-axiom lost even the remotest of connections with really existing capitalism; a loss which, paradoxically, lent them (and economics in general) immense discursive power, not only in the universities but also on Wall Street, in the government, in the corporations' boardrooms, etc.

In practice, the meta-axioms to which the profession conceded en masse meant that those teaching and learning economics were adopting a mind-frame in which a crisis was simply unfathomable. Once in that mind-frame, it was ‘logical’ to jump to the natural inference that today’s share and option prices incorporate all available ‘wisdom’ about future fluctuations. In technical language, current prices are a sufficient statistic by which to estimate future prices. Markets were, thus, meta-axiomatically conceived as efficient mechanisms that no humble intellect could plausibly doubt or second-guess.

Box 12.9 The Efficient Market Hypothesis

Bachelier's basic tenet (see Box 12.8) was that financial markets contrive to ensure that current prices reveal all the privately known information that there is. If this is so, no one can systematically make money by second-guessing the market. The only way money can be made systematically is if stock prices rise on average over a period of time. Even then, 'players' cannot hope to make more money, on average, than the average rate of stock price inflation.

The idea is simple: suppose Jill gets some information that E-widget will announce a rise in its profit. She will immediately respond by buying E-widget shares. The more certain she is of her information's reliability, the more shares she will buy. But then others will notice in a split second that Jill is buying a lot of shares. They immediately think that 'she must know something'. Thus, her benefits (if her information is right) are very short-lived because the price of E-widget shares will escalate very quickly. Jill's insider information begets a higher price for E-widget shares for everyone, thus annulling the value of that information. It is as if Jill broadcast her private information to the world simply by buying these shares.

Naturally, prices may fluctuate as Jill's information is proved inaccurate, causing the market to 'overshoot' (i.e. investors to buy too many of E-widget's shares). This variation is, however, random noise and can be treated like a random walk around a price whose best estimate is the current price.

In the 1930s, Alfred Cowles, the founder of the Cowles Commission that played such a prominent role in Chapter 9, in his own research on Wall Street's behaviour (occasioned by his incredulity at what had happened there in 1929) hinted at the possibility that no one could consistently make money over and above the average growth in stock values, however seasoned in the art of trying to make money by 'playing' the stock exchange. In essence, this was a restatement of Bachelier's hypothesis.

In the early 1960s, another acquaintance of ours from Chapter 9, Paul Samuelson, discovered Bachelier's thesis and circulated it among colleagues. A few years later, Eugene Fama, who was to become professor at Chicago's business school, submitted his own thesis; a modern-day extension of Bachelier's original. The gist of the theory is that investors react to private and public information randomly. Some overreact, some underreact. Thus, even when everyone errs, the market gets it approximately 'right'. Those trying to bet against the market systematically, through, for instance, a meticulous study of past prices, will lose their shirt. The reason? Every piece of information that can be inferred from past prices has already been inferred and has been factored into the current price. Hence, prices follow a random path and no theory can predict them better than a series of random guesses.
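To see what this claim amounts to in practice, consider a minimal simulation sketch (ours, in Python; the parameters are invented for illustration). If prices follow a random walk, past returns carry no exploitable information about future returns, so their autocorrelation should hover around zero:

import numpy as np

rng = np.random.default_rng(42)

# Simulate a share price as a random walk: daily log-returns are pure noise,
# so today's price is the best available estimate of tomorrow's.
n_days = 10_000
log_returns = rng.normal(loc=0.0, scale=0.01, size=n_days)
prices = 100 * np.exp(np.cumsum(log_returns))  # the simulated price path

# Try to 'beat the market' using yesterday's return: under the hypothesis,
# the lag-1 autocorrelation of returns is indistinguishable from zero,
# so the strategy has no systematic edge.
lag1_corr = np.corrcoef(log_returns[:-1], log_returns[1:])[0, 1]
print(f"lag-1 autocorrelation of returns: {lag1_corr:+.4f}")  # approximately 0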

The Efficient Market Hypothesis resembles the old joke about two economists walking down the street. One looks down and says: 'By golly, look! There is a $100 note on the pavement'. The other does not even bother looking. He coolly replies: 'Can't be. If there were one, someone would have picked it up'. While this attitude makes perfect sense on average, and with regard to the possibility of $100 notes lying around on the pavement, taking this attitude to the financial markets as a whole is a different matter altogether.

In effect, the Efficient Market Hypothesis presupposes that there exists a unique sufficient statistic for financial asset prices towards which the market converges, albeit a noisy one. But as Poincaré knew from the outset, there can exist no sufficient statistic when it comes to radically indeterminate variables; variables that move according to ‘laws’ that are not only unknown but also unknowable.

To put the same point differently, the prerequisite for the Efficient Market Hypothesis to make sense is the existence of a unique and well-defined equilibrium path on which the 'economy' is guaranteed to move. But, as we have explained in Book 1, for such a path to exist, the economy must comprise a single sector or a single Robinson Crusoe-like individual. The Efficient Market Hypothesis, therefore, cannot possibly be sustained in a really existing capitalist world.

Chapters 8 and 9 argued that, beginning with Nash's 1950 paper at the Cowles Commission, political economics embarked upon a road with no return, following the agenda of a formalism whose theoretical results were predicated upon meta-axiomatic 'closure' by means of moves that could only engender a permanent chasm between economic theory and capitalist reality. Only in that context is it possible to explain how intelligent scholars, in living memory of the 1930s, could see the world through the prism of the Efficient Market Hypothesis, the Rational Expectations Hypothesis and, more recently, so-called Real Business Cycle Theory.

The common thread that runs through them is the determination to misapply a Nash-Debreu-Arrow type of solution to a macroeconomy. Up until the early 1970s, Nash-Debreu-Arrow models, while dominant in the academic discourse, were confined to an abstract microeconomic, academic game that was played in universities. While the Global Plan stood tall, macroeconomic policy was still informed by the New Dealers' experience of at first running the War Economy and later the institutions of the Global Plan. A largely trial-and-error form of policymaking, based, too, on a strong track record, kept a firm controlling hand over global capital and trade flows.

Box 12.10 The Rational Expectations Hypothesis

The Rational Expectations Hypothesis (REH) is based on a clever argument that, under certain circumstances, makes perfect sense. Discursively, it suggests that no one should expect a theory of human action to predict well in the long run if it presupposes that humans systematically misunderstand that very theory. REH rejects the idea that a social theory can reliably expect people consistently, and at a personal cost, to misunderstand the rules that govern their own behaviour. Humanity, REH proponents believe, has a capacity eventually to work out its systematic errors. Thus, a theory that depends on the hypothesis that people systematically err will, eventually, stop predicting human behaviour well. Put simply, as Abraham Lincoln supposedly once said, 'you can fool some of the people all of the time, and all of the people some of the time, but you cannot fool all of the people all of the time'.

In neoclassical economics the REH was articulated first and most powerfully by John F. Nash, Jr. His concept of a game's (Nash) equilibrium is defined as the set of strategies, one for Jill and one for Jack, such that Jill's strategy is the best reply to Jack's and vice versa (i.e. Jack's strategy is also his best reply to Jill's). The point here is that if both choose these very strategies then, by definition, each will find that his or her predictions of the other's behaviour are confirmed. Now, consider two theories: the first theory, T1, predicts that Jack and Jill will choose their Nash equilibrium strategies (which are, by definition, self-confirming). The second, T2, predicts that they will choose some other pair of strategies that are not in a Nash equilibrium. Clearly, the latter can only occur if Jill, Jack or both entertain mistaken expectations about one another. The central difference, therefore, between theories T1 and T2 is that T2 alone predicts well only if Jill, Jack or both hold systematically mistaken predictions about the outcome. Theory T1, by contrast, makes no such assumption. Indeed, its prediction is that the players will behave in a manner that confirms their expectations. In this sense, theory T1 is consistent with the REH whereas T2 is not.
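To make the notion concrete, here is a small sketch (ours; the payoff numbers are invented) that finds the pure-strategy Nash equilibria of a 2x2 game by checking best replies. The equilibria it returns are exactly the self-confirming strategy profiles that theory T1 predicts:

import itertools

# A hypothetical 2x2 game (payoffs are illustrative, not from the text):
# each cell maps (Jill's strategy, Jack's strategy) -> (Jill's payoff, Jack's payoff).
payoffs = {
    ("A", "A"): (3, 3),
    ("A", "B"): (0, 2),
    ("B", "A"): (2, 0),
    ("B", "B"): (1, 1),
}
strategies = ["A", "B"]

def is_nash(jill, jack):
    """A profile is a Nash equilibrium when each strategy is a best reply
    to the other: no unilateral deviation raises the deviator's payoff."""
    jill_payoff, jack_payoff = payoffs[(jill, jack)]
    best_for_jill = all(payoffs[(alt, jack)][0] <= jill_payoff for alt in strategies)
    best_for_jack = all(payoffs[(jill, alt)][1] <= jack_payoff for alt in strategies)
    return best_for_jill and best_for_jack

equilibria = [p for p in itertools.product(strategies, strategies) if is_nash(*p)]
print(equilibria)  # [('A', 'A'), ('B', 'B')]: the self-confirming expectation profiles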

In 1961, John Muth published a paper in which he assumed that when people assigned a future value to any economic variable that they cared to predict (e.g. wheat prices, the price of some share) any error they made was random; that is, that there could be no theory that systematically predicts the predictive errors of investors, workers, managers, etc.2 Note that this is entirely equivalent to espousing theory T1 above on the grounds that it makes no assumption of systematic predictive errors on the part of Jill or Jack.
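A minimal numerical sketch (ours, not Muth's own model) of the distinction: rational forecasts are wrong in each instance, but their errors are random and average out; systematically biased forecasts leave a predictable error that, according to the REH, agents would eventually detect and eliminate:

import numpy as np

rng = np.random.default_rng(0)
true_price = rng.normal(100, 5, size=50_000)  # the variable agents try to predict

# Muth-style 'rational' forecasts: wrong in any single instance, but the
# errors are random, so no theory can predict them and they average out.
rational = true_price + rng.normal(0, 2, size=true_price.size)

# Systematically biased forecasts: the errors have a predictable component,
# which (per the REH) agents would eventually notice and correct.
biased = true_price + 1.5 + rng.normal(0, 2, size=true_price.size)

print(f"mean error, rational forecasts: {np.mean(rational - true_price):+.3f}")  # ~ 0
print(f"mean error, biased forecasts:   {np.mean(biased - true_price):+.3f}")    # ~ +1.5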

Muth's model was largely ignored until the Global Plan collapsed in 1971. Then, it was retrieved by two other Chicago economists, Robert Lucas, Jr. and Thomas Sargent, who applied the idea to their neoconservative brand of macroeconomics, which assumed that the macroeconomy is in permanent equilibrium.3 Of course, to be able to attach a consistent neoclassical equilibrium narrative to the macroeconomy, Lucas and Sargent had to ditch complexity in favour of time. So, in conformity with the demands of the neoclassical form of the Inherent Error, the Lucas and Sargent macroeconomic REH-based model is confined to a single-sector inter-temporal model, one not too dissimilar to that of Frank Ramsey (see Chapter 7).

The REH allows Lucas and Sargent to get rid of Ramsey's central planner and therefore to claim that equilibrium is 'achieved' miraculously by the market itself. Since the agents' expectations are assumed always to be correct, plus or minus some random errors, there is indeed no need for a planner. In a sense, the planner's role has become obsolete courtesy of the strong version E of neoclassicism's third meta-axiom (recall Section 9.5, Chapter 9). Together with the Efficient Market Hypothesis (see previous box), the REH's 'policy implication' is crystal clear: government should keep off!

If the world behaves as this type of theory suggests, it is impossible for output, employment or any other variable that society cares about to be positively affected by means of government intervention. If agents always entertain the correct expectations (plus some random noise), then aggregate output and employment are always going to be as high as they can be. Inevitably, meddling governments can only undermine perfection!

Remarkably, the REH literature dominated at a time of historically high unemployment. How did it manage that? To be consistent with their model, its proponents had to claim that if observed unemployment is, say, 8 per cent, then 8 per cent is the level of unemployment that it is 'natural' for the economy to have at that point - the 'natural' rate consistent with agents' rational, i.e. correct, expectations. Suppose, they added, government tried to suppress unemployment below that 'natural' level by means of 'Keynesian' meddling. The ensuing increase in the quantity of money cannot, in this context, change what people expect (in terms of actual output, employment, etc.) since everyone harbours the correct expectations. Everyone will then know in advance, on the basis of their rational expectations, that the government's effort will leave output and employment unaffected. Immediately they will surmise that prices must rise (since there is now more money in the economy chasing after the same quantity of goods and the same amount of actual labour).

When the Global Plan buckled under, for reasons discussed in Chapter 11, the game changed. In the era of the Global Minotaur, and in Paul Volcker's inimitable words, the disintegration of the global economy and the enhancement of global trade and capital flow asymmetries became a legitimate policy option for the US government, a tool for recovering and reinforcing US hegemony. In particular, energy price inflation, interest rate volatility, significant rises in unemployment, financial deregulation, etc. became essential in the pursuit of the ambitious new project of restoring US economic and geopolitical power after the Vietnam catastrophe, and at the expense of Germany and Japan, which had grown 'too competitive' during the Global Plan era.

The new policy of controlled disintegration of national economies and global capitalism alike, a form of negative engineering that was pursued after 1971, had to have, as all major policy twists do, a veneer of theoretical legitimacy. Now that the Global Minotaur required governments that stood aside while massive asymmetries were gathering pace, especially in the form of the capital flows into Wall Street that were to sustain the United States’ expanding trade deficits, a new form of macroeconomics was necessary; one with a simple message: the good government is one that takes its leave; that concentrates on keeping US inflation below that of its competitors and leaves it to the market to decide everything else.

It took two steps to bring to prominence this type of free-marketeer (or new classical, or neoclassical, or neoconservative, or neoliberal) macroeconomics. First, an end had to be put to the idea of rationally managing an economy, for example, by means of fiscal and monetary policy; the idea that Paul Samuelson had cultivated in the minds of countless students, many of them in the US administration. This proved a simple task once the policy levers of the Global Plan (e.g. a looser fiscal policy to reduce unemployment) ceased to function in the dying days of Bretton Woods and as the massive increase in oil prices was taking hold. The straw-man version of Keynes that neoclassicists like Paul Samuelson had created in the 1960s (see Section 9.3 of Chapter 9 for the argument) had outlasted its sell-by date and could be done away with by lightly blowing in its direction. Such is the fate of impostors when the winds of history change.

Second, a new type of macroeconomics had to take its place, preferably one that recommended the sort of policies needed during the Global Minotaur's formative years. It was none other than the neoclassical variant of the Inherent Error (or, as we called it in preceding pages, the Inherent Error on steroids). It came in different, yet depressingly similar, forms (see Boxes 12.10 and 12.11) which did no more than apply the vulgarised versions of the Nash-Debreu-Arrow formalist method, against its progenitors' wishes, to macroeconomics. The great advantage of these models was that they provided exceedingly complex mathematics that only a tiny band of practitioners understood but whose conclusion was clear for all to grasp: capitalist markets are axiomatically impossible to second-guess, both at the level of individual investments (see Box 12.9) and at the level of the macroeconomy (see Box 12.10). Recessions can and do take place but only because of external shocks that society cannot do anything about through collective or state action and which are best absorbed by allowing the market free rein to respond (see Box 12.11).

Box 12.11 Real business cycles

The Rational Expectations Hypothesis, along with its bedfellow the Efficient Market Hypothesis, left the reader of that quaint literature with an impression that capitalism was a harmonious system that never caught anyone by surprise, constantly confirming everyone’s expectations (give or take a few random and independently distributed errors) and permanently on the road to maximum growth and prosperity; as long as the government stayed out of its hair. Of course, reality was rather different and even true believers, such as Robert Lucas, Jr., saw the need to tell a story about fluctuations, recession, upturns and downturns. The theory of the real business cycle (RBC) was the result.

Turning necessity into virtue, RBC portrayed recessions as capitalism’s rational (and ‘efficient’) response to external or exogenous shocks. The idea here is that the nebulous markets are working optimally in a capitalist world, which occasionally is threatened by events that occur outside its realm. Like a well-functioning Gaia that must respond to and adapt after the crashing into it of a large meteor, so does capitalism react efficiently to exogenous shocks. In neoclassical language, this translates as follows: unregulated capitalist markets maximise inter-temporal expected utility and recessions, when they occur, are part of that ‘plan’; a best reply to extra-economic events that happen on the markets’ periphery; e.g. a tsunami or a crazed oil producer’s decision to inflate oil prices.

But what makes these business cycles real? The answer is that they are not due to some endogenous market failure but, rather, are an efficient reaction to a ‘real’ external shock. With this assumption under their belt, RBC theorists spend their days and nights carrying out statistical analysis (and a lot of ‘filtering’) of aggregate data, usually GDP and GNP data. Their purpose? To filter out of the raw data the growth trend, revealing in full view the fluctuations around the trend that can be seen as random and consistent with some external shock.
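For the curious reader, here is a sketch of the kind of 'filtering' involved, using the Hodrick-Prescott filter that is standard in this literature (the 'GDP' series below is synthetic, invented purely for illustration):

import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(1)

# Synthetic quarterly 'GDP' series: a smooth growth trend plus accumulated
# random fluctuations (stand-ins for the 'exogenous shocks' of the RBC story).
quarters = np.arange(160)
log_gdp = 0.005 * quarters + np.cumsum(rng.normal(0, 0.01, quarters.size))

# The Hodrick-Prescott filter splits the series into trend and cycle
# (lambda = 1600 is the conventional value for quarterly data), leaving
# 'in full view' the deviations to be read as responses to external shocks.
cycle, trend = hpfilter(log_gdp, lamb=1600)
print(f"std of cyclical component: {cycle.std():.4f}")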

The idea that recessions (i.e. below-trend growth) are the market's way of dealing with external shocks is mainly due to Finn Kydland and Edward Prescott (1982).1 They explain all deviations from the trend in terms of events for which the markets are neither to blame nor to commend: catastrophic weather, oil price rises, technological change, political interventions to tighten environmental legislation, etc. These external or exogenous changes cause employment, prices, output, etc. to deviate from their prior equilibrium path until a new equilibrium path is established, one that is consistent with the new external reality.

The connection with both the Efficient Market Hypothesis and the Rational Expectations Hypothesis is intimate. All three turn on the assumption that markets know best. Under the surface, however, they share something far more important: the neoclassical version of the Inherent Error, which forces these theorists to think of capitalism as a single-sector, Robinson Crusoe-like economy where a decision to save is also a decision to invest and the possibility of coordination failure or Keynes' fallacy of composition is simply non-existent.

One does not need to be particularly radical to recognise the inanity of such theories. Larry Summers (1986), who became President Clinton's deregulation guru, and more recently returned as a key figure in President Obama's administration, had this to say: '[R]eal business cycle models of the type urged on us by Prescott have nothing to do with the business cycle phenomena observed in the United States or other capitalist economies'.

What Summers does not say is that RBC theorists have long stopped caring. Taking a leaf out of neoclassical economics' turn in the 1950s, following the Formalists' triumph, they are innocent of any concern regarding the seaworthiness of their theoretical model. Economics became a mathematised religion quite a while before REH and RBC. And like all successful religions, the fact that it was founded on a web of superstition, with nothing useful to say about how the world actually works, never counted against it.

Note

1 See also Lucas (1977) and Stokey, Lucas and Prescott (1989).

The best thing that can be said about the Global Minotaur's macroeconomics is that it was a magnificent combination of higher mathematics with childish political economics. Nevertheless, it condemned a whole generation of economists to thinking of the most complex, disintegrated, precariously balanced period in the history of capitalism in terms of a universe in immaculate equilibrium. A model that applied exclusively to a single-sector, Robinson Crusoe-like economy featuring no actual markets, which some invisible hand was assumed always to keep in equilibrium, ended up as the parable on which humanity had to rely as a source of insights into the Global Minotaur's workings.

Section 7.3 of Chapter 7 told the story of the state of political economics prior to the Crash of 1929. This section tells a disappointingly identical story about the state of economic theory on the cusp of the Crash of 2008. In the former we wrote that, just before 1929 struck, the best way of capturing the political economists' sum of understanding of the workings of capitalism would come in the form of a sentence beginning with: we have no clue! We then went on to say that the rest of the book would be arguing that:

[N]othing has changed since then. Absolutely nothing! Whereas other sciences have moved in leaps and bounds, the insiders of our discipline remain wedded either to single commodity and Robinson Crusoe models (featuring amazingly complex dynamics) or to wonderfully complex multiple commodity (or General Equilibrium) models in which time sits still. Neither variety of model allows us a glimpse of an expectations-driven world in which our expectations are capricious, not because we are not smart enough to form them rationally but because capitalism is indeterminate.

No further comments are needed. Except perhaps for one more box that answers the question on everyone's lips concerning the spectacular failure of so many bright people to see through Wall Street's private money (i.e. the toxic derivatives) before the bottom fell out of that 'market', pushing the whole world into the downward spiral of debt-deflation, unemployment and negative growth. Box 12.12, therefore, completes this section with an account of the mathematics that helped create a market for a burgeoning mountain of CDOs simply by allowing traders to quote one price per CDO on offer. That mountain, and the fictitious prices CDOs were assigned, were to be the Minotaur's undoing.

To recap, from the mid-1990s to 2008 the CDOs (and associated CDSs, see Box 12.12 for a definition) became a new form of private money. Financialised capitalism was quickly hooked. As the flood progressed with no rhyme or reason, capitalism was propelled to its worst implosion since 1929. If we look carefully at the causes of this dynamic, we shall discern, behind the generation of this ultimately destructive form of private money, a mighty alliance comprising

(a) the capital inflows into Wall Street and the City of London that were part and parcel of the Minotaur;

(b) the US government's penchant for deregulation (which started in the 1970s in a bid to disintegrate the Global Plan and reached a climax with the Clinton-Summers legislative moves of the 1990s);

(c) the securitisation of US mortgages into CDOs;

(d) the use by so-called financial engineers of high-order mathematical statistics and computer algorithms that turned options contracts into an impenetrable maze whose ultimate 'achievement' was to assign a single price to each CDO; and

(e) the neoclassical version of the Inherent Error, which provided the key assumptions that 'closed' the financial engineers' pricing models under (d) above.

Box 12.12 overviews (d) and (e), two crucial links in the chain of inauspicious events. It begins with Black Monday (a bleak day in the stock markets back in October 1987), which we used in Chapter 1 as an introduction to the special difficulties involved in predicting financial storms, and tells the story of how financial institutions sought safety in numbers; in estimates of risk, to be precise. These estimates, for example the values VaR and γ which Box 12.12 elucidates, attempted to do the impossible: in Poincaré's words, to discuss untameable chance using the language of certainty.

The formulae that produced these numbers, which in turn ‘informed’ traders during the frenzy that typified the financial markets for almost two decades, were but a fig leaf by which to cover up the nakedness of the models. But it was enough. Traders wanted to believe in them and, while the party lasted, they had no reason not to (except, that is, for a plethora of scientific reasons that should have prevailed upon them). Oodles of cash were being made by those who professed faith in the formulae. There is, indeed, nothing like success in the world of naked moneymaking to reinforce one’s beliefs in magical formulae that look scientific and whose contents almost no one understands.

Box 12.12 Black Monday, 'Value at Risk', CDSs, Dr Li's formula and the Inherent Error's latest clothes

On Monday, 19 October 1987 (Black Monday, as it is now known), the stock markets went into free-fall. In one day, Wall Street lost more than 22 per cent of share values. It was even worse than any single day during the Crash of 1929. By the end of October, the City of London had lost 26.4 per cent, Wall Street followed with a 22.7 per cent fall, Hong Kong equities were decimated by a mindboggling 45.5 per cent, with Australia not far behind, dropping 41.8 per cent. It was clear that the 1980s bubble, which was inflating during the Global Minotaur's energetic early phase, had burst, aided and abetted by the electronic trading systems that had come on line in previous years (allowing traders to program computers to sell automatically en masse when the markets fell by more than a given percentage).

While the Fed and the other central banks managed to stabilise the situation and, indeed, to reverse the losses by the end of the year,1 the shock left an indelible mark on the traders' collective psyche. In Black Monday's aftermath the Efficient Market Hypothesis looked frail and sad. Unwilling to go through another trauma comparable to Black Monday, traders started looking at ways to estimate their exposure to risk, to second-guess the market (against the edicts of the Efficient Market Hypothesis). Most analysts accepted the hypothesis that an abnormal 'point of inflection', like Black Monday, must be preceded by an increase in the 'noise' surrounding price movements just before the crash is about to hit. If that noise, or variation, could be detected in good time, then the hope was that early warning of the impending crash could help a trader salvage a great deal of money by selling up, at the expense, of course, of everyone else.

Value at Risk, or VaR, was developed fully in the early 1990s at Wall Street merchant bank J.P. Morgan. The innovation there was that price information was pooled from all of the trading desks of the company and a single number was produced that, supposedly, answered a simple question: how much does the company stand to lose in case of an adverse market move whose probability was estimated at 1 per cent?

It is said that Dennis Weatherstone, J.P. Morgan's CEO, expected a piece of paper with that number every day at around the close of business (the 4.15 report). Interestingly, J.P. Morgan was possibly the only financial institution that did not take VaR estimates seriously, even though it had developed them in-house and its CEO wanted to see them every afternoon. In fact, J.P. Morgan sold the VaR computational apparatus in the form of an independent business that was absorbed into a company known as RiskMetrics Group. The new firm did brisk business, since the demand for VaR estimates around Wall Street and the City of London was sky high.

The statistical idea behind VaR is as simple as the formulae involved are convoluted. First, one specifies the desired confidence level, usually set at 99 per cent. Then, one looks at the whole portfolio of assets (shares, options, etc.) that one trades in and asks the question: what is the smallest number x such that the probability that my loss today, say L, will exceed x is smaller than 1 per cent? The answer to that question, i.e. x, is the trader’s VaR value at that moment in time. More formally, but saying exactly the same thing,

VaR_1% = inf{x ∈ ℝ : Pr(L > x) < 1%}

So far so good. The question then becomes: how does one compute that probability? This is where the plot thickens and the fraudulent part comes in. The only way to compute x is to approximate the probability that L > x either by means of some assumed probability distribution or by some so-called parametric measure; i.e. a presumption of how many abnormal events (i.e. events whose losses cannot be defined) one can reasonably expect. Either way, the gist here is that to define x one must, effectively, assume that one knows something that is unknowable. Henri Poincaré must have been spinning in his grave.2
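To make the computation concrete, here is a sketch (ours, not J.P. Morgan's proprietary method) of the simplest, 'historical simulation' variant: take a sample of past daily losses and read off the loss threshold exceeded on only 1 per cent of days. The smuggled-in assumption is precisely the one criticised above: that the past sample reliably describes the distribution of future losses.

import numpy as np

rng = np.random.default_rng(7)

# Stand-in for a portfolio's historical daily losses (positive = money lost).
# In reality the distribution is unknown; assuming it is normal, as here,
# is exactly the kind of move the text criticises.
daily_losses = rng.normal(loc=0.0, scale=1_000_000, size=2_500)  # ~10 years

def value_at_risk(losses, confidence=0.99):
    """Historical-simulation VaR: the smallest x such that the observed
    frequency of losses exceeding x is below (1 - confidence)."""
    return np.quantile(losses, confidence)

var_99 = value_at_risk(daily_losses)
print(f"99% one-day VaR: ${var_99:,.0f}")
# Note what this number does NOT say: how much is lost on the days the
# threshold IS breached, i.e. the 1-in-150 or 1-in-200 'events' discussed below.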

A few years later, a Chinese business statistician, Li Xianglin, who later changed his name to David X. Li and worked for RiskMetrics, the Canadian Imperial Bank of Commerce and Barclays Capital, before moving back to China to work for CICC (China International Capital Corporation), developed another famous and, indeed, fatal formula: the Gaussian copula function, which proved manna from heaven for the financial institutions seeking a simple method of pricing their Collateralised Debt Obligations (CDOs), which, as we postulate, played the role of private money that Wall Street created on the back of the Global Minotaur.3 The reason why Li's formula (see below) became all the rage on Wall Street was that it gave banks and other institutions the illusion of a simple, accurate way to price the options involving mortgages (prime as well as subprime).

The problem with pricing CDOs was that, as they comprise many different types of mortgages (some safer than others), to price a CDO one had first to work out an estimate of how the default probability of one type of mortgage was correlated with that of another type. Dr Li's epiphany was to model default correlations by emulating actuarial science's solution to the so-called broken heart syndrome: the observation that people tend to die faster after the death of a beloved spouse. Statisticians had been working for a while on how to predict the correlation between deaths on behalf of insurance companies selling life insurance policies and joint annuities. 'Suddenly I thought that the problem I was trying to solve was exactly like the problem these guys were trying to solve', said Dr Li. 'Default is like the death..., so we should model this the same way we model human life'.

Dr Li tackled this problem by employing a mathematical theorem (by A. Sklar) to model the joint distribution of two uncertain events. In technical terms, the Sklar theorem allowed Li to separate the dependence structure from the univariate margins of any multivariate distribution. In plain language this means that Li had come up with an ingenious way of modelling default correlation that did not require use of historical default data. Instead, he used market data about the prices of specific ‘insurance policies’ called Credit Default Swaps or CDSs.4

The difference between a CDS and a simple insurance policy is this: to insure your car against an accident, you must first own it. The CDS ‘market’ allows one to buy an ‘insurance policy’ on someone else’s car so that if, say, your neighbour has an accident, then you collect money! To put it bluntly, a CDS is no more than a bet on some event taking place; mainly someone (a person, a company or a nation) defaulting on some debt. When you buy such a CDS on Jill’s debt you are, to all intents and purposes, betting that Jill will fail to pay it back, that she will default.

CDSs became popular with hedge fund managers (and remain so to this day) for reasons closely linked to the trade in CDOs. Take, for example, a trader who invests in a CDO's riskiest slice, committing to protect the whole pool of mortgages if the holders of the riskiest mortgages in the CDO default on their repayments. Suppose further that our investor undertakes to cover $10 million of such default losses. Just for that pledge, in the pre-2008 days, he could receive an upfront payment of $5 million, plus $500,000 a year! As long as the defaults did not happen, he would make a huge bundle without investing anything: just for his pledge to pay in case of a default! Not bad for a moment's work - until, that is, the defaults start piling up. To hedge against that eventuality, the trader would buy CDSs that would pay him money if the mortgages in the CDOs he bought defaulted.

Thus the combination of CDSs and CDOs made fortunes for traders at a time when defaults on mortgages were rare and uncorrelated. What gave that combination a great big boost was Dr Li's idea to use the prices of CDSs in order to value the CDOs more easily and quickly! It brought tears of joy to the traders' eyes. All of a sudden, and as long as they trusted the formula's underlying assumptions, they could ignore the nearly infinite relationships between the various parts (i.e. types of mortgages) that made up a CDO. They could set aside concerns about what happens when some partial correlations between components turn negative while others turn positive. All they needed was to keep a trained eye on one single number, the correlation γ, that summed up all the information relevant to pricing the derivative:

Pr[T_A < 1, T_B < 1] = Φ_2(Φ^(-1)(F_A(1)), Φ^(-1)(F_B(1)), γ)

The particular ingredient on which Dr Li's formula hinged is the innocuous-looking γ on its right-hand side and, more importantly, the assumption that it is a parameter. The reader need not bother with the rest of that formula. Just focus on the γ. In plain English, the assumption that γ is constant means that traders assumed away the possibility of a sudden wave of defaults, unanticipated by the actuarial data. The mind boggles: where did Dr Li find the confidence to assume that no such wave would ever gather pace and that his γ's constancy is safe as houses (rather than a fluctuating variable connected to capitalism's unpredictable whims)?

The simple answer is: in the same place that the Formalists derived the confidence to impose the third meta-axiom (see Chapter 8) every time they needed to 'close' one of their models. That place was none other than the fantasy world of economics' Inherent Error, in which the economy operates as if to confirm the economists' mathematised superstitions.
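For concreteness, the formula's mechanics can be sketched as follows (our reconstruction, not Dr Li's production code): F_A(1) and F_B(1) are the marginal probabilities that assets A and B default within the horizon, Φ^(-1) is the inverse standard normal distribution and Φ_2 the bivariate normal distribution with correlation γ. Note how the entire dependence structure hangs on the single, assumed-constant γ:

import numpy as np
from scipy.stats import norm, multivariate_normal

def joint_default_prob(p_a, p_b, gamma):
    """Gaussian-copula probability that both A and B default within the
    horizon, given their marginal default probabilities p_a = F_A(1),
    p_b = F_B(1) and the single correlation parameter gamma."""
    z = np.array([norm.ppf(p_a), norm.ppf(p_b)])  # map marginals to normal space
    cov = np.array([[1.0, gamma], [gamma, 1.0]])
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(z)

# With gamma assumed constant at 0.3, joint defaults look tame...
print(f"gamma=0.3: {joint_default_prob(0.05, 0.05, 0.3):.4f}")
# ...but if correlations lurch towards 1 in a crisis, joint defaults approach
# the single-name probability: precisely the scenario the formula assumed away.
print(f"gamma=0.9: {joint_default_prob(0.05, 0.05, 0.9):.4f}")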

Notes

1    The main reason that the markets were stabilised simply by extending credit and liquidity to the financial institutions was that, unlike in 2008, banks had not yet become replete with private money, i.e. derivatives. In 2008, things turned out very differently when the crash not only wiped out share values but also the entire derivatives market on which the financial alter ego of the Global Minotaur had grown so reliant.

2    Poincaré also understood that which modern financial engineers conveniently, and catastrophically, neglected: that predictions are extremely sensitive to imperceptible initial deviations. In the same book that we quoted before, he wrote: '[E]ven if it were the case that the natural laws had no longer any secret for us, we could still only know the initial situation approximately. If that enabled us to predict the succeeding situation with the same approximation, that is all we require, and we should say that the phenomenon had been predicted, that it is governed by laws. But it is not always so - it may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter... The meteorologists see very well that the equilibrium is unstable, that a cyclone will be formed somewhere, but exactly where they are not in a position to say; a tenth of a degree more or less at any given point, and the cyclone will burst here and not there, and extend its ravages over districts it would otherwise have spared...'

[Poincaré (1908 [1914]), p. 68]

3    See Li (2000).

4    When the price of a credit default swap goes up, that indicates that default risk has risen. Li's breakthrough was that instead of waiting to assemble enough historical data about actual defaults, which are rare in the real world, he used historical prices from the CDS market.

Heads of trading desks would take a look at the VaR estimate that their minnows fed them regularly and think that they knew how much money they were liable to lose at that moment if an unlikely, harmful 'development' (a 1-in-100 'event' like Black Monday) were to occur. However, what most did not understand was that their little VaR estimate had no capacity to tell them what would happen in the case of a 1-in-150, or a 1-in-200, 'event'; an event from which they would lose much, much more. Nor did they allow themselves to ask the most pertinent of questions: why might such 'abnormal' events actually happen?

The reason they did not ask this question was twofold: first, because it would have got in the way of a great deal of moneymaking. Stopping to think in the middle of a feeding frenzy means a smaller catch for great fish and small ones alike. Second, because the 'best' economists in the best economics departments were winning Nobel Prizes with theories proclaiming that all unexpected events are exogenous and basically unpredictable; that they happen for reasons external to capitalism (and, as such, are the sort of events that cannot possibly be given an economic explanation); that capitalism is secure from global events with a probability of less than one per cent. Who were they, the traders, to dispute the brightest Nobel Prize winners?

In the seven years or so before 2008, another important formula was devised: Dr Li’s Gaussian copula function which, as Box 12.12 explains, was the cherry on the cake: a simple formula by which even an unsophisticated trader could ‘calculate’ the value of any CDO he wanted to flog off, comprising a cacophony of different tranches of mortgages and other financial assets. The explosion of the CDO market between 2000 and 2008 owed a great deal to it.

At this point it is important to remind the reader that neither VaR nor Dr Li's formula materialised from thin air. As Book 1 went to great pains to show, from the second half of the 1950s onwards, the economics profession began to view capitalism through the prism of the Nash-Debreu-Arrow approach. It looked nothing like the capitalism that real capitalists, workers, consumers and government officials experienced daily. But it was a powerful viewpoint. Its most enduring impact was finally and irreversibly to deny the notion of radical uncertainty that Keynes had highlighted. At around that time, scholars like Harry Markowitz18 and James Tobin19 introduced a notion of variability that was a total denial of the idea of indeterminacy; of Keynes' insights concerning the nature and depth of uncertainty. It was the notion that Fischer Black and Myron Scholes would harness, in 1973, to present the first formula that claimed to offer a practical way of pricing derivatives, which later morphed into VaR, before evolving into Dr Li's little toxic formula.

The direct lineage of these formulae in the formalism of the 1950s can be gleaned from their fundamental axioms: zero transaction costs; continuous trading; and, of course, the assumption of Brownian motion. The latter is what we may call the 'killer' assumption. In other words, it means that the model assumes away not only the possibility of crises but, incredibly, also the possibility that changes in prices are patterned (as opposed to totally random).20
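For reference, a minimal sketch of the 1973 Black-Scholes call-pricing formula (the standard textbook statement; the parameter values below are invented). The Brownian motion 'killer' assumption enters through the single volatility parameter sigma, presumed constant and fully descriptive of the share's randomness:

import math
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Textbook Black-Scholes price of a European call: S = share price,
    K = strike, T = years to expiry, r = risk-free rate, sigma = volatility.
    The formula presumes frictionless, continuous trading and share prices
    following geometric Brownian motion with constant sigma."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

# Illustrative numbers only: a one-year at-the-money call.
print(f"call price: {black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2):.2f}")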

VaR and Dr Li's Gaussian copula formula are stunningly compatible with the rampant neoclassicism of the 1970s (see the hypotheses in Boxes 12.9, 12.10 and 12.11) which, in effect, proclaimed the end of genuine macroeconomics.21 The pricing of the CDOs presupposed the same theoretical move that formalist neoclassicists had been making in the context of bypassing the Inherent Error; assumptions that fell under the umbrella of the strong version E of their third meta-axiom. So, a type of theory that cannot handle time and complexity together became the foundation of a fiendishly complex financialised capitalism that evolves at breakneck pace along time's arrow. The world's largest market (by value) was founded upon a theory that could not survive outside the pages of a formalist's scribbling.

However well versed one may be in the intricacies of the story, one cannot but return to the same question, again and again: why were the prices that Dr Li's formula generated believed? Why did numerous smart, self-interested market operators, whose livelihood depended on the constancy of Dr Li's γ parameter (see Box 12.12), never question that obviously flawed assumption? The question becomes even more pressing when it is pointed out that, just as J.P. Morgan's staff had mistrusted the VaR measures of risk they had themselves developed (possibly because they understood better than anyone what went into them), Dr Li did not really believe his own model's pricing recommendations either!22

The answer is twofold: first, because they were captives of herd-like behaviour and would have risked their jobs if they moved against the pack;23 second, because during the Global Minotaur era political economics had rid its textbooks and leading research programmes of all dissident voices that might have warned against such assumptions. In short, the economics profession had successfully peddled a form of mathematised superstition which armed the hand of the traders with the superhuman, and super-inane, confidence needed (perhaps against their better judgement and wishes) to bring down the system which nourished them; a very contemporary tragedy indeed.

12.4 The abandoned protégés: Japan and Germany in the Age of the Minotaur

The dimming of the Rising Sun24

During the years of the Global Plan, Japan achieved enormous export-led growth under the patronage of the United States (which guaranteed Japan cheap raw materials and access to US and European markets; see Chapter 11). Japanese wages rose throughout the period but not as fast as growth and productivity. Government spending was directed to building infrastructure for the benefit of the private sector (e.g. transport, R&D, training, etc.) and only minimally to the end of providing a social safety net for the population at large.

This set of priorities ensured that, in the context of the well-regulated international environment of the Global Plan, Japan was to grow on the back of an export drive that proceeded in three phases. At first, the emphasis was on exporting primary products and importing light industrial goods. Then, by the late 1950s, Japan moved to exporting light industrial goods and importing heavy industrial goods plus raw materials. Lastly, it matured enough to export heavy industrial goods while limiting its imports to scarce raw materials.

Production was based on large-scale capital investments yielding impressive economies of scale in the context of a highly oligopolistic industrial structure known as keiretsu (e.g. Mitsui, Mitsubishi and Sumitomo). These conglomerates were vertically integrated, hierarchical organisations around which revolved (through an intricate subcontracting system) countless Small-to-Medium-Sized Enterprises (SMEs) or chusho-kigyo. This structure still survives today, and the SMEs account for about 80 per cent of total employment. However, their contribution to overall productivity is quite low, estimated at less than half of the average level of larger firms.

Thus, Japan's industry is bifurcated. At the centre we find the conglomerates, dominating the economy and producing its substantial productivity gains, while around these centres we observe many small business clusters that create most of the employment but little of the nation's productivity. This combination of oligopoly capital, many small businesses and a government that spends a tiny percentage of its large outlays on social programmes lies at the heart of Japan's reliance on foreign demand for its output in order to maintain aggregate demand domestically. In short, it is the main reason why, after the Global Plan gave way to the Global Minotaur, Japan's macroeconomy was so seriously destabilised.

Japan's banks were traditionally controlled by the state. This afforded the authorities leverage over investment, the result being a relatively easy implementation of the 'national policy' of industrialisation in the post-war period. Japanese firms were actively discouraged from financialisation (from seeking their own finance through the money markets, in other words), with the Ministry of Finance performing that task on their behalf (in association with the Bank of Japan). Consequently, the flow and circulation of capital was usually directed by the banks affiliated to each respective industrial grouping.

During the Global Plan, and under the tutelage of the United States, authoritarian de facto one-party rule (by the almost invincible Liberal Democratic Party) ensured that the Japanese state maintained a high level of structural autonomy from civil society. In this sense, it is impossible to explain Japan’s path without affording its policymakers a major part in the unfolding drama. After 1971, when the Global Plan was shredded and the Global Minotaur got underway, the dollar’s initial devaluation forced upon Japanese officials an urgency to find ways of maintaining competitiveness. In this context, the appreciation of the Yen, which happened the moment Bretton Woods died, was effectively countered with the export of capital through foreign direct investment (FDI) and through capital outflows to the United States.

In short, to keep its oligopolistic industry going, Japan had to nourish the Global Minotaur with continuous transfers of capital to Wall Street. The, perhaps tacit, agreement between Japanese and American authorities was simple: Japan would continue to recycle its trade surpluses by purchasing US debt (i.e. government bonds and securities) and, in return, Japan would have privileged access to the US domestic market, thus providing Japanese industry with the aggregate demand that Japanese society was incapable of producing.

However, there was a snag. When one buys foreign assets, at some point these assets start generating income which must, eventually, be repatriated. Japan thus 'ran the risk' of ceasing to be a net capital exporter, turning instead into a rentier nation. This prospect was at odds with the post-oil-crisis Japanese growth strategy, which was to concentrate on high-value-added, low-energy-using industries like electronics, integrated circuits, computers and mechatronics (industrial robots).

On 22 September 1985, the United States, Japan, W. Germany, France and Britain signed the Plaza Accord. The 'advertised' purpose of the agreement was to devalue the US dollar in an attempt to rein in the US trade deficit.25 Our interpretation is quite different. The purpose was, at least in part, to prevent Japan from becoming a rentier nation, a development that would jeopardise both Japan's own long-term plans and the Global Minotaur (whose wont was to remain the undisputed global rentier).26

In the years that followed, the rise in the Yen forced the Japanese economy into the lap of a major, sustained slowdown. Indeed, the 1990s are commonly referred to as Japan's lost decade. The slow-burning crisis was due to a flood of liquidity and over-accumulation. In an attempt to keep up the rate of investment when Japanese exports were becoming dearer in the United States, the Bank of Japan pumped a lot of liquidity into the system. The result was the largest build-up of excess liquidity in modern history. Its side effect was massive speculative activity in real estate.27

When that speculative bubble burst in the early 1990s, following a rise in interest rates whose aim was to limit liquidity, house and office prices crashed. The nation's banks ended up with huge loans on their books that no one was ever to repay. Although their central location in Japan's industrial structure (the keiretsu) ensured that they would not go to the wall, they were nevertheless weighed down by these non-performing loans. All injections by the Bank of Japan into the banking system were partly absorbed by these loans, thus curtailing the injections' effect on real investment. Largely because of its zombie banks, banks that were neither dead nor truly alive, the Japanese economy was caught in a liquidity trap from which it has yet to recover. No matter how far the interest rate dropped, and it was never far above zero, it failed to reignite investment.

The very structure of Japan's oligopolistic industry and its citizens' great sense of insecurity (reinforced by the absence of decent social welfare provisions, and translating into high savings ratios) combined to deny the country the levels of aggregate demand that would otherwise have restored growth. The only components of effective demand that have been keeping the Japanese economy afloat since the early 1990s are (a) direct government expenditure on infrastructure and (b) net exports.

The most substantive global repercussion of Japan’s stagnation was the effect of almost zero interest rates on capital flows from Japan to the United States. To the already large amounts of capital that the government of Japan was investing in US government debt, and the equally large amounts of capital that Japanese firms were diverting to the United States as foreign direct investment (e.g. the purchasing of American shares, whole firms or the setting up by Sony, Toyota, Honda, etc. of production facilities on US soil), a third capital flow was added: the so-called carry trade by financial speculators who would borrow in Japan at rock bottom interest rates and, subsequently, shift the money to the United States, where it would be lent or invested for much higher returns. This carry trade significantly expanded the Minotaur’s inflows, thus speeding up the financialisation process described in the previous section.
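The arithmetic of the carry trade is simple enough to state as a back-of-the-envelope approximation (the notation here is ours, not the original text’s): the speculator’s excess return per period is roughly

\[ r_{\text{carry}} \approx i_{\text{US}} - i_{\text{JP}} - \Delta e, \]

where \(i_{\text{US}}\) and \(i_{\text{JP}}\) are the US and Japanese interest rates and \(\Delta e\) is the percentage appreciation of the Yen against the Dollar over the holding period. With \(i_{\text{JP}}\) pinned near zero, the trade remained profitable so long as the Yen did not appreciate faster than the interest rate differential; which is also why the sharp post-2008 Yen revaluation (see Box 12.14 below) unwound it so abruptly.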

Perhaps the greatest threat to Japanese capitalism is that, unlike the United States, Japan has not managed to cultivate a hegemonic position in relation to South-East Asia.

Box 12.13 The Minotaur’s little dragons: Japan and South-East Asia

Ever since the Korean and, more significantly, the Vietnam Wars caused capitalism to take root in South-East Asia, Japan has played the hegemonic role in the region. Japan constitutes the source of both initial growth and technological progress for that part of the world. However, it would be false to argue that Japan was to South-East Asia what the United States was to Germany and Japan during the Global Plan. The reason is that Japan never absorbed the surpluses of South-East Asian industries the way that the United States absorbed Japan’s and Germany’s. Indeed, South-East Asia is in a structural (or long-term) trade deficit with Japan.

This situation was sustained by a capacity to generate net export revenues outside that part of the world. During 1985-95, the decline in the value of the dollar was accompanied by a shift of Japan’s foreign direct investment towards Asia. In a few short years, Japanese oligopoly industrial capital had spread its wings over Korea, Malaysia, Indonesia and Taiwan in the form of exported capital goods used in both production and the building of new infrastructure. This development, which was always part of the intention behind the 1985 Plaza Accord, reinforced South-East Asia’s trade deficit vis-à-vis Japan. As the Japanese were always incapable of generating sufficient aggregate demand, the pressure to find export markets for South-East Asian output outside Japan grew stronger.

Once again, the United States came to the rescue. For, unlike Japan, which could produce everything except the requisite demand necessary to absorb its shiny, wonderful industrial products, the United States, under the Minotaur’s gaze, had learned the art of creating immense levels of demand for other people’s goods. Thus the United States became the export market for the area as a whole, inclusive of Japan, while South Korea and Taiwan imported mostly from Japan.1 This process created, perhaps for the first time, the Japanese vital space that the Global Plan’s designers had imagined but never implemented fully (because of Chairman Mao’s unexpected success).

Note

1 The greatest development in the region since 2000 has, of course, been the rise of China. It is interesting to note that, if Hong Kong is included in our statistics for the period 2000-8 (and it must be, since it is a major gateway for the trade of the People’s Republic), the combined current account position of China vis-à-vis Japan is negative. In this sense, the pattern was reproduced all the way to the Crash of 2008.

While Korea, Taiwan, Malaysia, Singapore, etc. relied on Japan for technology and capital goods, they could not look to it as a source of demand. The whole area remained tied to the Minotaur and its whimsical ways. The rise of China in the late 1990s only exacerbated the situation, since it added another surplus country, and a Great Dragon one at that, to the equation. Thus, the Rising Sun, the Great Dragon and the Little Dragons remained, at least prior to 2008, wedded to the Minotaur, feeding it with capital in the hope that it would, in return, devour their surpluses.

Box 12.14 Prodigal Japanese savings: a return with global repercussions

Following the Crash of 2008, the Yen revalued substantially, dealing another blow to Japan’s plans for export-led growth out of the mire. The conventional wisdom is that, at a time of crisis, capital flows back to the largest economies in search of safe havens, and that this is the reason why the Dollar and the Yen are rising. But that leaves unanswered the question of why the Yen was rising so fast against the Dollar. The explanation consistent with the above is that, following Japan’s stagnation in the 1990s and beyond, Japanese interest rates had collapsed to almost zero. Japan thus began to export its savings, worth at least $15 trillion, in search of higher interest rates. This carry trade was partly responsible for the fast pace of financialisation in the rest of the (mainly Anglo-Celtic) world, as much of the exported savings ended up contributing to Wall Street’s private money (i.e. were used up buying CDOs, CDSs, etc.).

In the panic of 2008, a mass return of Japanese capital (the part of it that did not ‘burn up’) back to Japan was caused by the collapse of Wall Street’s private money and, of course, the fall of US and EU interest rates to the near-zero, Japanese-like levels.

The long-term effect of this ‘return’ of Japanese savings back to Japan is serious. On the one hand, it deepens Japan’s stagnation through the appreciation of the Yen. On the other, the end of the Yen carry trade translates into an upward push for world interest rates, despite the central banks’ efforts to push them down.

The Deutschmark’s new clothes

In sharp contrast to Japan’s travails during the Global Minotaur’s formative years (1973-80), during which Japan’s average growth rate was an anaemic 0.3 per cent, W. Germany was able to protect its trade surplus from the devaluation of the dollar by exploiting its dominance of the vital space the United States had previously laboured so hard to create on Germany’s behalf: the European Common Market, that is, today’s European Union (EU). The role of German exports to the rest of Europe remained that which the Global Plan’s US architects had envisioned: they supported a strong Deutschmark and, at the same time, played a central role in the industrial development of the rest of Europe. Indeed, German exports were not just Volkswagens and refrigerators but also capital goods essential for the normal functioning of every aspect of Europe’s productive apparatus.

Nevertheless, Germany was not Europe’s locomotive. From 1973 onwards, the developmental model of continental Europe rested on the combined effect of maintaining a powerful capital goods industry, linked through Germany to global oligopolistic corporations. However, the aggregate demand that kept these corporations going was always scarcer in their home countries than in their neighbours. Like Japan, Germany showed a magnificent capacity to produce efficiently the most desirable and innovative industrial products but, at the same time, failed miserably to generate endogenously the requisite demand for them. That demand came from Germany’s European periphery, or vital space as we call it, and, during the Minotaur’s halcyon days, not only from across the Atlantic but also (in the 1998-2008 decade) from China.

Much ink has been expended in recent years in discussing Europe’s fundamental heterogeneity. The latter results from the coexistence of three groupings under the EU’s umbrella: (a) persistent trade surplus generating countries (Germany, Holland, the Flemish part of Belgium; to which more recently Austria and Scandinavia were added); (b) persistent trade deficit inducing countries (headed by Italy and including Greece, Spain, Portugal); and (c) France, in a category of its own, for reasons that we explain below. For short, we shall refer to (a) as the ants, to (b) as the magpies and to (c) as... France.

The ants are the producers with the best and shiniest capital goods and, as a result, the most capital intensive output. Their industries are highly concentrated oligopolies sustained by both excess capacity and technological innovations of the highest order. However, just like Japan’s sparkling industry, they cannot possibly generate sufficient demand for their own wares locally and this is why, once the Global Plan was gone, they desperately sought institutional arrangements with their neighbouring magpies, plus France, to whom they would siphon an increasing share of their output. The European Common Market, whose creation was (as we argued in the previous chapter) a US policy decision that goes as far back as the late 1940s, was the obvious set of institutions for fostering such a link.

Turning to the magpies, while the governments and entrepreneurs of countries such as Greece and Portugal dreamed of convergence with the European north, the fact is that they proved unable to generate net exports because, their export growth notwithstanding, they have weak domestic capital goods sectors, so much so that any sustained expansion in national income yields a rising import content. In that sense, convergence was a bridge too far for the magpies because the infrastructural work needed to support their industries required large imports of capital goods from the ants (primarily from Germany).

The only magpie that did fairly well in the convergence game was Italy which, on occasion, even became a trade surplus country. However, these successes, judged from a neo-mercantilist’s perspective, were predicated upon the devaluation of the Italian Lira relative to the Deutschmark. Italy, in this sense, is a curious magpie in that it could occasionally transfigure itself into an ant by waving the magic wand of aggressive currency devaluations. All that, of course, came to a grinding halt with the introduction of the Euro, which turned Italy into a fully fledged magpie.

And then there is France, the outlier in the European family.28 The French elites, its government and private sector leaders alike, consistently aimed at a trade surplus; a neo-mercantilist goal that was, alas, seldom achieved. Being the largest destination not only for German but also for Italian exports, France only attained a net surplus during 1992-3, after the collapse of the Euro’s predecessor, the European Monetary System (EMS); a system whose aim was to limit fluctuations in the exchange rates between European currencies. What marks France out (e.g. from Italy) are two strengths.

First, the calibre of, and expertise within, its political institutions, which (perhaps due to its Napoleonic past) were the nearest Europe got to a policymaking civil service that might rival that of Washington (or of London during the days before the Great War). Second, France sports a large banking sector which is more advanced than that of the ants, and of course of the magpies. Because of the gravitas of its banks, France achieved a central position in the facilitation of trade and capital flows within the European economy. At the same time, before the Euro’s introduction, the importance of its financial sector ruled out the strategy of competitive devaluations (like those of Italy) from the French government’s menu of policy options. After the Euro, French banks were upgraded further as they tapped into the energy of financial waves stirred up by the Minotaur’s rampant impact in New York and London.

Summing up the third category of EU states, which comprises France alone, we may perhaps unkindly (but pretty accurately) describe it as an aspiring but chronically underachieving neo-mercantilist desperately seeking a way out of its ‘special’ place in the European scheme of things on the strength of its financial sector.

Back to Germany, the ants’ undisputed queen: from 1985 onwards, the Global Minotaur’s drive to expand the US trade deficit translated into a major improvement in Germany’s current account or trade balance. It was the new scheme of things: the United States imported as if there were no tomorrow and Germany exported to the United States both industrial goods and the capital necessary to nourish the Minotaur. The improvement in Germany’s trade surplus vis-à-vis the United States rubbed off on the rest of the EU, which saw its collective trade position go into surplus. This was the environment in which the forces that would create the common currency, the Euro, gathered pace. Each grouping had different reasons for wanting to link the currencies up.

From the 1970s onwards, Germany was keen to shore up its position in the European scheme of things as a major net exporter of both consumption and capital goods and a net importer of aggregate demand. While the Global Minotaur was turning the United States into the country in which consumption-led growth exceeded the growth in domestic productive capacity, German policymakers intentionally aimed for the opposite: for a growth rate that was below that of the rest of Europe but, at the same time, for a capital accumulation (or investment) drive that was harder and faster than that of its neighbours.29

The aim of this policy was simple: to accumulate more and more trade surpluses from within its European vital space in order to feed the Minotaur across the Atlantic so as, in turn, to maintain its own export expansion within the United States and, later, China. That was indeed Germany’s response to the challenge of maintaining its surplus-led growth model after the end of the Global Plan and once US authorities had chosen to allow global capitalism to enter a state of partial disintegration under the menacing domination of the Minotaur.

The one spanner in the works of this German strategy was the threat of competitive currency devaluations that Italy and other countries were using to good effect to limit their trade deficits vis-à-vis Germany. The idea of a monetary system to limit currency fluctuations grew from the ants’ preference for intra-European currency stability, which would kill three birds with one stone: (a) it would remove the ants’ uncertainty about the Deutschmark value of their exports to the magpies; (b) it would render their cross-border production costs more predictable, given that the ants were engaged in considerable FDI (foreign direct investment) in the magpies’ currency area (e.g. German manufacturers setting up whole factories making Volkswagen gearboxes in Spain or washing machine components in Portugal); and (c) it would solidify the surplus position of the ants in relation to the magpies by locking in a permanent differential between their growth rates and the magpies’ growth rates; a differential which translated into permanent infusions of aggregate demand from the magpies to the ants.

The other groupings, the magpies and France, had different reasons for embracing the idea of monetary union. Starting with the magpies, their elites had grown particularly tired of devaluations - plain and simple. The fact that the Deutschmark value of their domestically held assets (from the value of their beautiful villas to that of their shares in domestic banks and companies) was liable to large and unexpected falls weighed heavily upon them. Similarly, the magpies’ working classes were equally tired of a cruel game of catch-up. All their hard-fought nominal wage gains were liable to be wiped out at the stroke of the finance minister’s pen, which was the sole requirement for a loss of up to 30 per cent of the local currency’s value, and an immediate increase in the price of imported goods (mainly imported from the ants) which played a central role in the workers’ basic goods basket.

As for France, it, too, had three reasons for seeking a lock-up between the Franc and the Deutschmark: (a) it would strengthen the political elite’s bargaining position vis-à-vis the powerful French trade unions, in view of the moderate wage rises across the Rhine that German trade unions negotiated with employers and the Federal government; (b) it would shore up its already important banking sector; and (c) it would give its political elites an opportunity to dominate Europe in the one realm where French expertise was far ahead of its German counterpart: the construction of transnational political institutions.30 For these three reasons, France and the ants discovered common ground for combining the oligopoly industrial capital priorities of the ants with the project of la construction européenne aspired to by France.31

The road to the Euro began with the ill-fated European Monetary System (EMS), a mechanism of coordinated action by the central banks whose purpose was to intervene in the currency markets in order to keep the exchange rates within a narrow band. Alas, the Global Minotaur was stirring up so many waves of speculation that it very soon became clear that the combined forces of the central banks had insufficient firepower in Wall Street and the City of London to impose their will on the speculators. When George Soros, along with other astute money market gamblers, recognised a weakness in the EMS, and in particular saw that the British Pound and the Italian Lira were overvalued (relative to the two countries’ trade and fiscal positions), he took out huge bets that the Pound and the Lira would devalue. In a game of chicken that lasted 24 tense hours, Soros and the Bank of England fought it out by betting serious amounts of capital against one another: Soros betting that the Pound would devalue, the Bank that it would not. The question was who would blink first. Once other speculators began to think that the Bank of England was losing too much money, and would have to withdraw from the contest, they added their bets to Soros’ pile and, soon after, it was game over. The Pound and the Lira exited the EMS and Soros made more than $1 billion in one night.

That traumatic experience was all that Germany required to decide to give up its beloved Deutschmark, as the price it was prepared to pay for the benefits of ending currency fluctuations within its European vital space. The rest, magpies and France, also took fright at the sight of the unprecedented power of the money markets and decided that huddling together under a renamed Deutschmark was a good idea.

The formation of the Euro, and the period of adjustment that led to it, crystallised the preceding situation and enabled Germany and the other ants to reach exceptional surpluses in a context of deepening stagnation in the rest of Europe. These surpluses became the financial means with which German corporations internationalised their activities in the rest of the world; whether that internationalisation materialises in the form of FDI, mergers and acquisitions, or mega joint ventures (such as the construction of the Beijing-Lhasa railway line, or the touted collaborative construction of the Santos-Antofagasta line), it will shape the way local capitalisms react in the post-2008 environment. Yet, the primary prerequisite for such internationalisation of the German capital goods sectors is the continued growth of the Bundesrepublik’s external surpluses with the rest of Europe.

In an important sense, Germany became the Global Minotaur’s European Simulacrum:32 an economic system fully integrated within the European sphere, courtesy of the US-sponsored Global Plan, which mopped up increasing quantities of the shrinking aggregate demand pie of its neighbours. In the process, it put the magpies and France into an effective slow-burning recession, which was the price the latter had to pay for hooking their currencies up to the Deutschmark.

While the Simulacrum was the mirror image of the Global Minotaur (in that it drained the rest of Europe of demand, rather than injecting demand into it), it nevertheless had a similar end outcome. Just as the Minotaur exported a recessionary environment to the rest of the world in order to preserve its hegemony, so did the Simulacrum maintain Germany’s relative position during that global receding tide by exporting stagnation into its own European backyard. The mechanism for accomplishing this was the celebrated Maastricht Treaty (later augmented in Dublin and Amsterdam), which laid down the following rules of monetary union:

• budget deficits for member states capped at 3 per cent of GDP;

• debt to GDP ratios below 60 per cent;

• monetary policy to be decided upon and implemented by an ‘independent’ European Central Bank, the ECB, whose single objective was to keep a lid on inflation; and

• a no transfers clause (or no bail-outs, in post-2008 parlance), which meant that, if member states ever got into fiscal trouble, they should expect no assistance from the Euro’s institutions (ECB, Eurogroup, etc.).

The above stipulations were ‘sold’ to the European public and elites as reasonable measures for shielding the Euro from ‘free riding’ by member states and, thus, creating credibility for the new currency. However, there was a hidden agenda that was as crucial as it was unspoken. The no transfers clause, in particular, became an all-consuming ideology (one that was later to be dented in 2010 by the Greek fiscal crisis) which signified the ants’ determination to use the creation of the Eurozone as a mechanism by which to cast in stone the ‘obligation’ of the magpies (plus France) to provide the ants with net effective demand for their exports.33 Summing up, the difference of this Minotaur Simulacrum from its transatlantic cousin was that (a) it fed on other people’s aggregate demand (unlike the Global Minotaur, which fed on other people’s capital and goods);34 and (b) it remained hopelessly dependent upon the Global Minotaur (as the Crash of 2008 confirmed; see below). Under these circumstances, with the Global Minotaur draining the world of capital and its German sidekick draining real capital from within the non-surplus EU region, it is hardly surprising that European growth rates declined during every single one of the last four decades. Meanwhile, Europe has been falling more and more under the spell of German surpluses, a predicament that was only ameliorated during the Global Minotaur’s halcyon days by net exports to the United States. But when 2008 struck, even that silver lining was removed.35

German reunification and its global significance

The collapse of the Soviet Union, which began unexpectedly around 1989, soon led to the demolition of the Berlin Wall. Chancellor Helmut Kohl moved quickly to seize this opportunity to annex the DDR (East Germany, as it was more commonly known). Conventional wisdom has it that the inordinate cost of Germany’s reunification is responsible for the country’s economic ills and its stagnation in the 1990s. This is not our reading.

While it is undoubtedly true that reunification strained Germany’s public finances (to the tune of approximately $1.3 trillion), and even led it to flout the very Maastricht rules that it had initially devised to keep Europe’s magpies on the path of fiscal rectitude, at the same time it conferred upon German elites new powers and novel policy levers. On the negative side, German capitalism had to shift its focus from traditional investments in technology, innovation and engineering to more mundane things, like rebuilding from scratch the East’s infrastructure and environmental reconstruction. However, these burdens proved an excellent investment, for reasons that go well beyond the national pride derived from fulfilling an understandable dream: to reunite the country’s two parts after 40 years of enforced separation.

The first effect of reunification on the real economy was the reduction in labour’s bargaining power.36 The West German trade unions tried to enforce similar wages and conditions in the East. Soon they found out that the complete implosion of the East’s industrial sector and social economy left them no room for manoeuvre. Moreover, East Germany was not the only part of the former Soviet empire that collapsed. So did Eastern Europe as a whole, from Poland to Slovakia and from Hungary to Ukraine. The effect of the availability of dirt-cheap labour, either within the Federal Republic’s new borders or close by, was to depress German wages wholesale. Thus, German industry’s intra-European competitiveness rose and, by 2004, Germany had become, once again, the world’s largest exporter of industrial goods.37

To sum up, Germany’s response to the cost blowout of reunification was the pursuit of competitive wage deflation. In the run up to the introduction of the euro, Germany was locking into its labour markets substantially decreased wages in relation to the wages elsewhere in what was to become the Eurozone. Almost in a bid to copy the Global Minotaur's domestic strategy (recall Table 11.4 in the previous chapter), the German Simulacrum promoted a strategy of restraining wage growth to a rate less than that of productivity growth. Once the Euro was introduced, and German industry was shielded from the competitive currency depreciation of countries like Italy, its gains from the fall in wages became permanent.
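The logic of this strategy can be summarised in a back-of-the-envelope identity (again, our notation rather than the original text’s): since unit labour cost is the wage bill per unit of output, its growth rate is approximately

\[ \widehat{ULC} \approx \hat{w} - \hat{q}, \]

where \(\hat{w}\) is the growth rate of nominal wages and \(\hat{q}\) that of labour productivity. Keeping \(\hat{w}\) persistently below \(\hat{q}\) pushes unit labour costs down year after year; and inside a currency union, where the magpies could no longer devalue to offset the difference, this differential compounds into a permanent competitiveness gap in Germany’s favour.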

Additionally, Germany’s system of collective wage bargaining, based on a corporatist-cum-neo-mercantilist entente between German capital and the German trade unions, enabled the gap between productivity and wages to be more favourable to capital than in the rest of Europe. The gist of the matter was low growth reinforcing German export competitiveness on the back of continual real wage deflation and vigorous capital accumulation. As the Global Minotaur began to soar, after 2004, Germany’s trade surplus took off in sympathy, capital accumulation rose, unemployment fell to two million (after having risen to almost double that) and German corporate profits rose by 37 per cent.38 However, even though the picture seemed quite rosy for the German elites, something rotten was taking over its banking sector; a nasty virus that the Minotaur Simulacrum had wilfully contracted from the Global Minotaur itself (see Box 12.15). And when the Crash of 2008 happened in New York and London, that virus was energised in earnest. It was to become the beginning of the Euro’s worst crisis ever.