Chapter 4

Preventing Perfect Financial Storms

When Everyone Was Too Clever by Half

What is happening in the credit markets today is a huge blow to the Anglo-Saxon model of transactions-oriented financial capitalism. A mixture of crony capitalism and gross incompetence has been on display in the core financial markets of New York and London. From “NINJA” subprime lending, to the placing and favorable rating of assets that turn out to be almost impossible to understand, value, or sell, these activities have been riddled with conflicts of interest and incompetence.

These events have called into question the workability of securitized lending, at least in its current form. The argument for this change—one that I admit I once accepted—was that it would shift the risk of term-transformation (borrowing short to lend long) out of the fragile banking system and onto the shoulders of those best able to bear it. What happened, instead, was the shifting of the risk on to the shoulders of those least able to understand it.

—Martin Wolf, “Why the Credit Squeeze Is a Turning Point for the World,” Financial Times, December 11, 2007

Widely acknowledged as the dean of world financial commentators, Martin Wolf outdid himself in these prescient comments made during the early days of the global financial crisis of 2007–2010. Going deeper, Gretchen Morgenson’s Reckless Endangerment argues that a self-dealing network of politicians/lobbyists/regulators/bankers enriched themselves at the expense of the broader public. Greed on stilts! While Wolf, Morgenson, and many others are right in what they say, they fail to point out that much deeper forces were at work here—forces that would transform the credit collapse into a genuine Perfect Storm. These deeper forces were identified only 15 years ago in a completely new concept of market risk developed by Mordecai Kurz at Stanford University. This, incidentally, is the same Kurz who coauthored the book on fiscal policy with Kenneth Arrow that was central to our Lost Decade story in Chapter 2. Kurz’s new theory of endogenous risk (risk that bubbles up from inside an economic system) shows that the greed, incompetence, and conflicts of interest stressed by Wolf certainly exacerbate Perfect Storms, but are not in fact necessary for their occurrence.

The primary purpose of this chapter is to explain how such storms arise, and to do so from first principles utilizing the paradigm introduced by Professor Kurz. His theory represents the “higher order of deductive logic” I utilize in this chapter to help clarify an otherwise intractable problem, just as I drew upon the Arrow-Kurz logic in Chapter 2. Because this new theory permits an identification of the true causes of market risk, and does so at a deeper level than ever before, it makes possible two other important advances. First, it can help policy makers identify valid policies for preventing or at least mitigating future financial market storms. That is, the field of risk management receives a boost. Without understanding exactly what causes Perfect Storms to arise, meaningful storm-prevention policies cannot be identified.

Second, the field of risk assessment receives a boost. There are of course many perspectives on market risk, including the popular “fat tail” theories celebrated by Nassim Taleb in his delightful book The Black Swan. But these theories usually describe risk rather than explain it at a fundamental level. Once again, without an understanding of the factors that give rise to risk in the first place, how can risk be properly assessed? For reasons stressed by Taleb and others, it is simply not enough for quants to massage historical data in an effort to determine the true probabilities of future events. For the probabilities of future events such as market meltdowns, the very probabilities that define future risk, cannot be assessed without first assessing the probabilities of the underlying causes of such events.

One caveat is in order here at the outset. The global financial crisis of 2007–2010 consisted of two main stages. In the first stage, the mortgage banking crisis in the United States and the United Kingdom held center stage. In the second stage, the crises in these two nations spread across the globe like wildfire immediately after Lehman Brothers was allowed to go under in September 2008. I shall focus on the first stage because this is where the greatest confusion and misunderstanding lies. I ignore the second stage because the causality here is well understood: The collapse of Lehman Brothers and AIG precipitated a panic-driven cessation of interbank lending among all major financial institutions. No bank could trust any other bank to repay a loan of any maturity, and global lending came to a standstill. This in turn caused the crisis to go global, and to impact Main Street as well as Wall Street.

The discussion in this chapter divides into two halves. The first half identifies the four principal sources of the Perfect Storm, viewed at a suitable level of abstraction. In order to do this, I introduce the new logic of endogenous risk, and explain it from scratch. Building on this analysis, I deduce in the second half of the chapter what should be done to prevent future Perfect Storms. There are two main policy proposals. The first requires a scaling down of leverage in financial institutions far greater than today’s “financial reforms” can achieve. This is because excess leverage is far more dangerous than is generally perceived. The second proposal requires too-big-to-fail institutions to isolate their risky proprietary trading activities from their banking activities. Fortunately, progress has already been made in this second policy arena.

Neither of these proposals is novel, on the surface. But I want to justify both policies at a much deeper level than has been attempted to date. In particular, I want to demonstrate why excess leverage of the kind still plaguing the banking system is a nonmarket “externality” or “public bad” that government should rein in for fundamental reasons of public welfare. The reasons why this is true have not been clearly articulated during the recriminations of the past four years. Without an understanding of exactly why excess leverage is so dangerous, and without greater public outrage over the issue, champions of true financial reform will find it very difficult to oppose the all-powerful financial lobby that is resisting meaningful reform. Additionally, a failure to introduce these two reforms will increase the probability of future Perfect Storms, making them more likely than is already feared. In addition to stressing these two primary reforms, I identify a host of secondary proposals that are already generally recognized as necessary to strengthen the financial system. These will be reviewed briefly.

The Four Origins of Today’s Financial Crisis

There were four principal sources of the global financial crisis (GFC). These appear in the schematization of Figure 4.1, a schematization that outlines the flow of this chapter. I will stress the role of the upper box on the left the most. Rarely has bad economic theory (“modern finance”) exacted such a large price from society as it did during the GFC. It led to the creation of financial weapons of mass destruction, to irresponsibly high levels of leverage, to an arresting underestimation of risk, and to a cozy markets-know-best philosophy of deregulation that culminated in the GFC. These corollaries of bad thinking go a long way to explaining what caused the crisis without invoking the more familiar Wolf-Morgenson explanation based upon self-dealing and greed.

Figure 4.1 What Caused the GFC? The Interplay between Four Developments

Source: Strategic Economic Decisions, Inc.


The new theory of market volatility of Mordecai Kurz was specifically developed as an antidote to many of these difficulties of classical financial theory. It has been able to explain some 90 percent of observed volatility as opposed to the 20 percent explained by classical theory, and in doing so to better model the real world. Accordingly, I shall devote considerable time to explaining this new theory, showing how it can explain and help to predict Perfect Storms, and in doing so highlight the shortcomings of classical finance. This in turn leads to a discussion of policies for preventing future GFCs. The contents of the other three boxes in Figure 4.1 are also discussed, with a particular emphasis on the precise meaning of “excess leverage” and why it must be reined in by much more stringent legislation than has been proposed to date.

Bad Economic Theory—Debunking the Conceits of Modern Financial Economics

We will start off with a discussion of the first of the four smaller “boxes” appearing in Figure 4.1, namely the domination of financial economics by highly unsatisfactory economic theory developed between 1960 and 1990. This theory is usually referred to as Efficient Market Theory. More technically, economists refer to it as the theory of “rational expectations.” What I mean by poor economic theory will become clearer in what follows. But here at the outset, a summary definition will be helpful: A theory (whether in physics or economics) is “poor” if it neither explains nor predicts real-world data and if, at a deeper level, its Basic Assumptions are indefensible. In the case of good theories, the reverse is true: the Basic Assumptions from which the theory is deduced are judged “eminently reasonable,” and the resulting theory has the power both to explain and predict real-world data. Additionally, a good theory must be “falsifiable” in Karl Popper’s sense. In what follows, Kurz’s new theory becomes a foil against which the deficiencies of classical financial theory will become crystal clear.

In the case of finance, the efficient market theory is a poor theory insofar as: (1) It posits as a basic (if implicit) assumption that participants in markets do not make mistakes (properly defined); (2) it assumes that all risks can be hedged which, when combined with the no-mistakes axiom, implies that leverage is not a significant problem; (3) it assumes that everyone in a market knows how to correctly “price” the news—there are no disagreements as to how to interpret news; (4) it predicts a level of volatility that is about one-fifth of what is observed in reality—with no Perfect Storm being possible; (5) it gave rise to the creation of new financial securities that did not perform as they were supposed to, and in fact became weapons of massive financial destruction; and (6) it implies that markets left to themselves always function quite well, and will not break down.

As this case study will show, poor theories—not just poor policies—can be very deleterious to the public welfare. They affect how we think, and thus how we regulate, or deregulate for that matter. The GFC was an important reminder that bad thinking can lead to bad policies, and that bad policies matter.

I believe that the new theory of endogenous risk developed at Stanford University is the appropriate antidote to the problems caused by the Efficient Market Theory. Like most good theories, it does not reject the predecessor theory—Efficient Market Theory—but rather generalizes it. It thus includes classical finance as a special case that will only work under conditions that will rarely if ever be encountered in the real world. This was the case with Einstein’s general theory of relativity, which incorporated Newton’s theory of gravity as a limiting special case. Specifically, if and when space-time can be approximated as “flat,” not curved, then Newton’s laws work just fine.

An endnote discusses how the new theory of endogenous risk can incorporate not only Efficient Market Theory as a special case, but also many of the insights of Behavioral Finance—the first effort to create a theory superior to that of Efficient Markets. As a bonus, the new theory makes falsifiable predictions of future prices and quantities, as did classical Efficient Market Theory, whereas most behavioral finance theories fail to do so.1

My job is now to explain the new theory, and convince you of its power. I shall demonstrate how it can explain the emergence of Perfect Storms. Moreover, once the new theory is understood, the true deficiencies of classical Efficient Market Theory will become crystal clear. In particular, I will demonstrate how traditional financial theories have blinded many policy makers, bankers, quants, and investors as to the true nature of risk, and hence the true likelihood of Perfect Storms.

Preconditions for a Perfect Storm in a World of “Good” People

Consider an idealized world of people devoid of incompetence, stupidity, greed, irrationality, and corruption. In such a world, can there be a Perfect Storm? In other words, need we lay the blame for what happened on the kind of factors cited by Martin Wolf earlier? The answer to the latter question is no, as we now see. Of course, the existence of greed, incompetence, and self-dealing certainly exacerbated the magnitude of the storm, but that is all. The true sources of distress lay deeper.

Please consider Figure 4.2, which will be central to this discussion. It exhibits the four preconditions for a Perfect Storm to occur in a hypothetical world of people who are competent, rational, and good. In the middle of the figure there is “The Perfect Storm—Maximal Endogenous Risk.” While I do not like jargon, this word “endogenous” lies at the heart of the new theory and is what gives rise to Perfect Storms. The kinds of risk it incorporates are intuitively appealing, so bear with me.

Figure 4.2 The Deeper Origins of Perfect Storms


Precondition 1: The Right-Hand Oval—Correlated Mistakes

Suppose that the world is sufficiently complex and confusing that most of us have different forecasts about future events, whether these be prices (the value of the stock market in a year), or macro events (the probability of war, recession, global warming, or whatever). By a “forecast,” I mean a person’s own, subjective betting odds (probabilities) on future events given his or her current information. When people have differing forecasts, as is usually the case, at most one person will end up having been “correct,” although usually no one is completely correct. People who discover that their forecasts were incorrect react by shifting their financial portfolios and revising their subsequent forecasts.

When they change their portfolios, demanding more of one asset and less of another, market prices will in turn change. Thus, “being wrong” can cause market volatility. In classical finance, such behavior is ruled out. It is assumed that no one ever changes his or her probabilistic forecast due to having been wrong. For everyone has “rational expectations,” which means their forecasts are never wrong. Readers of this book may find this off the wall, but it is true. This point will be revisited later.

But how much volatility can arise from being wrong? Consider two extreme cases to understand this important point. Suppose half the investors have forecasts that are X percent higher than what they will learn to be true, whereas the other half have forecasts that are exactly X percent lower than reality. Thus, when reality emerges, proving all investors to have been wrong, everyone will reshuffle his or her portfolio. One result will be a large increase in trading volume. Price, however, will not change, since for every investor that was X percent below reality, there was an offsetting investor that was X percent above it. In short, the “mistake structure” in this particular market is self-annihilating, and hence price volatility (risk) will be minimal. This elementary observation makes clear that market volatility (risk) will be greatest when almost all investors are wrong in the same direction. In this case, price and quantity both will change dramatically.
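To make this logic concrete, consider a minimal simulation sketch. It is purely illustrative and is not Kurz’s formal model: the clearing price is simply taken to be the mean forecast, and each investor’s position is proportional to the gap between his or her own forecast and that price.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value, n = 100.0, 10_000

def market(forecasts):
    # Stylized market clearing: price equals the mean forecast; each
    # investor holds a position proportional to (own forecast - price).
    price = forecasts.mean()
    return price, forecasts - price

# Self-annihilating mistakes: half forecast 10% high, half 10% low.
offsetting = true_value * np.concatenate(
    [np.full(n // 2, 1.10), np.full(n // 2, 0.90)])

# Correlated mistakes: nearly everyone forecasts roughly 10% high.
correlated = true_value * (1.10 + rng.normal(0, 0.02, n))

for label, forecasts in [("offsetting", offsetting), ("correlated", correlated)]:
    p0, pos0 = market(forecasts)
    p1, pos1 = market(np.full(n, true_value))  # truth is revealed
    turnover = np.abs(pos1 - pos0).sum()       # portfolio reshuffling
    print(f"{label:10s}  price {p0:6.2f} -> {p1:6.2f}   turnover {turnover:,.0f}")
```

In the offsetting case the price barely moves despite heavy trading; in the correlated case the revelation of the truth moves the price by essentially the full 10 percent.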

This is the case of what Kurz calls “correlated mistakes” or “correlated forecast errors,” and it is a very important source of volatility, one overlooked by earlier theories because mistakes were axiomatically ruled out. Now if a correlated mistake occurs in an unimportant market (e.g., a bet on the future price of cognac), then society at large will not be affected. Yet if the correlated mistake is a bet on house prices nationwide—a bet placed by millions of average investors on their most important asset—then the consequences for society can be very large indeed. During the recent Perfect Financial Storm, this first precondition for a Perfect Storm was met in spades.2

Precondition 2: The Left-Hand Oval—Problematic Hedging

Life would be a lot less risky for almost all of us if we could perfectly hedge all the risks in our lives, the ability to do so being another stringent assumption of classical theory. But as Professor Robert Shiller of Yale and others have shown, many of the most important risks in our lives cannot be hedged. These include the risk of being unfairly fired, of having to sell a house when house prices are low rather than high, of a good marriage ending in divorce, and of stock prices being low rather than high during the month we retire and seek to annuitize our wealth for a guaranteed lifetime income.

You might suppose that nonhedgeability of this kind will become less of a problem as instruments for improved risk-hedging become more available (e.g., many types of “derivative” securities). While there has been great progress along these lines in recent decades, it is well known in the economics community that many of the most important risks will never be hedgeable, due to problems of “moral hazard” and related issues in economic theory.3

On top of the nonavailability of hedges to smooth out some of the bumpiest moments in our lives, there is the fact that hedges can break down and malfunction just when they are most needed, as in times of crisis. Recall the terrifying U.S. market crash of 1987, “Black Monday.” Also recall the collapse of the hedge fund Long-Term Capital Management in 1998. In both cases, distress was compounded and hence “risk” was greater when numerous hedges ceased to function. The problem here is quite simple. A hedge is constructed on the assumption that, whenever the price of asset A drops, the price of some other asset B will rise. Statistical analysis reveals such patterns, which are then utilized to construct hedges.

But in times of crisis, the prices of assets A and B may both fall in tandem. Poof goes that hedge! Billions upon billions of dollars have been lost by investors, including PhD quants, who never allowed for this source of risk and market pandemonium.4 In the case of the recent Global Financial Crisis, not only did hedges melt away, but huge quantities of securities (not merely mortgage-backed assets) proved unhedgeable and indeed unsalable, as Martin Wolf correctly pointed out earlier.

Precondition 3: The Lower Oval—Pricing Model Uncertainty

Have you ever been in a car with other impatient passengers, vainly trying to locate a destination on a roadmap when you cannot read the map? Confusion and mistakes result. The same idea arises in finance due to investors’ inability to read another kind of map. Do you recall from school the idea of a “function” F? For example, consider Y = F(X), expressing the concept that variable Y is a function F of some other variable X. Suppose X denotes “the news” in some market, and Y denotes market price. Then the function F transforms or “maps” this news X into price Y. In traditional finance and indeed economics as a whole, it was always assumed that, if investors knew the news, then they would know the price, or more correctly, the true probability distribution of prices that would result. Everyone knows and agrees on the function F. This is called Pricing Model Certainty.

Now let’s consider a different function Z = G(Y) where Z denotes the price of mortgage-backed securities—that class of weird, newfangled assets that melted down during the GFC. Let Y in this case represent the mortgage default rate on single-family houses. So this time, the function G specifies mortgage-backed security prices as a function of mortgage default rates Y.

Suppose, finally, that investors neither understand nor agree on the nature of the function G. More specifically, assume that everyone knows that everyone else is uncertain about the true nature of G. Then, even if everyone knows the news about the default rate, they will not know the price of the associated securities. In my own research, I have called this Pricing Model Uncertainty. It can be proven that the greater this type of uncertainty, the greater the volatility of the market will be; the greater the magnitude and duration of any bubble; and, when the bubble bursts, the greater the resulting crash. Price “overshoot” thus occurs in both directions. Classical financial theory assumes that Pricing Model Uncertainty does not exist. This is one reason why classical theory has trouble explaining market booms and busts, and why it underestimates risk.
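A small numerical sketch may help. The linear pricing map below is hypothetical, invented purely for illustration, but it captures the mechanism: the same news pins down one price under an agreed-upon G, and a wide band of prices when beliefs about G diverge.

```python
import numpy as np

rng = np.random.default_rng(1)
face, default_rate = 100.0, 0.08   # the "news" Y, known to everyone

def g(default_rate, k):
    # Hypothetical linear pricing map G (illustrative only): each unit
    # of default rate removes k units of value, floored at zero.
    return max(0.0, face * (1.0 - k * default_rate))

# Pricing Model Certainty: everyone agrees the sensitivity k is 5.
certain = np.array([g(default_rate, 5.0)] * 10_000)

# Pricing Model Uncertainty: investors hold diverse beliefs about k.
uncertain = np.array([g(default_rate, k) for k in rng.uniform(2.0, 8.0, 10_000)])

for label, prices in [("certain", certain), ("uncertain", uncertain)]:
    print(f"{label:9s}  mean={prices.mean():6.2f}   dispersion={prices.std():5.2f}")
```

With an agreed-upon G, the news fixes the price; with disagreement about G, the very same news supports a wide range of valuations, and prices can drift with shifting beliefs rather than with the news itself.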

The existence of Pricing Model Uncertainty in the recent Perfect Storm was implicitly acknowledged by Federal Reserve Chairman Ben Bernanke in a speech he gave at the Plaza Hotel in New York some four years ago, early in the financial crisis. He apparently asked a large audience: “We know the news, but can anyone in this room stand up and tell me what this stuff (mortgage-backed securities) is worth?” I did not attend the speech and cannot vouch for its exact details, but I read about it with great interest, as I was writing a paper on Pricing Model Uncertainty at that very time. Martin Wolf expresses a similar view in his comment cited at the beginning of this chapter when he points out that these assets were “almost impossible to understand, value, or sell.”

The reason why this particular type of uncertainty generates price overshoot can be intuitively summarized as follows: The more a group of investors knows that the pricing model is unknown (for example, that “no one knows how high is high or how low is low”), the more each trader will have an incentive to stay with the trend (thus prolonging overshoot on the way up and on the way down) rather than exit early and be penalized for relative underperformance.

So this is our third precondition for a Perfect Financial Storm: maximal Pricing Model Uncertainty. It is important to note that the kind of risk being generated here encompasses short-term volatility as well as longer-term trends, including what traders call momentum-driven overshoot.5

Precondition 4: The Topmost Box—Excess Leverage

The last requirement for a Perfect Storm is that the various parties involved be maximally leveraged, just as both banks and homeowners were in the years preceding the GFC. The intuitive reason for precondition 4 is pretty obvious. Just think of yourself having made a significant investment without any leverage. From the start, there is the risk that this investment will either increase your wealth or reduce it. Now, suppose that you make the same investment, but you leverage your position 1:1; that is, you borrow half and put the other half down in cash. While you can now gain much more wealth if the investment pays off, you also face the prospect of losing much more of your net worth if the investment sours. Since almost all humans attach much more importance to losing what they have than to gaining more, there is an innate tension between having too little leverage (not being willing to assume a mortgage to buy a house for a growing family) and having too much leverage (not being able to sleep well at night).
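The arithmetic behind this intuition can be sketched in a few lines (a stylized calculation that ignores margin calls; the borrowing rate is set to zero purely for simplicity):

```python
def equity_return(asset_return, leverage, borrow_rate=0.0):
    # Leverage = assets / equity: 1 means unlevered; 2 means the 1:1
    # case in the text (borrow half, pay cash for the other half).
    return leverage * asset_return - (leverage - 1) * borrow_rate

for r in (0.20, -0.20):
    print(f"asset {r:+.0%}:  unlevered {equity_return(r, 1):+.0%},  "
          f"1:1 levered {equity_return(r, 2):+.0%}")
```

A 20 percent fall in the asset becomes a 40 percent hit to net worth at 1:1 leverage; at 10:1 the same fall wipes out the investor’s equity twice over.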

Interestingly, it is possible to utilize the economics of uncertainty to calculate the optimal amount of leverage for any investor, as long as we know the individual’s “taste for risk” (and this can easily be measured), and the risk/return investment opportunities he or she confronts. This result can then be extended from the case of an individual to groups of individuals and indeed to all members of society. This is very important: For once we can define the optimal amount of leverage for an individual or for society as a whole, we can then determine whether or not there is excess leverage for either. This point will be stressed further on when we discuss policy solutions to financial crises. As will be shown, it turns out that excess societal leverage is a true “public bad” that must be regulated. For the moment, what matters is the role of excess leverage in amplifying the endogenous risk that the other three preconditions would give rise to on their own. This amplification process is highly nonlinear, and can explode into fat-tailed events.
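The argument above does not depend on any single formula, but one standard formalization from the economics of uncertainty is Merton’s classic portfolio rule, in which optimal exposure to a risky asset rises with its expected excess return and falls with risk aversion and variance. The numbers below are purely illustrative:

```python
def merton_exposure(mu, r, sigma, gamma):
    # Merton's rule: the optimal fraction of wealth in the risky asset
    # is (mu - r) / (gamma * sigma^2); values above 1 imply borrowing.
    return (mu - r) / (gamma * sigma**2)

# Illustrative inputs: 7% expected return, 2% safe rate, 16% volatility.
for gamma in (1, 2, 4):   # gamma measures the investor's "taste for risk"
    k = merton_exposure(0.07, 0.02, 0.16, gamma)
    print(f"risk aversion {gamma}:  optimal exposure {k:.2f}x wealth")
```

Within any such framework, “excess leverage” acquires a precise meaning: exposure beyond the optimum implied by the investor’s own measured taste for risk and the opportunities he or she faces.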

The Importance of the Four Preconditions: Six Propositions

It is now possible to summarize the importance of the four preconditions of Figure 4.2 in the form of six summary propositions.

Proposition 1—Perfect Financial Storms Can Arise without Malfeasance

In a world of “good” people (i.e., a world without malfeasance, incompetence, or conflicts of interest), the higher the values of each or all of the four precondition variables (e.g., the greater the leverage), the greater will be the resulting market turbulence. When all four variables assume “high” levels, a true “perfect storm” results, with very fat-tailed outcomes. In the earlier discussion, I attempted to make intuitively clear how each variable on its own increases risk. Without a much more formal analysis, however, it is impossible to demonstrate how pernicious the nonlinear interactions between high levels of each can be.6 Happily, the recent global financial crisis offers a real-world example of the magnitude of distress that can result from precisely such interactions. Remember that during this recent crisis, all four variables assumed values of “high,” as I have taken pains to demonstrate.

All in all, the best way to think about Figure 4.2 is to understand that market volatility and societal distress increase explosively with increases in the values of the four preconditions. We can now extend this finding in an obvious manner.

Proposition 2—The Magnitude of Perfect Storms Is Increased by Malfeasance

Suppose we relax the “good people” assumption made previously. We now incorporate stupidity, greed, malfeasance, conflicts of interest, and the general incompetence that Martin Wolf cited. Suppose, moreover, that the net effect of such real-world deficiencies is to increase the excesses that would result without these influences. For example, suppose that (1) house prices are driven even higher because of false promises of price appreciation by sellers and brokers, and/or because regulators failed to impose higher down payments as prices soared; that (2) bogus theories of risk assessment mislead bank supervisors and others because they fail to incorporate considerations of endogenous risk; and that (3) incentives are skewed so as to transform the concept of “due diligence” into a bad joke (NINJA [no income, no job, no assets] loans), and so forth. Then the amount of market risk and price overshoot predicted in Proposition 1 will be even greater. In sum, these human foibles that commentators stress, and that are frequently cited as the “causes” of the crisis, emerge as add-ons to the deeper story told in Figure 4.2. Thus little more will be said about them below, which is not to deny their importance.

Proposition 3—Why Perfect Storms Cannot Arise in Classical Financial Theories

What kinds of assumptions did efficient market theories introduce that ruled out as impossible every one of the four preconditions cited earlier—and by extension, Perfect Storms? Briefly, classical finance was predicated on four main assumptions that produced this result.

1. Mistakes-Free Economics

There can be no “correlated mistakes” as a source of volatility. This is because the so-called assumption of rational expectations incorporated within these theories assumes that, while the future is uncertain, all investors possess the same probabilistic forecast of the future, and that this forecast is correct. That is, while no one is assumed to know the future because of uncertainty, all investors know and agree upon the true probabilities of future outcomes. Thus they do not know if it will rain or shine next August 23, but they are assumed to know that it is 20 percent likely that there will be rain rather than sun. This is the “relative frequency” probability that can be computed from historical data (e.g., it has rained 20 percent of the time on all August 23 dates during the past century). Since everyone has access to the same data, and has an equal ability to crunch the data, all agents end up with the same probability forecast. It is correct and it remains correct across time. There are no structural changes. This assumption goes by the term “stationarity” in statistical theory.

The result is “mistakes-free economics” in the sense that no one will look back and say: “Gosh. Things have changed. My forecast based upon historical data was wrong. Structural changes like the advent of global warming have changed the odds of rain and shine. There is no longer an objective truth that I can mine from the historical data. Thus any forecast I arrive at will be subjective, and others will have forecasts different from mine.” More often than not, it is the advent of difficult-to-predict structural changes (e.g., the rise of China, the invention of derivatives, or the advent of global warming) that make our forecasts wrong, ex post. Moreover, given the irrelevance of much historical data about such new developments, different investors will inevitably hold diverse beliefs (forecasts) about the future. Sometimes these will be both correlated and wrong.

At other times, forecasts can be wrong because of psychological “biases” of the kind stressed in modern behavioral finance. No structural changes per se are needed for such biases to exist. Finally, at still other times, forecasts are wrong because the phenomena being forecast are inherently counterintuitive. Thus, the ancients were wrong in supposing that the sun rotated around the earth, despite lots of data suggesting that it did. They did not understand that it only appeared that the sun orbited the earth, but that in fact the earth was spinning on its own axis. Note that in this case, people’s mistakes did not reflect “irrationality” at all, but rather outright ignorance. There are many analogous situations in economics, some of which are reviewed further on.

To sum up, the classical theory’s assumption of “stationarity” (a structural-change-free environment) rules out Precondition 1 of our Perfect Storm story. It guarantees that everyone will learn the truth by crunching historical data, and that there will be no panic selling (or buying) as investors discover they have been wrong. Structural changes do not occur. Mistakes do not occur. Check any finance textbook you wish, and you will never find the word “mistakes” in its index. In contrast to all this, within the new theory of endogenous risk, the advent of structural changes is all-important in explaining not only why mistakes occur, but why forecasts are both diverse and subjective, and why market risk is thus much greater than in classical theory.7

2. Perfect Hedging

Classical theory also traditionally assumed that all uncertainties not only can be hedged, but must be hedged in order that all goods and services be efficiently allocated. This was one of the principal findings of the seminal 1953 paper by Stanford’s Kenneth Arrow, which I cited in Chapter 3. This paper extended the concept of “market equilibrium” from a world of certainty about the future to a world of subjective uncertainty.8 But as Robert Shiller at Yale and many others have documented, many of the most important risks we face neither are nor ever will be hedgeable. This is the famous “missing markets” problem in economic theory. It is one reason, but only one, why markets cannot always know best, and hence often allocate resources inefficiently.9

It is also assumed in classical theory that hedges do not melt down and malfunction as they usually do in periods of extreme market turmoil. More specifically, the assumed “stationarity” (nonchangeability) of the environment guarantees that the correlation of asset prices within a hedge never changes: There are no periods when the prices of all assets within the hedge start moving in the same direction, thus undermining the hedge.

3. Pricing-Model Certainty

Classical theory stipulates that, given the news X about any asset market, everyone will know the correct new price Y of the asset. Everyone knows all the road maps linking news to price, or to use my earlier notation, they know the function F mapping news into price. Moreover, what they believe they know about F is assumed to be correct.

4. Optimal Yet Irrelevant Leverage

Classical theory assumes that investors will optimally leverage their positions in accord with their own risk tolerances. “Excess leverage” per se is rarely discussed. Intuitively, why would leverage be an important source of market risk when it is assumed that all investors know the correct probabilities of all future events (and prices), and can hedge away any undesired uncertainty via the perfect-hedging assumption cited previously? The result is a theory predicting much less volatility and risk than Kurz’s theory.

Classical theory also predicts a very different kind of risk: “exogenous” as opposed to “endogenous” risk. The distinction here is all-important. These two adjectives derive from the Greek exo and endo, referring, loosely speaking, to external and internal. In classical theories, the only source of risk concerns variables exogenous to the system, for example a change in Fed policy, or a change in tax rates. Endogenous risk refers to all other sources of risk, sources bubbling up from within the system, such as the volatility spike that occurs when highly leveraged investors experience correlated mistakes. Scholars such as Robert Shiller have found that external news can only explain about 20 percent of observable market risk. When endogenous risk is added in, some 90 percent can be explained, as Kurz has shown in his research. Thus endogenous risk is all-important.

The main point here, to conclude, is that in a classical setting of no mistakes, no endogenous risk, and complete hedging markets, leverage is not an interesting source of risk. In the real world, where none of these assumptions holds, leverage becomes all-important. Taken together, these four assumptions of classical theories ensure that such theories will predict a far lower level of volatility and risk than a successor theory that rejects all of them as unrealistic. In short, classical theories cannot in any way predict the occurrence of Perfect Storms in financial markets.

Proposition 4—The Fundamental Conceit of Classical Financial Theory Gets Exposed

The comments and behavior of bankers, economists, and policy makers during the past 30 years make abundantly clear the power that efficient-markets financial theory has had. The paradigm led to the development and production of an array of dazzling new financial products. The increasingly dominant “markets know best” view adopted feverishly by then–Fed Chairman Alan Greenspan and others led to reduced regulation in many aspects of financial market activity. The conceit that risk could be optimally “sliced and diced” and indeed nearly eliminated on a decentralized basis (i.e., via the market alone) pervaded both Wall Street and the City of London. But none of it was true. Instead, it was a bad joke, one debunked theoretically by Kurz’s new theory, and empirically by the experience of the world financial crisis, and of many related crises in the past.

Proposition 5—Excess Leverage Is the Most Important Policy Variable in Explaining the Perfect Storm

What can be said about the policy implications of our analysis in Figure 4.2? To understand this, a very important distinction must be made between what engineers call “state variables” and “control variables.” A state variable is a variable that is “taken as given.” It cannot be significantly changed via human policy. Examples of state variables include the greediness of people, the inability of people to make correct forecasts, and human incompetence. A control variable on the other hand is a variable that can be changed by policy choices. Control variables are often referred to as policy variables in economics.

The only control variable, and hence the only policy-relevant variable, among our four preconditions is excess leverage. This is indicated in Figure 4.2 by the fact that the topmost variable, “excess leverage,” is in a rectangle rather than an oval. The other three variables are state variables about which we might complain, but can do little. The all-important point here is that we are largely whistling Dixie when we wail about missing hedging markets, hedging meltdowns, forecast mistakes, pricing-model confusion, and the incomprehensibility of complex financial products.

Quite simply, you are not going to change these aspects of reality. They are state variables. The same can be said about greed, yet another state variable. In proposing to regulate greed, we might as well go further and propose exorcising sexual aggression in teenagers. Greed is a fact of life, and any suggestion that bankers today are intrinsically greedier than they used to be will be hard to defend. To be sure, there has been a proliferation of opportunities to be greedier, and to appear to be greedier (e.g., a hedge fund manager getting mega-rich via 400-to-1 leverage made possible by financial engineering). But basic human instincts have not changed.

This is not the case with leverage: It can be regulated, it has been regulated, and it must be reregulated given its all-important role in hatching Perfect Storms. This is the main message of Figure 4.2, which I ask you to revisit one last time. When we consider a host of concrete policy reforms introduced later in the chapter, controlling leverage will emerge as the most important challenge for the reasons previously given. It can be dealt with. It is a policy variable.

Proposition 6—The New Theory Could Revolutionize Risk Assessment

Much is now being said about the burgeoning field of “risk assessment.” It is a truth universally acknowledged that the world needs superior risk assessment. Experts assume that with ever more data, better crunched than ever before by ever more risk management consultants, we may soon be able to assess the true risks of future Black Swans. Is this true? The new theory of endogenous risk cuts both ways in helping us answer this question. On the one hand, it promises hugely improved qualitative risk assessment. For now that we have discovered four new determinants of market risk (the four Perfect Storm preconditions of Figure 4.2), we can monitor them and determine whether they are pointing towards another Perfect Storm, or not. Thus, dramatically improved qualitative forecasting may be at hand.

On the other hand, it can be proven as a theorem that endogenous risk cannot be quantified with the precision that traditional exogenous risk can be (e.g., the objective probability of rainfall being the percentage of rainy days every August over a long sample period). For example, a simple change in beliefs about the future by a group of investors can change the “true” probabilities of the future in a way that cannot be objectively assessed. The situation recalls the Heisenberg uncertainty principle in quantum physics: the mere act of observing a particle’s position changes its momentum, and vice versa.10 Despite this particular limitation to the ambitions of risk quantifiers, Kurz’s new theory has profoundly deepened our understanding of when and why fat tails and Black Swans will occur. We finally know what variables to look for before we start “assessing” the wrong risks as we so often have in the past.11

This completes our discussion of the role of “bad economic theory” in Figure 4.1, the first of four different causes of what went wrong in the global financial crisis. The second development to which we now turn can be viewed as an extension of this first cause, although the emphasis here is on poor political theory as opposed to poor economic theory.

Misguided Theories of Market Deregulation—“Markets Always Know Best”

The advent of laissez-faire regimes associated with President Reagan and Prime Minister Thatcher transformed the dominant philosophy of government regulation of the economy. In the case of financial markets, the advent of a new “hands-off” philosophy was reinforced by the rise of the Efficient Market Theory in financial economics. After all, if markets really do know best, who needs government interference? We once again witnessed how important ideas can be in impacting policy, just as John Maynard Keynes observed in pointing out that we are all slaves of the ideas of defunct economists.

To understand how dramatic an impact this philosophical revolution had on financial regulation, just consider the two graphs of Figure 4.3. We see here that government largely abandoned two of its most important tools for regulating financial leverage: (1) the margin requirement for purchasing equities, and (2) the reserve requirement of the entire banking system—arguably the principal lever of Fed power when the Fed was established in 1913.12 Note also that the result of both of these deregulations was to generate a much more leveraged economy.

Figure 4.3 Deregulation Gone Wild: Two Examples of What Happened

Source: Federal Reserve Board of Governors, Strategic Economic Decisions, Inc.


The margin requirement, expressed as a percentage, is the difference between the market value of the securities being purchased, or carried, and the maximum loan value of the collateral as prescribed by the Board of Governors. The reserve requirement (or required reserve ratio) sets the minimum reserves that each bank must hold relative to customer deposits and notes. Please note that, due to slight variations in reserve requirements across regions and changes in the banking structure over time, the reserve requirement line represents a weighted approximation of historical data.

To bring this story to life, consider what happened to the stock market margin requirement in 1958. The first graph shows how it was tightened in only three months from 50 percent to 90 percent due to a stock market bubble. The message to investors was: “We are strapping on your seat belt for your own protection, folks.” Contrast this with the recent 2004–2007 housing bubble: As house prices soared, home owners were allowed and indeed encouraged to put ever less down on their houses. Millions of homeowners ended up with virtually infinite leverage on their investments. Even my Labrador retriever knew this was idiocy.
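The leverage arithmetic behind margin and down payment requirements is worth making explicit: the maximum assets-to-equity ratio is simply the reciprocal of the required equity fraction.

```python
def max_leverage(required_equity_fraction):
    # A 50% margin caps leverage at 2:1; a 90% margin at about 1.1:1.
    return 1.0 / required_equity_fraction

for down in (0.90, 0.50, 0.20, 0.03):
    print(f"{down:4.0%} down  ->  max leverage {max_leverage(down):5.1f}:1")
```

A 3 percent down payment (common at the height of the housing bubble) permits roughly 33:1 leverage, and a zero down payment permits literally infinite leverage.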

In commenting upon the abandonment of using changes in margin requirements to rein in equity market speculation in the United States, Yale economist Robert Shiller wrote:

Why did the Fed abruptly abandon its active margin-requirement policy in 1974? An important reason was the influence of the new Efficient Market Theory, the idea that markets always work extremely well. Eugene Fama’s highly influential academic article “Efficient Capital Markets” appeared in 1970, and Burton Malkiel’s book A Random Walk Down Wall Street was first published in 1973. Ever since, the Efficient Market Theory has been a powerful influence. Most of our leaders who might feel like commenting on the level of the market have retreated from doing so, thinking that such an action might be viewed as rash and irresponsible.

In applying his analysis to the tech bubble that was peaking when this article was published in 2000, Shiller goes on to point out:

If the Fed were to follow the same (margin-requirement) procedure today as in the past, then given today’s very high price/earnings ratios and recent equity price increases, margin requirements would probably be over 90 percent today. The absence of such a reaction from the Fed board members today, the abandonment of their old concern about speculation, and the reluctance of national leaders to say anything about speculation in the market, must be part of the reason why the current boom is bigger than any before.13

To further understand the impact of the new “markets-know-best” philosophy, consider then–Fed Chairman Greenspan’s explanation of his refusal to “interfere” in either of the two celebrated U.S. bubbles of the two most recent decades: the high-tech and housing bubbles. Despite his admission that both reflected “irrational exuberance,” Greenspan did little to curtail the rise of either. He proffered the lame excuse that “it is hard to know whether there is a true bubble until after it bursts.”

Really? Was the chairman not familiar with the statistical mean reversion that characterizes every known asset class, with the singular exception of contemporary art, where the worse the art, the higher the price, ad infinitum? But the Fed was not the only culprit here. Consider the abject role of the SEC in permitting the leverage afforded to broker-dealers to rise from around 12:1 to over 35:1 in 2004. How could the regulators involved have permitted such levels of leverage, given the obvious consequences of doing so? What is important to understand in all of this is that a new philosophy arose during the Thatcher-Reagan-Greenspan era—one that legitimized this markets-know-best view of the world. Once again we see that philosophies matter. They hatch ideas. These in turn hatch policies. And bad ideas hatch bad policies.

Emergence of a Pathological Incentive Structure

The excesses and resulting societal distress precipitated by these philosophical developments were exacerbated by a perverse change in the incentive structure operative throughout the financial community. Bluntly, it became rational for mortgage brokers and bankers to develop and peddle ever more complex and opaque products. These “enhanced yield” securities promised suspiciously high returns for an amount of risk that was either deliberately or carelessly underestimated.

Moreover, in a low-yield world, there was a huge demand by investors of all stripes for high-yield instruments. After all, without sufficiently high returns, investment managers got fired by clients who had become used to 12 percent indexed returns throughout much of the 1980s and 1990s. Finally, large fees were garnered by those who produced and marketed such securities. Even worse, the story was not restricted to new-fangled securities tailored to the alleged needs of sophisticated institutional investors like pension funds. It trickled down to normal home owners to whom bankers peddled subprime mortgages with negligible down payments and teaser interest rates—insanely risky loans.

Exacerbating matters was the reality that many of these new instruments were inherently complex and thus opaque. When this is the case, it becomes partially forgivable for a banker selling such instruments to exaggerate their true risk/return appeal. Nonetheless, “buyer beware” takes on a new and ominous meaning when the underlying risk and embedded leverage cannot be understood by the very quants that created the products in the first place, much less by anyone else.

Anyone who doubts the importance of this comment need only note the recent advent of so-called “mark-to-model” asset valuations—valuations of complex assets reflecting the failure of traditional mark-to-market valuations to exist at all! This is an entirely new phenomenon in applied finance, one unthinkable within the dominant paradigm of classical Efficient Market economics. And this is another reason why I spent a good bit of time discussing the new theory of risk developed at Stanford University, since it can accommodate such phenomena in a rigorous manner.

What can be said about the behavior of product originators and investment bankers who peddled those sophisticated and opaque securities that have collapsed in value and are now regarded as a bad joke? Do we assume that there was an increase in the percentage of bad guys per se? Not necessarily. For to some extent, this behavior stemmed from the advent of new incentives and new technologies that made it rational for bankers to do what they did. In the past, their behavior would have been different due to the different incentive structure that prevailed, and due to the fact that financial engineering had not yet come of age.

One final point should be made about the role of pathological incentives in helping to create the global financial crisis. Consider the scandal whereby banks could bundle many mortgages together into “securitized packages,” and then sell these to yield-hungry investors as almost riskless. Outrageously, banks were free to do this without retaining any ownership of the products peddled. If they have no skin in the game, so to speak, then why would they bother to do their homework when creating such packages? No wonder the quality of loans turned out to be much lower than was either advertised or expected.

The same point about a lack of skin in the game applies to the structure of today’s investment banking firms. These were once partnerships where the partners had every reason to be cautious about how their own capital was deployed. The buck stopped with them. This was no longer true once Goldman Sachs and many other firms became limited liability corporations, and could play a Heads-I-Win, Tails-You-Lose game with the taxpaying public.

Who is to blame when undesirable behavior results from pathological incentives? It is easiest to blame the immediate perpetrator for his or her behavior. But is this really fair? The incentives that drive all of us to do what we do in our lives are best viewed as “given by society” and not chosen by any of us individually. To this extent, bankers themselves were perhaps not as blameworthy as they appeared to be in the recent crisis. The moral is that, if society does not like the behavior that results from an existing set of incentives, then legislators and regulators must change and improve the incentive structure. Instructing people to act “better” given unchanged incentives is hypocritical and will not succeed. This is as true at the level of the family as it is of entire societies.

Excess Societal Leverage

The fourth and final box appearing in Figure 4.1 contains the title “Excess Leverage.” I have already stressed the all-important role of leverage in amplifying distress during a Perfect Storm. I have also stressed the fact that the only policy-relevant control variable in Figure 4.2 was leverage. The other three Perfect Storm variables are state variables that are not controllable. Finally I mentioned that it is possible to determine the optimal amount of leverage both for individuals and for society as a whole. This will be a function of people’s degree of risk aversion, and of the probabilities of gains and losses they think they face when making an investment. Once we can determine what is optimal, we can also determine what is excessive.

What I did not do was to make the case that government should not allow excess leverage, since it is a “public bad” or “externality” that must be reined in. It is important to prove this because, if we do not establish it from first principles, then markets-know-best dogmatists can oppose leverage regulation as unnecessary and “meddlesome.” At a much deeper level, I also want to show that excess leverage is problematic above and beyond its role in amplifying Perfect Storms. For even without any such storms, excess leverage remains a drag on public welfare. Fortunately, the required proof can be explained with the help of a simple diagram.

Consider a society in which it is legal for people who wish to leverage way up to do so, borrowing as much as they wish to or can. Suppose that some subset of the population avails itself of this ability, typically players in the financial sector. Suppose that the remainder of the population does not wish to leverage up, other than to assume normal (and often optimal) amounts of household debt. Might a nonmarket “externality” arise here, to the extent that those who choose to leverage up can cause significant harm to those who choose not to? If this is the case, then government has an a fortiori right to regulate leverage. But do those who leverage up highly harm the general public, and if so, how? This is the root question, and one that is rarely addressed.14

The answer is that yes, they do. An informal proof can be given by discussing Figure 4.4. The solid line, or “turnpike” as economists call it, represents the “natural” rate of growth of wealth and living standards of an economy over time. The greater the slope of the line, the more rapid wealth growth becomes. This growth rate depends upon such factors as the growth of the workforce, of the capital stock, and of labor productivity. In classical growth theory dating back to the work of scholars such as Frank Ramsey and John von Neumann early in the twentieth century, leverage and uncertainty played no role in determining the nature of this turnpike.

Figure 4.4 Explaining the Wealth Growth Paradox

Source: Strategic Economic Decisions, Inc.


When classical theory is extended to include uncertainty, and the ability to leverage, the economy gets somewhat riskier. Let the dashed line in Figure 4.4 represent the trajectory of the economy in this setting in which everyone is assumed to leverage “optimally” as described earlier on in the chapter. For example, a normally risk-tolerant person might assume a mortgage representing 70 percent of the cost of their house—certainly not 95 percent! And no fund managers exist who would leverage up other investors’ money by factors of 10-, 50-, or even 500-to-1, not even for a single, 10-minute, “once in a lifetime” trade. The curving dashed line in the figure represents normal business cycles in this regime of optimal leverage, and of correspondingly greater cyclical risk.

Now allow a subset of investors (and indeed institutions) to leverage way up. This will give rise to the much riskier ride for the entire economy, as depicted by the dotted line in the figure. The analysis of Figure 4.2 comes into play, and thus we have the Full Monty of endogenous risk, and possibly even a Perfect Storm or two. Business cycles become much more extreme, as indicated in the figure by the dotted line. Remarkably, the first fully formal model showing how gyrating bouts of collective optimism and pessimism (“animal spirits”) in tandem with debt cycles generate business cycles of this kind was published by Kurz, Motolese, and Jin in 2006.15 The underlying ideas go back to Keynes, and more recently to the late Hyman Minsky with his brilliant theory of credit cycles. But a theoretically correct model pulling it all together—along with statistical testing—is only six years old.

Now in this rough-and-tumble new world, there will clearly be huge winners who reap millions and indeed billions from their highly leveraged positions. Others will lose and go bust, and may take down financial institutions with them. The societal problem is that, due to leverage-fueled speculation, to endogenous risk, and to financial-sector fragility, most citizens will not only fail to benefit from these excesses, but will almost certainly end up worse off. Just ask the tens of millions of workers worldwide who lost their jobs to the GFC!

This is how average citizens are hurt, and this is the central point of the argument for reining in excess leverage. It turns out that excess leverage does nothing to increase the long-run growth rate of the economy as a whole, or the wealth of workers. That is, it does not increase the slope of the “turnpike” in the figure. It just creates a much riskier environment with far more pronounced business cycles.

Yet it is axiomatic in economics that society should only assume more risk if the expected returns (average wealth growth) rise appropriately. But this is not the case with excess leverage, as is known within the theory of economic growth. The reason is intuitively clear: Periods of economic boom due to excessive optimism and leverage are inevitably offset by subsequent periods of busts. The long-term average rate of growth (the turnpike) is not impacted.
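A minimal simulation makes the point concrete. The calibration is purely illustrative: both regimes are handed the same 3 percent average annual growth factor, and only the volatility differs.

```python
import numpy as np

rng = np.random.default_rng(3)
years, target_mean = 100_000, 1.03   # same 3% average growth factor

for label, vol in [("optimal leverage", 0.05), ("excess leverage", 0.25)]:
    # Lognormal growth factors calibrated to an identical arithmetic
    # mean, so the two regimes differ only in riskiness.
    mu = np.log(target_mean) - vol**2 / 2
    gross = rng.lognormal(mu, vol, years)
    compound = np.exp(np.log(gross).mean()) - 1
    print(f"{label:16s}  mean factor {gross.mean():.3f}   "
          f"compound growth {compound:+.2%}")
```

Same average, radically different ride. Indeed, the textbook “volatility drag” result implies that, if anything, the added variance lowers the compounded growth rate, strengthening the conclusion that excess leverage buys risk with no reward.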

To conclude, the dashed line is far superior to the excess-leverage-driven dotted line. It delivers the same end result with much less risk. Excess leverage is indeed a public bad, and must be checked. The case for government intervention to limit leverage is thus well grounded in first principles.16

This completes our discussion of the contents of the four boxes appearing in Figure 4.1 that summarize the four origins of the credit market crisis of 2007–2010: the advent of poor economic theory, of a markets-always-know-best deregulatory environment, of pathological incentive structures, and of excess leverage that greatly amplifies the impact of mistakes. We now discuss the two kinds of policy reforms needed to redress today’s state of affairs.

Requisite Policy Reforms

To complete this chapter, and keep things simple, let me just summarize two types of financial market reforms that should make Perfect Storms much less likely in the future. Both proposals stem from the foregoing analysis of what causes such storms to occur.

The behavior of interacting agents in an economy can always be represented as a "game," as is done within game theory. A game has two main components: a payoff matrix, which assigns a payoff to each player for every combination of strategies chosen by all the players; and a feasible set of strategies available to each player. In Figure 4.5, Type I reforms are policy changes whereby the payoff matrix is altered in a way that incentivizes the players to act in a manner that is better for social well-being. More specifically, these payoff-changing reforms specify new carrots and sticks intended to persuade players to choose new strategies that are collectively beneficial to society as a whole. Type II reforms are different. They alter the feasible strategy sets of the players, imposing outright restrictions as to which strategies are legal (feasible) under what circumstances, and which are not. A stylized example of both types appears below.
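To make the distinction concrete, here is a minimal sketch of a hypothetical two-player "leverage game"; the strategy names and payoff numbers are invented for illustration and are not drawn from Figure 4.5.

```python
# A stylized two-player "leverage game" (all payoffs hypothetical).
STRATEGIES = ["low_leverage", "high_leverage"]

def payoff(own, other):
    """Private payoff before any reform: high leverage pays privately,
    but a joint leverage binge triggers a crash that swamps the gain."""
    base = {"low_leverage": 2.0, "high_leverage": 5.0}[own]
    crash_cost = 6.0 if own == other == "high_leverage" else 0.0
    return base - crash_cost

def payoff_type1(own, other, tax=4.0):
    """Type I reform: alter the payoff matrix (here, a tax on high leverage)."""
    penalty = tax if own == "high_leverage" else 0.0
    return payoff(own, other) - penalty

def feasible_type2():
    """Type II reform: shrink the feasible strategy set outright."""
    return [s for s in STRATEGIES if s != "high_leverage"]

for own in STRATEGIES:
    for other in STRATEGIES:
        print(f"{own} vs {other}: pre-reform {payoff(own, other):+.1f}, "
              f"Type I {payoff_type1(own, other):+.1f}")
```

Under the original payoffs, high leverage is tempting whenever the other player stays prudent; the Type I tax makes low leverage the dominant choice, while the Type II restriction removes the dangerous strategy from the game altogether.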

Figure 4.5 Type I and Type II Policy Reforms

Source: Strategic Economic Decisions, Inc.


For the sake of brevity, the principal Type I reforms are simply listed in Figure 4.5 rather than discussed, with one exception: the need to rethink the concept of “risk assessment and risk management.” Most of the other proposals are well known, and to some extent have been incorporated in new legislation, such as the Dodd-Frank Wall Street Reform and Consumer Protection Act. In the case of Type II reforms, we zero in on the need to limit leverage throughout the financial system, and to reduce the size or at least the riskiness of banks.

Type I Reforms

These are the types of remedies that have been discussed in the financial press, and in some cases acted upon by Congress and by the regulatory authorities. The reforms that are most needed are those listed in Figure 4.5.

This last reform merits special discussion. What is needed is a radical change in what it means to assert that "risk is being properly managed." To begin with, it must become understood that superior risk management logically presupposes superior assessment of the risks being managed. More specifically, any meaningful claim that a given strategy or product will successfully manage risk must be backed up by a demonstration of how the strategy transforms the probability distribution of resulting gains and losses into one with less risk. Why is this necessary? Because the entire purpose of risk management is to permit a transformation of the original distribution into a post-risk-management distribution that is less risky and more acceptable to the client. Therein lies the essential value of risk management. Obvious as this may seem, few risk-management customers are ever shown the true transformation of risk that they are purchasing. For example, fat tails are rarely identified as a possible risk.
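As an illustration of what "showing the client the transformation" might look like, consider the following sketch. It simulates a return distribution, applies a stylized loss floor of the kind a protective put provides, and reports the before-and-after tail risk. The distribution, the floor, and the 2 percent premium are hypothetical, and the Gaussian draws deliberately understate the fat tails discussed in the text.

```python
import random
import statistics

random.seed(42)

# Pre-risk-management distribution: simulated annual returns (hypothetical).
raw = [random.gauss(0.06, 0.20) for _ in range(100_000)]

# Post-risk-management distribution: losses floored at -10%, net of a 2% premium.
hedged = [max(r, -0.10) - 0.02 for r in raw]

def var_95(returns):
    """A simple historical 5th-percentile value-at-risk."""
    return sorted(returns)[int(0.05 * len(returns))]

print(f"mean  before: {statistics.mean(raw):+.3f}  after: {statistics.mean(hedged):+.3f}")
print(f"VaR95 before: {var_95(raw):+.3f}  after: {var_95(hedged):+.3f}")
```

The two printed lines are exactly what the client should see: a slightly lower mean purchased in exchange for a sharply truncated left tail.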

However, wholly new training in risk assessment is needed if superior risk management is to be possible. Why? Because there will be environments in which the dominant risks are endogenous, and therefore not amenable to assessment by traditional quant statistical analysis at all. Even worse, such risks will often be fundamentally nonknowable, for reasons stressed previously. As a result, the very concept of optimal risk management becomes problematic: How can risk management exist if the underlying risks cannot be described, much less quantified?

Such limitations inherent in risk management should be explained to clients. In the past two decades, risk management technology centered on value-at-risk (VAR) models. As is now finally recognized, the theory underlying VAR completely sidesteps the endogenous risk story, and with it the very kinds of risk that recently brought down the world financial system. My own view is that an entirely new program of risk assessment and risk management is needed. Its basic premise would be that both classical exogenous risk and endogenous risk must always be taken into account, even though they must each be assessed differently.

Trainees in the new program would learn that “fat-tailed” events are not simply random, but are rather caused by factors that generate them (e.g., by the four drivers depicted in Figure 4.2). They must learn how to quantify and monitor the probabilities of these drivers on an ongoing basis and, as a result, to assess the probabilities of fat-tailed events to the extent possible. I described a simple process for doing so in greater detail in endnote 11. This is just a start.

A good place to kick off such a rethink would be in the curriculum of the CFA Institute. The material it requires investment management trainees to master for certification is studied by students all over the world. The Institute has a genuine monopoly, for better or worse. As someone who has lectured at CFA events worldwide, I find the content of today's CFA program stale and dated: classical efficient-market-type theories, enlivened with an overlay of behavioral finance. This content needs to be completely reworked, given the disaster of the GFC, given the fall from favor of all efficient-market theories, and given the advent of new theories that can make sense of Perfect Storms. I tried a few years ago to interest the powers that be at the Institute in proceeding along these lines, but met with no success. Hopefully this will change, as the CFA Institute is in a position to move the entire financial profession forward in a long-overdue and useful manner.

Type II Reforms—Limiting Leverage and Breaking Up the Banks

The Type I reforms just cited amount to rearranging the sticks and carrots (the payoff matrix) of the existing game. They will help reform the financial system, but they do not go far enough. Restrictions must also be imposed upon the types of strategies that are legal (feasible) in the first place.

Limiting Leverage

This is by far the most important policy in the entire panoply of Type I and II reforms. But to listen to pundits of most persuasions, reducing leverage somehow ends up being of secondary or tertiary importance. They put far more emphasis on mitigating self-dealing, greed, and incompetence—all worthy goals. I personally find this remarkable. But then again, I have been steeped like a tea bag in the new theory of endogenous risk, so I am particularly sensitive to the role of excess leverage in nonlinearly amplifying financial disasters.

The comments by Martin Wolf on the opening page of this chapter are typical of the current attitude toward leverage. Wolf cites conflicts of interest and rampant incompetence as the true culprits behind the mortgage crisis. He does not seem to view leverage per se as the true villain, much less the regulatory authorities that permitted and indeed encouraged such leverage: the Fed, the SEC, Fannie Mae and Freddie Mac, and so forth. He does cite NINJA loans, but as proof of incompetence by bankers rather than as one more manifestation of excess leverage that never should have been allowed in the first place. To be fair, Wolf may have written elsewhere about the perils of excess leverage, as he surely understands these.

Yet deep down, almost everyone intuits that excess leverage lies at the heart of the GFC. What explains why this topic is skirted as much as it is? Why is leverage a phantom variable that is duly acknowledged but then ignored? I can think of four reasons:

1. There is no sense that an “optimal” amount of leverage is a meaningful concept, much less one that can be quantified at both the individual and societal level. And without such a benchmark, who can determine what leverage is excessive? So why focus on it? Yet the concept is indeed meaningful, and can be quantified, as was pointed out earlier.

2. There is no awareness of the role of endogenous risk in dramatically amplifying the damage wreaked by excess leverage. I know of no commentator on the housing crash, much less on the resulting GFC, who has ever cited or utilized this new concept. Instead, we are treated to endless discussions of the "mysteries" of Black Swans and fat-tailed events. But there is no longer a mystery here, given the new theory of Kurz and his colleagues. There are identifiable reasons why, and when, highly improbable and damaging events occur.

3. Given the fools that regulators, bankers, quants, and Nobel laureates in financial economics have made of themselves, it is simply irresistible for the financial press to go after these parties rather than to discuss a highly complex subject such as optimal leverage. The press should set its sights much higher, and delve much deeper.

4. There is a sense that it is hopeless to try to reduce leverage given the opposition to doing so by financial behemoths, and by hedge fund managers and traders. This sense is magnified by the patchwork nature of banking and regulatory supervision. If we are to seriously limit leverage, exactly who is going to lead the charge against the self-interests of banks and other investors?

A Radical Proposal—Need for a Leverage Czar

At present, the principal steps taken to reduce leverage are the Basel III (and related) accords requiring increased bank capital to support loans, and thus reduced bank balance-sheet leverage. Financial institutions have been fighting these accords, are dragging their feet in implementing them as of this writing, and are discovering new ways to dilute their force. In the United States, the Dodd-Frank Wall Street Reform and Consumer Protection Act places regulatory authority in these matters inside the Fed. I share the view of Sir Mervyn King of the Bank of England and of Paul Volcker in New York that the measures being proposed and implemented fall far short of what is needed. Indeed, some analysts believe that equity capital of 20 percent of the balance sheet is needed, roughly double what is being proposed. But this higher percentage would significantly reduce bank profitability. Guess who will prevail in this tug of war?
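The arithmetic behind this tug of war is worth making explicit. Balance-sheet leverage is simply the reciprocal of the equity-capital ratio:

$$ \text{leverage} = \frac{\text{assets}}{\text{equity}} = \frac{1}{e}, \qquad e = 20\% \Rightarrow 5\text{:}1, \quad e = 10\% \Rightarrow 10\text{:}1. $$

Since return on equity scales with leverage for any given return on assets, doubling required equity roughly halves the return on equity a given return on assets can generate, which is precisely why banks resist the higher figure.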

Yet I have a related concern. The Fed is the wrong institution in which to vest regulatory power over the banks. It is well known that, within the Fed, power and prestige are located in the area of monetary policy, not in regulatory activities that are much lower profile. Given this reality, along with the Fed's failure to display much concern with the leverage issue, I believe an entirely new arrangement is needed. Specifically, I would like to see the creation of a Department of Asset Market Leverage whose sole job would be to deal with market-by-market leverage in a novel manner. It would be headed by a Leverage Czar known for his or her integrity and independence, for example, a Paul Volcker of sorts.

The proposed name of the new department bespeaks the problem being addressed. It is asset market bubbles, and the leverage ("financing") that propels them, that have always been the root problem, a point stressed often by the late Hyman Minsky in his celebrated work on debt cycles. But if asset market bubbles tend to be the problem, and if excess leverage amplifies the boom and its subsequent bust, shouldn't leverage be explicitly targeted and controlled, and not always play second fiddle? Recall that excess leverage is an externality, and a big one. But now a new problem arises: Which asset markets should be regulated as regards leverage, and when? There should not be one leverage policy for all, since asset markets often move independently of one another, and bubbles in different sectors thus rarely coincide. Moreover, different asset markets are prone to bubbles of different sizes.

The solution is to have independent regulation of the different asset markets in which bubbles occur, in particular the stock market, household real estate, commercial real estate, bonds, and commodities. There would be a separate regulator for each asset class. Extensive use would be made of "mean reversion" evidence, the tendency of prices to fluctuate above and below some long-term mean value. For example, price/earnings valuations of stocks average 15 over the long run. As this valuation rose to over 30 during the late 1990s, a large stock-market bubble was clearly underway. Optimal policies would raise or lower legal leverage ceilings depending on the deviation of current valuations from mean-reverting trends, and would do so on a market-by-market basis, as the sketch below suggests.
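A minimal sketch of such a rule follows; the base cap, the floor, and the functional form are placeholders of my own, intended only to show how mean-reversion evidence could drive a leverage ceiling mechanically.

```python
def leverage_ceiling(valuation, long_run_mean, base_cap=10.0, floor=1.0):
    """Tighten the legal leverage cap as valuations stretch above trend.

    At or below the long-run mean, the base cap applies; a valuation at
    twice its mean cuts the cap in half. All parameters are hypothetical.
    """
    stretch = max(valuation / long_run_mean - 1.0, 0.0)
    return max(base_cap / (1.0 + stretch), floor)

# Late-1990s stock market: P/E near 30 against a long-run mean of 15.
print(leverage_ceiling(30, 15))  # -> 5.0, half the normal-times cap
```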

Furthermore, it would not be banks alone that are regulated. Rather, institutions of every kind ranging from individual speculators to hedge funds to commodity brokers and to banks would be subject to the appropriate leverage limits, with extremely severe penalties for dodging these. To be sure, different limits would apply to different categories of investors.

Would such a policy amount to "meddlesome government interference," as apologists for markets-know-best economics would probably argue? Must it become politicized? No. Policy could take the form of well-publicized "rules" known by all in advance, just like the well-known Taylor rule long used as a benchmark for Fed interest rate policy. Transparency and predictability would thus be achieved. The rules would be quasi-robotic, driven by objective considerations of deviation from trend values. Political interference would be minimized, just as it is in the Fed's Open Market Committee decisions.
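For comparison, the Taylor rule sets the policy rate as a mechanical function of observable deviations; an analogous leverage rule (the second line is my notation, not a formula from the text) would do the same for the leverage ceiling:

$$ i_t = r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,(y_t - \bar{y}_t), $$
$$ L_t = \bar{L} - \gamma\,(v_t - \bar{v}_t), $$

where $v_t$ is the current valuation of an asset class, $\bar{v}_t$ its mean-reverting trend value, and $\gamma$ the stringency of the rule.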

To be sure, many commentators have proposed dynamic regulatory adjustment rules, although these rarely target leverage per se. I am going somewhat further, partly because everyone now acknowledges the harmful role played by asset bubbles, and partly because we understand the all-important role of leverage in amplifying endogenous risk and creating Perfect Storms. Never forget the most important point about Figure 4.2: Leverage, alone among the drivers shown there, is a control variable that can be changed at will. The others are state variables. Learn to live with them and focus instead on the control variables. An irresistible analogy: While the courtiers of King Canute allegedly railed against the waves crashing upon the shores of Denmark many centuries ago, their wise king counseled them to construct seawalls instead of bewailing the state of nature.

Macro-Controllability

There is one last virtue in the proposal to regulate leverage on its own, and it is a very important one stemming from the foundations of macroeconomic theory. Many observers blame the Greenspan and Bernanke Fed for causing the technology-stock bubble of the late 1990s and the housing bubble of 2004–2007 by keeping interest rates too low: "The Fed could and should have used interest rates to prevent these asset market bubbles." But this is not true: Even if higher rates could have prevented the bubbles (and this is not at all clear), the Fed should not necessarily have used its interest rate tool. This is because the Fed's mandate is to use monetary policy to regulate employment and prices on Main Street—no more—and both could have been adversely impacted by much higher rates.

The problem here is that asset prices per se were never explicitly targeted, partly because it was not understood how adversely asset market bubbles could impact Main Street. Suppose, however, that the Fed now does wish to better control asset prices. If it does, can it do so? Here we run into a problem known as "controllability" in macroeconomics. It happens that the correlation between asset prices and consumer prices is very low, on the order of –0.025 during the past two decades.17 Thus, in using conventional monetary policy to carry out its mandate of impacting consumer prices and the state of the economy, the Fed would have a hard time also controlling asset prices. Looked at in reverse, the Fed could attempt to prevent an asset bubble by raising rates to, say, 12 percent, but doing so could cripple Main Street and cause deflation, thus violating its mandate.

What can be done? The theory of controllability tells us that the government needs a new and independent “policy instrument” to help regulate asset prices. That new instrument would be the policies of the proposed Department of Leverage with its robotic, dynamic regulation of asset market leverage. Happily, controlling leverage and hence asset prices would not impact consumer prices due to the lack of correlation between goods and asset prices. The policy mandates of the Fed and the Department of Leverage would in this sense be independent of one another. An important point of economic policy arises here, and is discussed in an endnote for the interested reader.18

Breaking Up the Banks

The last reform I would propose is one championed by many, notably Mervyn King and Paul Volcker. There are two different ways to break up banks that are "too big to fail," and the two are often confused. The first is the proposal that existing banks shed their "proprietary trading" activities of investing and speculating with their own capital. The bank that remains would be a less leveraged, if less profitable, entity, spared the vicissitudes of the large gains and losses generated by big trading bets of the kind that cost UBS over $2 billion in September 2011. A variant of this reform would permit banks to continue their proprietary trading, but require them to hold much higher capital reserves to back up such activities, again making them less leveraged and less profitable.

The second main proposal would require the large commercial banks to be split into many smaller banks, thus mitigating too-big-to-fail concerns in a different way. Completely aside from the prop trading issue, there is growing concern that an ever smaller number of huge banks now controls far too large a share of total bank assets.

I believe both proposals have merit and should be implemented. In the case of prop trading, there is clearly a conflict of interest in large firms when prop traders within a bank assume leveraged positions (long or short) at odds with what the bank's own customers are being advised to do. It is not so much that this is bad per se, but that it stokes public cynicism about banks and thus undermines the public confidence on which the financial system depends. Additionally, prop trading as we know it has increased the riskiness of the institutions that engage in it.

As for breaking up the banks themselves into many smaller pieces, this should have the beneficial role of reducing "systemic risk." Recall that it was the collapse of confidence in the huge banks right after the Lehman Brothers fiasco that caused the GFC to spread like wildfire from the United States and United Kingdom to the entire world. I have been told that such contagion would be much less likely to happen in a world of many smaller banks. Frankly, I am not sure this is true.

Yet there is a deeper reason for ridding ourselves of too-big-to-fail banks, and this reason is moral and political-philosophical in nature. The public quite rightly believes that, during the recent crisis, the large banks got away with murder, and that they are playing a game of Heads-I-Win, Tails-You-Lose. They come out ahead no matter what happens. In good times, the big banks reap and keep all the benefits of success. But in bad times, the risk of contagion is so great that government must bail them out to prevent panic and collapse. It is the taxpayers—the voters—who end up holding the bag, and they have every reason to be indignant about the current system. Cynicism is heightened by the highly inflated pay commanded by bankers and traders at "big" institutions, in contrast with the much more modest compensation at local banks that enjoy widespread local support. In sum, the game has been rigged in a manner that rightly undermines confidence in the entire financial system. And confidence in the financial system itself is a sine qua non for true capitalism to deliver the goods that justify its very existence. Public policy must thus ensure that such trust is in place.

For these reasons, I support fundamental revisions to today's banking system of the kind currently being implemented to one degree or another, independently of the matter of leverage. However, if the issue of leverage itself is properly addressed, it may not be necessary to actually break up the big banks. In this regard, it is noteworthy that big banks dominate both the Canadian and the Australian banking scenes, yet virtually all of these institutions emerged from the GFC unscathed. What did they all have in common? They all had balance sheets whose leverage was a fraction of that of the large U.S./UK banks. India offers an even better example: its regulators were dead set against the "excesses" of the Western banks for two decades, and the financial system there proved very resilient as well.

Sharing a Dirty Little Secret

This concludes the chapter on preventing Perfect Financial Storms in the future. I support all the reforms listed in Figure 4.5. But of these, the Type II reform of controlling leverage is the most important. I am often asked why this obvious point is not more widely recognized, and why leverage reform is so vigorously opposed. I offered four reasons earlier, and I would now like to offer a fifth, a dirty little secret virtually never discussed.

The powers that be in Greenwich, Connecticut, on Wall Street and in the City of London enjoy very, very, very large incomes. They are very powerful and contribute heavily to political campaigns. Moreover, their spouses are used to such incomes, incomes that have become entitlements of sorts. But what these players will not tell you is that their stratospheric incomes are due in large part to leverage. Nor will they confess that talent has less to do with their success than they might think, notwithstanding that brilliant fund managers do indeed exist, as is true in any calling. Leverage, financial sector cartelization, and luck play a larger role than skill. The two Type II reforms that I have proposed squarely confront leverage and cartelization. As for the role of luck in life, and its implications for optimal tax policy and redistribution, please see Chapter 6. If you are a fund manager, your head may spin.

1. This “inclusion principle” is a mark of genuine scientific progress, as is well known in the philosophy of science. It is thus not surprising that the new theory of endogenous risk in economics is compatible with the insights of “behavioral finance,” and can incorporate them within it. In my view, the new theory permits a synthesis of many of the alternatives to efficient-markets theory that have sprouted in recent years. But the new theory has an advantage the other theories do not: It is an analytically self-consistent and closed structure that permits falsifiable predictions about the future. That is to say, it is scientifically valid, as was the efficient-markets theory. But the latter theory did not explain or predict real-world levels of volatility.

2. While the role of correlated forecast mistakes in amplifying risk is intuitively obvious, it is extremely difficult to model in a suitably rigorous manner. See Mordecai Kurz, "On Rational Belief Equilibria," Economic Theory 4 (1994) for a fairly simple treatment, and Kurz's contribution to the Handbook of Game Theory with Economic Applications (Amsterdam: Elsevier, 1994) for a more advanced treatment.

3. For readers not familiar with the concept of moral hazard, the term refers to situations where the act of obtaining insurance on some event changes the probability that the event will occur. If an insurer knows that the true probability of a disability such as a bad back is one in 300, then once you obtain insurance against it, you will have an incentive to pretend you have a bad back. What now are the odds of purported bad backs? Insurers are understandably reluctant to issue policies in these circumstances. This is an example of the "moral hazard problem" that leads to many "missing markets," which in turn make it impossible for honest people to obtain insurance at all.

4. There is little need for a formal justification of this precondition. Suffice it to say that classical finance postulates the existence of a complete set of hedging markets for an optimal reallocation of risk, as was demonstrated by Kenneth Arrow in his 1953 paper, "The Role of Securities in the Optimal Allocation of Risk-Bearing." This paper is discussed in Chapter 3. The well-known fact that many hedges do not exist, and that hedging strategies often malfunction and break down, is sufficient to establish the validity of precondition 2.

5. Pricing Model Uncertainty is not explicitly identified as such in Kurz's own theoretical papers. Rather, it is scrambled up with other sources of endogenous risk, most importantly uncertainty about the "extended state space" central to his theory of rational beliefs. I myself have focused on this particular dimension of risk, partly because I have found it to be important in the minds of investors whom I have advised during the past two decades, and partly because I was able to demonstrate its importance at a theoretical level. The proof was presented as an invited paper at a Stanford University seminar in theoretical economics during the summer of 2009. It is based upon a complex theorem in the theory of games of incomplete information, applied to "optimal exiting" in duopoly theory, proven by D. Fudenberg and J. Tirole. I am indebted to my colleague John O'Leary for discovering their 1986 result and appreciating its relevance to Pricing Model Uncertainty. The basic intuitive idea at work here is summarized in the text. Importantly, the theorem requires that investors be motivated by relative (not absolute) performance, as indicated in the main text. See D. Fudenberg and J. Tirole, "A Theory of Exit in Duopoly," Econometrica 54, no. 4 (1986): 943–960.

6. See M. Kurz and M. Motolese, "Endogenous Uncertainty and Market Volatility," Economic Theory 16 (2001): 497–544.

7. At the most fundamental level, the restrictive axiom here is that the environment is stochastically stationary, in the sense of that term within ergodic theory. In a stationary system, whereas things can change over time (for example, GDP output rises and falls with the business cycle), the way things change cannot change over time. Formally, the joint probability distributions representing forecasts are invariant across time. Classical theory also allows for “learning” of a restrictive type. The problem is that, for genuine learning to be possible, it must be assumed that some fixed truths exist that can be learned with enough data. In a nonstationary environment, this is not the case. Kurz deals with all these details, and in fact develops an important and useful halfway house between stationary and nonstationary systems known as “weak asymptotic mean stationarity” or more briefly “stability.” I believe that Kurz’s theory permits a satisfactory integration of both classical and behavioral finance. On the one hand, it retains the classical view that people are goal-seeking in their behavior (“weakly rational” in a sense), but that in seeking to maximize their risk-adjusted expected returns, they make mistakes because their forecasts are inevitably wrong. In other words, agents are weakly rational yet wrong. His theory of rational beliefs does not delve into why we are wrong when we make mistakes. Rather, it explains the implications of being wrong (for whatever reasons) for market volatility, and for the need for corrective policies.
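Formally, stationarity in the sense used here means that every joint distribution of the process is invariant under a shift of the time index (the notation is mine):

$$ F\bigl(x_{t_1+\tau}, \ldots, x_{t_k+\tau}\bigr) = F\bigl(x_{t_1}, \ldots, x_{t_k}\bigr) \quad \text{for all } k \text{ and } \tau. $$

Only under some such invariance can "enough data" converge on fixed truths; when the distributions themselves drift, yesterday's frequencies need not bound tomorrow's probabilities.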

8. We stress "subjective uncertainty" because, in at least one version of Arrow's early theory, there was no assumption that "all agents possess the same forecast—a forecast that is correct." This so-called rational expectations assumption arose later within the Chicago school of efficient-markets theory, and as indicated in the text, was an unfortunate mistake. To be fair, the introduction of this assumption was a necessary and important milestone in the evolution of economic theory. It helped clarify the previously fuzzy concept of "efficient markets." I view it as akin to Galileo's great advance in understanding the relation between mass and gravity. Galileo had to assume "friction-free physics" to obtain his result that balls of different sizes and weights hit the ground at the same time. It is not fair to criticize him for not incorporating friction, as he lacked the tools for doing so. The same applies to the work of Sargent and Lucas in developing rational expectations economics. They did not have the analytical tools for developing general equilibrium models with mistakes. What Kurz has achieved is to make possible economics with mistakes, just as Newton and his successors did by making possible physics with friction. We could finally model the reality that feathers and lead balls do not hit the ground in equal time during a windstorm. Indeed, feathers may rise upward given the right wind conditions!

9. R. J. Shiller, Macro Markets: Creating Institutions for Managing Society’s Largest Economic Risks (New York: Oxford University Press, 1993).

10. Deep truths are often found in so-called "limitative theorems" of this kind (for example, the Heisenberg uncertainty principle in quantum theory, the Arrow impossibility theorem in preference aggregation theory, and the Gödel incompleteness theorem in metamathematics). Kurz's result here on nonknowable probabilities recalls these and other discoveries of our epistemological limits.

11. Perhaps I am being a bit too pessimistic here. It is in fact possible to quantify endogenous risk in terms of Figure 4.2 in the following very elementary manner. First, for any given time period, assess the joint probability distribution over the four drivers shown in Figure 4.2. Next, assign a payoff (in this case the magnitude of distress from a Perfect Storm) to each of the events defining the domain of this joint distribution. Finally, integrate to arrive at a marginal distribution on “distress,” and compute the mean value of this result. It will represent the expected degree of Perfect Storm distress. By performing this simple operation every few weeks, officials could track whether a Perfect Storm is becoming more or less likely.
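A toy implementation of this procedure may help. The driver labels, marginal probabilities, independence assumption, and distress payoff below are all placeholders; in practice the joint distribution over the four drivers of Figure 4.2 would be assessed directly.

```python
from itertools import product

# Stand-ins for the four drivers of Figure 4.2 (labels are placeholders).
P_ACTIVE = [0.30, 0.20, 0.40, 0.25]  # hypothetical marginal P(driver active)

def joint_prob(state):
    """Probability of a joint on/off configuration of the drivers.
    Independence is assumed purely to keep the sketch short."""
    prob = 1.0
    for p, on in zip(P_ACTIVE, state):
        prob *= p if on else 1.0 - p
    return prob

def distress(state):
    """Stylized payoff: distress grows nonlinearly in the number of active drivers."""
    return sum(state) ** 2

expected_distress = sum(joint_prob(s) * distress(s)
                        for s in product([0, 1], repeat=len(P_ACTIVE)))
print(round(expected_distress, 3))  # recompute every few weeks; track the trend
```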

12. To be fair, changes in policies of the kind seen in Figure 4.3 reflect not only a change in the philosophy of governance, but also a change in how “financial innovation” permitted investors to evade traditional regulations. For example, once the Chicago Board Options Exchange was opened in 1973, individual investors could take speculative (option) positions in stocks without utilizing classical margin accounts. In the case of the reserve requirement, the advent of “sweep accounts” and of securitization revolutionized the ability of banks to create balance sheets of their liking. See the interesting essay by Paul Bennett and Stavros Peristiani, “Are Reserve Requirements Still Binding?,” Economic Policy Review 8, no. 1 (May 2002).

13. Robert Shiller, “Margin Calls: Should the Fed Step In?” Wall Street Journal, April 10, 2000.

14. Remember that in a classical textbook economy, no one agent can harm or benefit another. As in Adam Smith's Wealth of Nations, no one can gang up on anyone else, and no one wheat farmer or group of farmers can impact the interests of any other. All players in this sense are "strategically independent" or "inert."

15. H. Jin, M. Kurz, and M. Motolese, “Economic Fluctuations and the Role of Monetary Policy,” Chapter 10 in Knowledge, Information, and Expectations in Modern Macroeconomics: Essays in Honor of Edmund Phelps, P. Aghion, R. Frydman, J. Stiglitz, and M. Woodford, Eds. (Princeton, NJ: Princeton University Press, 2003).

16. It might be thought that agents could insure themselves against the extra risk caused by a dotted-line environment by utilizing appropriate hedging strategies in private markets. But this is not the case. Moral hazard arguments, in this context, imply that the requisite hedging strategies will not exist. For no rational agent will write a hedge against the societal costs imposed by binges of excessive optimism and leveraging that can lead to a Perfect Storm. This lack of hedges is an example of the “missing markets” problem in economics.

17. More specifically, between 1990 and 2010, the correlation between the CPI and the S&P 500 was –0.316, between the CPI and existing house prices (average) 0.141, between the CPI and gold –0.074, and between existing house prices and the S&P 500 index 0.143, for an average correlation of –0.025.

18. In terms of economic jargon, I am urging greater "controllability" of macroeconomic policy in Jan Tinbergen's sense. The nation needs a new "target," namely asset prices, and to regulate these it will require a new "policy instrument," namely the proposed Department of Leverage. As is required by Tinbergen's principal theorem, the new instrument should be independent of the government's other instruments, namely fiscal and monetary policy, and there should be no "degeneracy" problems in achieving all three of the government's targets: full employment, consumer price stability, and asset price stability. See Jan Tinbergen, The Theory of Economic Policy (Amsterdam: North Holland Publishing Co., 1952). As the father of controllability theory, and for other accomplishments, Tinbergen shared the first Nobel Prize in Economics ever awarded.
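In its simplest linear form (the notation is mine), Tinbergen's counting rule says that if the vector of targets $t$ responds to the vector of instruments $x$ via $t = Ax$, then any desired set of targets is attainable if and only if $A$ is nonsingular:

$$ t = A x \quad\Longrightarrow\quad x = A^{-1} t \ \text{ exists if and only if } \det A \neq 0. $$

With three targets (full employment, consumer price stability, asset price stability), three independent instruments are required: fiscal policy, monetary policy, and the proposed leverage instrument. The near-zero correlation reported in endnote 17 is what makes the leverage instrument effectively independent of the other two.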