Efficient Markets and Ptolemaic Epicycles
MY EXPERIENCE AS a Forbes columnist for more than thirty years—writing columns that EMH partisans did not take kindly to—gives me the tiniest inkling of what Galileo went through with his scientific work in the early seventeenth century. The great Italian scientist supported the new heliocentric model of the solar system when a huge majority of philosophers and astronomers still subscribed to the geocentric view, namely, that the earth is the center of the universe. Galileo was forced to abandon his views. When he published again, in 1632, he was tried by the Inquisition, found “vehemently suspect of heresy,”1 forced to recant, and spent the rest of his life under house arrest.
The geocentric view, by the way, had an impeccable pedigree going all the way back to the work of Ptolemy, a revered scholar who lived in the second century A.D. in Alexandria, the University of Chicago of its day. He was part of a cosmopolitan elite who had made the Egyptian city a jewel of scholarly activity, and his own outstanding contribution was the Ptolemaic system.
His treatise, using hundreds of years of celestial observations, explained the motion of the sun and planets and provided convenient tables allowing the computation of future or past positions of the planets. The basic premise was that the earth was the center of the universe and the planets, the sun, and the stars orbited around it. The Ptolemaic system was universally accepted by the civilized world for almost 1,600 years, filling a major role in land and sea navigation.
None other than Galileo introduced the telescope to astronomy in 1609. He became the first man to observe the craters of the moon, sunspots, the four large moons of Jupiter, and the rings of Saturn. Far from abandoning the Ptolemaic system, astronomers made the model increasingly complex in order to incorporate the new observations. Now the planets and stars moved around one another and around the earth in a combination of circles, epicycles, and eccentrics (large deferent circles—don’t ask—around whose centers the epicycles revolved). The result of this hodgepodge was a mind-boggling whirl.
Nonetheless, the Ptolemaic system met two major criteria of a useful scientific hypothesis: it was “predictive” in correctly forecasting where various celestial bodies would be at future points in time, and it was “explanatory” because it codified a system of planetary motion.
It was also entirely wrong.
Which brings us back to EMH. As discussed earlier, EMH and its companions, MPT and CAPM, are based on extensive mathematical analysis. Other critics and I would not dispute this. Apart from the strong evidence of the theory’s inaccuracy that we saw in the record of events covered in the preceding chapter, the controversy now moves further to the reason why: the underlying assumptions of EMH, as we’ll see, are highly questionable, or have never been tested, or are outright fallacious. Much of the ultrasophisticated mathematical analysis is constructed on these seriously flawed assumptions, which appear to be built on sand. Just as a space launch requires a sophisticated launch pad for a highly complex shuttle to be able to blast off, so highly advanced mathematics requires a solid base to predict market action correctly.
The second important area we’ll look at is the mathematical testing itself. Here I think you’ll find some surprises. Much of the original EMH testing was flawed, as we saw with volatility theory in the previous chapter, and didn’t prove that volatility was the sole or even an important risk factor. Once we’ve examined these assumptions and their flaws, you’ll understand why I liken the continued belief in EMH to the unyielding acceptance of the Ptolemaic system after Galileo had shown that the sun does not revolve around the earth.
The preceding chapter showed that EMH assumptions had gone down in flames in the cases of the 1987 market crash, the Long-Term Capital Management debacle (1998), and then the 2007–2008 crash and the steep recession following it. One of the chief culprits was inadequate risk theory, with its focus on volatility to the almost complete exclusion of leverage, liquidity, and other important risk factors. Unlike the Ptolemaic system, EMH hasn’t even had reliable predictive power, and it offers no more accurate explanations of market movements than Ptolemy did of planetary motion.
Two questions we should now ask:
1. Why was there an almost obsessive focus on volatility as the sole measure of risk by the academics?
2. Was this focus justified?
Let’s look at these questions next.
Much of the previous chapter swirled around volatility. Does greater volatility reward investors with higher returns, and lower volatility with lower returns? From what we’ve seen in that chapter, the answer is a definite no. In reality, academic research found answers to this question more than three decades ago. Let’s look at these answers now.
How did leading EMH academics know that investors measured risk strictly by the volatility of the stock? They didn’t, nor did they do any research to find out, other than the original studies of the correlation between volatility and return, whose results were mixed at best. The academics simply declared it as fact. Importantly, this definition of risk was easy to use to build complex computer finance models, and that’s what the professors wanted to do. They could then build a simple but elegant theory.
Economists find this view of risk compelling, if not almost obsessional, because it is the way rational man should behave according to economic theory. If investors are risk-averse and economists can show this to be so, they have proof of a central concept of economic theory: that man is a rational decision maker. If investors will take greater risks only if they receive higher returns—eureka!—an eight-lane highway opens between investment markets and microeconomic theory. And via this highway, investment markets deliver to economic theory the ultimate payload: the proof of rational market behavior that economists have sought for more than two centuries. Right or wrong, the idea is too seductive for economists to give up.
As we know, the professors also devised measures to adjust mutual funds’ and money managers’ performance for the riskiness of their portfolios—measures that, if volatility is not the sole measure of risk, are fallacious. Still, four decades or more later, these remain the key measurements of risk and return in use. I might return 15 percent a year and my competitor 30 percent, but if her portfolio was much more volatile than mine, I would have the better risk-adjusted returns.
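The arithmetic behind such a risk-adjusted comparison can be sketched with the Sharpe ratio, the classic volatility-based yardstick; the return, volatility, and risk-free figures below are illustrative assumptions, not data:

```python
# Sketch of a volatility-based risk adjustment using the Sharpe ratio:
# excess return earned per unit of volatility. All figures here are
# illustrative assumptions.

def sharpe_ratio(annual_return, annual_volatility, risk_free_rate=0.04):
    """Excess return per unit of volatility."""
    return (annual_return - risk_free_rate) / annual_volatility

mine = sharpe_ratio(annual_return=0.15, annual_volatility=0.10)  # 15% return, calm portfolio
hers = sharpe_ratio(annual_return=0.30, annual_volatility=0.40)  # 30% return, wild portfolio

print(f"My Sharpe ratio:  {mine:.2f}")   # 1.10
print(f"Her Sharpe ratio: {hers:.2f}")   # 0.65
```

On this yardstick my 15 percent beats her 30 percent, which is exactly the kind of conclusion the measure delivers when volatility is taken as the whole of risk.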
It turns out it could all come from the Wizard of Oz. But why is that a surprise? Volatility gives the appearance of being a highly sophisticated mathematical measure, but it was constructed by people looking into a rearview mirror. It takes inputs that seemed to correlate with returns in the past and asserts they will work again in the future. This is not good science. To me it seems little different from a technician relying on past price movements to determine future ones, an argument the academics almost gleefully disproved, as we saw earlier. Yet to protect their volatility theory, they resort to a similar tactic. I’m sure you’ve deduced why: to preserve CAPM and thereby EMH.
But the critical question is still there: why is volatility the measure of risk, rather than an analysis of a company’s financial strength, earnings power, leverage, liquidity, outstanding debt, and dozens of other measures that investment experts and corporate managers use? Sure, volatility is alluring to economic types, but what else has it got going for it? Possibly you accept the measure without question. Most people do. But in truth it is faulty.
In the first place, it has been known for decades that there is no correlation between risk, as the academics define it, and return. Higher volatility does not give better results, nor lower volatility worse results.
J. Michael Murphy, in an important but little-read article in The Journal of Portfolio Management in the fall of 1977, reviewed the research on risk.2 Some of the conclusions were startling, at least for EMH and CAPM believers. Murphy cited four studies that indicated that “realized returns often tend to be higher than expected for low-risk securities, and lower than expected for high-risk securities, . . . or that the [risk-reward] relationship was far weaker than expected.”3 He continued, “Other important studies have concluded that there is not necessarily any stable long-term relationship between risk and return;4 that there often may be virtually no relationship between return achieved and risk taken;5 and that high volatility unit trusts were not compensated by greater returns”6 (italics in original).7
In 1975, a paper by Robert Haugen and James Heins analyzing risk concluded with the statement “The results of our empirical effort do not support the conventional hypothesis that risk (volatility)—systematic or otherwise—generates a special reward.”8 Remember, this research was done in the middle to late 1970s, just as CAPM and the concept of risk-adjusted returns were starting the investment revolution and more than a decade before Nobel Prizes were awarded to its advocates.
The lack of correlation between risk and return was not the only problem troubling academic researchers. More basic was the failure of volatility measures to remain constant over time—the assumption of constancy being central to both CAPM and MPT. Recall that Nobel laureate Robert Merton of Long-Term Capital Management almost believed he could set his watch by it—until its instability played an important role in the firm’s implosion. The instability of volatility also played a major role in the collapse of subprime mortgage bonds in 2007–2008, as well as in the 1987–1988 crash, when S&P futures and index volatility shot up enormously, and in the 2000–2002 crash, when dot-com and high-tech stock volatility increased sharply.
The impact of volatility goes far beyond the market itself. CAPM had long been used by corporate managers to determine the attractiveness of new ventures. Because the accepted wisdom holds that companies with higher volatility must pay commensurately higher returns, CEOs of higher-volatility companies might be ultracautious in investing in a new plant unless they are certain they can receive the extra return the investment must yield.
On a broader scale, volatility theory, it seems, resulted in bad business decisions in corporate America for a long time, because “good companies” were told that the markets would always have capital available for their growth. This encouraged them to reduce their liquidity. By the 1980s, economists, notably the EMH pioneer Michael Jensen at Harvard, argued that since EMH always got prices right in the market, the best thing corporate CEOs could do, not just for their companies but for the sake of the economy, was to maximize their stock prices.9 This would give them easier and cheaper access to the capital markets. The message came out loud and clear: do what’s best to get your stock price higher, even if it means compromising on your company’s long-term viability and profitability. The theory played a role in the significant decrease in liquidity reserves held by companies during the financial crisis of 2007–2008, which led to a major magnification of the economic downturn. Again we see the type of damage that a flawed theory can wreak.
Although beta is the most widely used of all volatility measures, a beta that can accurately predict future volatility has eluded researchers since the beginning. The original betas constructed by William Sharpe, John Lintner, and Jan Mossin were shown to have no predictive power; that is, the volatility in one period had little or no correlation with that in the next. A stock could pass from violent fluctuation to lamblike docility.
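For readers who want the mechanics: beta is simply the slope of a stock’s returns regressed on the market’s. A minimal sketch, using made-up return series, shows both the computation and the kind of period-to-period comparison the researchers ran:

```python
# Beta is the covariance of a stock's returns with the market's, divided
# by the variance of the market's returns -- the slope of a simple
# regression. The return series below are made-up illustrations.
import statistics

def beta(stock_returns, market_returns):
    mean_s = statistics.fmean(stock_returns)
    mean_m = statistics.fmean(market_returns)
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / (len(market_returns) - 1)
    return cov / statistics.variance(market_returns)

market = [0.02, -0.01, 0.03, 0.01, -0.02]            # market returns, both periods

stock_period1 = [0.04, -0.02, 0.06, 0.02, -0.04]     # swings twice as hard: beta = 2.0
stock_period2 = [0.01, -0.005, 0.015, 0.005, -0.01]  # half as hard: beta = 0.5

print(round(beta(stock_period1, market), 2))  # violent fluctuation
print(round(beta(stock_period2, market), 2))  # lamblike docility
```

The same stock going from a beta of 2.0 in one period to 0.5 in the next is precisely the instability that made past betas useless for predicting future ones.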
Since the cornerstone of MPT and an implicit assumption of EMH is that all investors are risk-averse, in the same manner, the absence of a demonstrable beta was a serious problem for the researchers from the beginning. If investors are risk-averse, beta or other risk-volatility measures must have predictive power. That they have not, that there is no correlation between past and future betas, was a major anomaly, a “black hole” in the theory. Without a tenable theory of risk, the efficient-market hypothesis was an endangered species.
Barr Rosenberg, a well-respected researcher, developed a widely used multifactor beta, which included a large number of other inputs besides volatility to measure the risk of specific securities. These multifactor betas were often called “Barr’s bionic betas.” Unfortunately, they were as hapless as their predecessors. Other betas were experimented with, all with the same result. Future betas of both individual stocks and portfolios were not predictable from their past volatility.
The evidence, for the most part, was kept on the back burner until Eugene Fama put out his own paper on risk and return in 1992. Fama and his coauthor, James MacBeth, had published a paper in 1973 indicating that higher beta led to higher returns.10 It was one of the instrumental pieces of CAPM. Later, collaborating with Kenneth French, also at the University of Chicago, Fama examined 9,500 stocks from 1963 to 1990.11 Their conclusion was that a stock’s risk, measured by beta, was not a reliable predictor of performance.
Fama and French found that stocks with low betas performed roughly as well as stocks with high betas. Fama stated, “Beta as the sole variable in explaining returns on stocks is dead.”12 Write this on the tombstone: “What we are saying is that over the last 50 years, knowing the volatility of an equity doesn’t tell you much about the stock’s return.”13 Yes, make it a large stone, maybe even a mausoleum.
An article in Fortune on June 1, 1992, concluded, “Beta, say the boys from Chicago, is bogus.”14 The Chicago Tribune summed it up well: “Some of its best-known adherents have now become detractors.”15
If not beta, then what? If risk cannot be measured by volatility, how can it be determined? According to Professor French, “What investors really get paid for is holding dogs.”16 Their study, as we will see, indicated that stocks with the lowest price-to-book-value ratios and lowest P/Es provide the highest returns over time, as do smaller-capitalization companies. Stock returns are more positively related to these measurements than to beta or other similar risk criteria.17
Fama added, “One risk factor isn’t going to do it.” Investors must look beyond beta to a multifactor calculation of risk, which includes some value measurements and other criteria.18
In a 1996 paper, Fama and Kenneth French rebutted another paper, written by academics seeking to defend beta, stating in part, “‘It’ [beta] cannot save the Capital Asset Pricing Model, given the evidence that Beta alone cannot explain expected return.”19
Buried with this canon of modern finance is modern portfolio theory, as well as a good part of EMH. Fama’s new findings rejected much of the academic work of the past, including his own. He said at beta’s graveside, “We always knew the world was more complicated.”20 He may have known it, but he did not state the fact for more than two decades. His statement that “beta is dead” was the shot at volatility heard round the financial world.
Although the beta model and thus CAPM were shattered, Fama was not in mourning for long. In 1993, scarcely a year after beta was buried, he and French put forward a new theory of risk to take its place: the three-factor model. A suspicious type might wonder whether the new romance had not already been blossoming before the funeral. The formula they introduced included small-capitalization (small-cap) and low-market-to-book-value (value) stocks in addition to beta.*37 Fama did say that “the three-factor model is not a perfect story.”21 Very true. The new factors were anomalies the professors had found that seemed to give EMH more accurate risk measurements, although no explanation of why they should work was offered.
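In outline, the three-factor model explains a stock’s expected return with three loadings instead of beta alone. A minimal sketch of the formula follows; every loading and premium below is an illustrative assumption, not an estimate:

```python
# Three-factor model sketch: expected return is the risk-free rate plus
# a linear combination of the market premium, the small-cap premium
# (SMB, "small minus big"), and the value premium (HML, "high minus
# low" book-to-market). All numbers are illustrative assumptions.

def three_factor_expected_return(rf, beta_mkt, mkt_premium,
                                 loading_smb, smb_premium,
                                 loading_hml, hml_premium):
    return (rf
            + beta_mkt * mkt_premium
            + loading_smb * smb_premium
            + loading_hml * hml_premium)

# A hypothetical small-cap value stock, loading positively on all three factors:
expected = three_factor_expected_return(
    rf=0.04,                              # risk-free rate
    beta_mkt=1.0, mkt_premium=0.06,       # market factor
    loading_smb=0.5, smb_premium=0.03,    # size factor
    loading_hml=0.6, hml_premium=0.04,    # value factor
)
print(f"{expected:.3f}")  # 0.139, i.e., 13.9% a year
```

Note that the model simply relabels the size and value anomalies as “risk factors”; the formula says nothing about why holding small or cheap stocks should be riskier.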
As we’ll see, EMH urgently needed new measurements that would show some correlation between higher return and higher volatility to save it from extinction. This methodology and reasoning seem to have twisted scientific method into a pretzel to keep the key pillar of EMH from collapsing. As one finance professor, George Frankfurter, put it in discussing the Fama and French findings:
Modern finance today resembles a Meso-American religion, one in which the high priest not only sacrifices the followers—but even the church itself. The field has been so indoctrinated and dogmatized, that only those who promoted the leading model from the start are allowed to destroy it.22
This is not just ivory-tower stuff, as we’ve seen. Beta and other forms of risk measurement determine how trillions of dollars are invested by pension funds, other institutional investors, and the public. High betas are no-nos, while the money manager who delivers satisfactory returns with a low-beta portfolio is lionized.23
Take, for example, Morningstar, the largest service monitoring mutual funds. Although it is an excellent and easily readable source that I refer to often, its concept of risk is problematic. Morningstar’s five stars, its top ranking, widely followed and much sought after, uses Fama’s three-factor model, which is dubious at best, as part of its risk measurement.
We saw that volatility theory has never worked and has cost most people who relied on it dearly over four decades. Yet the researchers are putting together even more problematic risk/volatility hypotheses, ones that almost send chills down my spine. My advice is to avoid them entirely. The next few pages will tell you why.
As we’ve seen, Professor Fama and many others have abandoned CAPM, the original volatility theory, acknowledging that it has failed, while Nobel laureates such as William Sharpe still dispute this contention.24 Much as the stream of new findings on celestial motion, made possible by telescopes’ improved accuracy, destroyed the Ptolemaic system, so CAPM and EMH came under threat as new, more powerful statistical evidence poured in showing that contemporary volatility measures do not work.
EMH proponents recognize the danger their theory is now facing. For investors to be omnisciently rational, there must be a systematic correlation between risk and return. Without it, EMH goes the way of the brontosaurus. If some investors get more return with the same or lower volatility and others get lower returns over time with the same or higher volatility, as we saw in the previous chapter, this indicates that investors are not omnisciently rational, and a dagger is pointed at the heart of EMH and CAPM.
To defend efficient markets, Fama, as we saw, after abandoning CAPM, went to the three-factor model of risk in his 1992 paper, while others have gone on to a plethora of theories, including four- and five-factor risk evaluation models, to show that there is still a correlation between risk and return.
However, this leads to a number of serious errors in the formulation of the new models. The models are built to replace CAPM, yet they are built in identical fashion: all attempt to show that higher volatility provides higher return and that lower volatility provides lower return. If CAPM could not do this, why should any new model built in the same manner do so? CAPM was dumped because it didn’t work, but its central tenet, that there is a direct correlation between risk and return, is being kept as the core of EMH risk analysis.
So researchers must search for new sets of risk and return variables that will give them the correlation between higher risk and higher return. In effect, they must create a new CAPM (naturally with a new name) that behaves almost exactly as the old one was supposed to behave but didn’t. In short, it appears they are attempting to create something along the lines of a financial Stepford wife. But this leads them deeper into the theoretical jungle.
Where can you find these critical new risk variables that will work and give you better results than the old CAPM, which was proved not to work? You can’t very well advertise for them in the classifieds of The New York Times or The Wall Street Journal under “Wanted: new risk factors.” And certainly not under “Wanted: financial Stepford wives.”
Unfortunately, this is where efficient market researchers are today. Have they subsequently found a way out of the volatility woods? Not exactly. The first problem is that they have presented a grab bag of simple correlations that attempt to show a link between risk and return. This is very dangerous scientific ground. As Milton Friedman warned, “If there is one . . . [correlation] . . . consistent with the available evidence there are always an infinite number that are.”25 Even if the academics found a correlation between volatility and return, which is doubtful from what we have seen, there could be hundreds of others that explain risk and return better.
A deeper problem is that the researchers cannot prove that the three-factor, the four-factor, or any other models they put forth actually work rather than being simply chance correlations. As Friedman also noted, one of the basic rules of scientific method is that correlation is far removed from causation. There can be innumerable chance correlations for any effect, but without reasonable evidence that one or another is true, they are pure coincidence and likely to go away with time. A humorous example of chance correlation was the Hemline Indicator, shown in a well-known Wall Street chart for decades. When fashion dictated that hemlines should be high, in periods such as the 1920s, the 1960s, the 1980s, and the 1990s, markets roared ahead. When hemlines dropped in the 1930s, in the 1940s, and again in the 1970s, markets went lower. According to this tongue-in-cheek hypothesis, the height of hemlines dictated where markets would go. Obviously, few took it seriously.
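Friedman’s warning is easy to demonstrate with a small simulation: generate enough random “indicators” and some will correlate impressively with market returns by luck alone. The series below are pure noise:

```python
# Demonstration of chance correlation: correlate many random "indicators"
# against a fixed series of make-believe market returns. Scan enough
# candidates and some correlate strongly by luck alone.
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(7)
market = [random.gauss(0.01, 0.04) for _ in range(20)]  # 20 periods of returns

# 1,000 candidate "hemline indicators," all pure noise
best_r = max(
    abs(pearson([random.gauss(0, 1) for _ in range(20)], market))
    for _ in range(1000)
)
print(f"Best chance correlation among 1,000 noise series: {best_r:.2f}")
```

The winning series has no connection whatever to the market, yet its correlation looks persuasive, which is exactly why a correlation alone, without a causal explanation, proves nothing.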
However, many EMH researchers seem to accept chance correlations as proof, although there is no evidence that a myriad of other variables might also correlate as well as or better than the ones they’ve selected. No new theory of volatility and return has been discovered that shows a consistent correlation between the two. Still, the EMH researchers continue to struggle to find one. They have scanned thousands of possible financial variables in an attempt to do this, but none has worked consistently.
Not only is this bad science; it is likely to blow up over time because of the lack of critical scientific underpinnings to these correlations. Importantly, Fama, in his 1998 survey of market efficiency, wrote that all of the volatility models tested to date “are incomplete descriptions of average returns.”26 In brief, they do not work consistently. Using these methods, the researchers seemingly entrapped themselves.
Unfortunately, this logic has yet another hurdle to clear. Even if it were true, it is not enough to say that a correlation between risk and return has been discovered or shortly will be. If markets are efficient, the sophisticated investors that supposedly keep prices in line with value must have known about the new correlation generations back, even if the academics didn’t. If they did not, how have markets been efficient over the decades? Since there is no evidence that a new proof of volatility has been brought forth or will be, we have to conclude that EMH risk measurements have not been correct and may have been significantly wrong for decades. The logical jungle seems to get more impenetrable with each new finding.
But without this volatility correlation, EMH goes the way of the Ptolemaic system. The effort is also remarkably similar to the Ptolemaists’ tinkering with epicycles and eccentric circles in an attempt to save their system. It’s sad to see gifted researchers cross over from the bounds of scientific discovery into a world of ideologues, some of whom at times appear to be almost zealots.
In chapter 4 we saw the supposedly overwhelming evidence that it is impossible to beat the market over time. Let’s now look more closely at the original work the researchers did, which “proved” that no investor could beat the markets. I’m sure the results will surprise some of you.
There’s no clearer statement of the testament, according to Fama,27 than his own concise description of efficient markets: if the necessary conditions for market efficiency are present—i.e., information is readily available to enough investors and transaction costs are reasonable—there is no evidence of consistently superior or inferior returns to market participants.
The argument assumes that thousands of analysts, money managers, and other sophisticated investors search out and analyze all available information, constantly keeping prices in line with value.28 Since the academics claimed that it was difficult to assess how investors analyze information to determine undervalued stocks, tests of this premise focus on whether groups of investors have earned superior returns. The group whose members most frequently serve as guinea pigs is that of mutual fund managers, because information about their decisions and performance is readily available. The research shows that mutual funds do not outperform the major averages, whether risk-adjusted or not, although the risk-adjusted studies that support the efficient-market hypothesis are now certainly open to question.
The statistics of the original mutual fund researchers in the 1960s and early 1970s failed to turn up above-average performance by investors, thereby contributing the essential evidence to make the EMH case.
But on closer examination, the efficient-market victory vanishes. Studies have demonstrated that the standard risk-adjustment tools the researchers used back then were too imprecise to detect even major outperformance of a benchmark average by fund managers. The statistical tests used made it extremely difficult to show superior manager performance when it existed, because the hurdles that outperforming portfolios had to clear were set far too high. One study, for example, showed that using the techniques of Michael Jensen, only one manager of the 115 measured demonstrated superior performance at a 95 percent confidence level, the lowest statistical level normally acceptable.29
Even to be flagged on the screen, the manager had to outperform the market by 5.83 percent annually for fourteen years. When we remember that a top manager might beat the market by 1½ or 2 percent a year over that length of time, the returns required by Jensen to pick up managers outperforming the averages were impossibly high. Only a manager in the league of Warren Buffett or John Templeton might make the grade, and certainly not every year. One fund outperformed the market by 2.2 percent a year for twenty years, but according to Jensen’s calculations, this superb performance was not statistically significant.30 “There is very little evidence,” Jensen wrote at the time, “that any individual fund was able to do significantly better than that which we expected from mere random chance.”31
In another academic paper, using standard risk adjustment techniques, the researchers showed that it was not possible, at a 95 percent confidence level, to say that a portfolio that was up more than 90 percent over ten years was better managed than another portfolio that was down 3 percent. It was also noted that “given a reasonable level of annual outperformance and variability (volatility), it takes about seventy years of quarterly data to achieve statistical significance at the 95% confidence level.”*38
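The seventy-year figure follows from standard significance arithmetic: a manager’s t-statistic grows only with the square root of the number of observations. A sketch of the calculation, assuming (illustratively) a 2 percent annual edge and an 8.5 percent annual tracking volatility:

```python
# How many quarters of data are needed before an average outperformance
# becomes statistically significant at the 95% level? The t-statistic is
# (mean outperformance / volatility of outperformance) * sqrt(n), so n
# must reach (1.96 * volatility / outperformance)^2. The 2% annual edge
# and 8.5% tracking volatility are illustrative assumptions.
import math

def quarters_needed(annual_alpha, annual_tracking_vol, t_critical=1.96):
    alpha_q = annual_alpha / 4                  # quarterly outperformance
    vol_q = annual_tracking_vol / math.sqrt(4)  # quarterly volatility of outperformance
    return math.ceil((t_critical * vol_q / alpha_q) ** 2)

n = quarters_needed(annual_alpha=0.02, annual_tracking_vol=0.085)
print(f"{n} quarters, i.e., about {n / 4:.0f} years")
```

Because the required sample grows with the square of the volatility-to-edge ratio, even a genuinely skilled manager cannot clear the bar within a career, which is the nub of the statistical-power problem.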
One researcher, in an understatement, noted that the problem lay in weak statistical tools. Corroborating those findings, Lawrence Summers, the former head of the President’s Council of Economic Advisers, estimated that it would take 50,000 years’ worth of data to disprove the theory to the satisfaction of the stalwarts. Indeed, the EMH performance and risk measurement tools were so weak that it proved impossible to delineate even outstanding performance, which by sheer coincidence was the one thing that would invalidate the hypothesis.32 Obviously, this important “proof” that managers could not beat the market was put together with seriously inadequate statistics that coincidentally seemed to consistently give outstanding managers the short end of the count.
How, too, could the $63 billion Magellan Fund, for example, with more than a million shareholders and under three separate money managers, outperform the market for well over a decade? Or John Templeton and John Neff, the latter running billions of dollars for the Windsor Fund for more than two decades? How are these stellar results possible with only publicly available information? Is it sheer chance, as EMH adherents are forced to claim? Are these simply more on a growing list of “aberrations” (a popular term used for events that cannot be explained by a theory)? If they are, we must look at how many other institutional investors have outperformed, using statistics that can actually detect superior performance, not inadvertently filter it out, as Jensen’s methods did.
Given this fact, did the supposedly impartial academics correct their work when better statistical techniques were available? Apparently not. In spite of the above and other evidence, the conclusions of Jensen’s mutual fund study, although seriously flawed, are still used to support the main premise of efficient markets.
Although Fama, French, and others showed that CAPM risk measurements were valueless, this is only a part of the story. Risk-adjusted and non-risk-adjusted mutual fund performance measurements, in addition to Professor Jensen’s, have also been shown to be misleading, because of the weakness of the statistical tools employed. Still they, too, have not been recalculated by EMH defenders to get a fairer picture of how mutual funds have really performed against markets. We have just seen how, as a result of these measurements, outstanding performance was not detected, and this was one of the most powerful “proofs” that EMH used to show that markets were efficient. As noted, the records of most managers who consistently outperformed the market were wiped out by statistical gobbledygook.
The ghosts of beta and other academic risk measurements still walk the night, defending EMH and weeding out any above-average performance not permitted by the theory. These are not the only instances of such tactics being employed by the true believers.
Revenants and errors notwithstanding, superior performance, a death knell for EMH, could not be eliminated by the believers. Next we’ll look at some of the ghost busters.
Another major challenge to EMH is its claim that groups of investors, say, those with professional knowledge, skills, or methods, have consistently kept prices where they should be.33 We have just seen otherwise. However, EMH makes an even stronger statement: that no group of investors and no investment strategy can do better than the market over time. And here again the trouble starts.
The tenet that managers do not outperform or underperform a market benchmark has a corollary: there is no method or system that consistently can provide higher returns over time. This statement is contradicted by a large body of evidence that some investment strategies consistently do better than the market and others consistently underperform over time. The jury has come in with a unanimous decision on this one: the verdict is solidly against EMH.
As we will see extensively in Part IV, a considerable body of literature demonstrates that contrarian strategies have produced significantly better returns than the market over many decades. The explanation for this explicitly contradicts the central tenet of EMH—that people behave with almost omniscient rationality in markets.
Conversely, the tenet that no group of investors and no strategies should consistently underperform in an efficient market is another rock that EMH founders on. Below-market performance has been turned in for decades by people who buy favorite stocks, as we will see in detail when we examine contrarian strategies. Another significant underperformance finding, as noted, is the research that shows that IPOs have been dogs in the marketplace for forty years.34 So overperformance and underperformance for long periods—neither of which, EMH states, is possible—show up on both sides of the anomaly coin.
The anomalies show no sign of going away after four decades of counterchallenges; rather, they have been gaining in strength in the last few years, as dozens of articles have examined contrarian effects. The most important anomaly—contrarian strategies that beat the averages over extended periods—was, as we saw, documented by Professors Fama and French in 1992.35 Their own data contradict the contention that efficient markets have held up well. And the claims that these strategies are more risky have never been documented. The body of contradictory findings above challenges believers to either retract much of the theory or explain how such events can happen.
Another major premise of EMH is the hypothesis that all new information is analyzed almost immediately and accurately reflected in stock prices, thus preventing investors from beating the market. Burton Malkiel, the author of A Random Walk Down Wall Street, now in its tenth edition, wrote in an article reviewing the evidence on efficient markets in 2005, “In my view, equity prices adjust to new information without delay, and, as a result, no arbitrage opportunities exist that would achieve above average returns, without accepting above average risk.”36 But do equity prices really adjust to new information “without delay”? This statement has been hard-core EMH for more than forty years and has been cited by almost every scholar in the field. True, prices often react to new information about a stock, but where is the proof that they react to it correctly?
There is none. In a series of studies we are about to examine, we’ll often find that the researchers mistakenly take any market reaction to new information as the correct one. A number of these studies also make it clear that the initial market reactions are wrong. You will also see this predictable reaction to earnings surprise over thirty-eight years in chapter 9, where the first reaction repeatedly is not the correct one. It is also demonstrated in papers by Ray Ball and Philip Brown (1968)37 and Victor Bernard and Jacob Thomas (1990)38 and noted by Eugene Fama in his 1998 survey of EMH literature.39
The fact that Professor Fama finds these latter researchers’ findings to be “robust” is particularly interesting, as they directly dispute the important assumption of efficient markets that new information is immediately and correctly reflected in stock prices. Here again we see a vital pillar of EMH begin to rock because an essential assumption of the theory was never tested by its proponents in a thorough manner. Stocks were tested merely for a reaction to new information, not for the correct reaction to the information. There are many dozens of potential prices a stock can reach on news; how do we know which one is correct? It’s almost equivalent to saying that if a man can jog he’s capable of winning the 100-meter sprint at the Olympic Games.
To give you a fuller grasp of the depth—or lack thereof—of the testing that was used to back up this argument, let’s look at other research performed in the past few decades that supposedly left no doubt of how quickly and accurately investors interpreted market information.
The landmark 1969 study to show that prices adjust to new information rapidly was done by four outstanding researchers of EMH—Eugene Fama, Lawrence Fisher, Michael Jensen, and Richard Roll (hereafter, FFJR collectively).
The researchers examined all stock splits on the New York Stock Exchange from 1926 through 1960.40 The results the investigators arrived at, using extremely sophisticated statistical techniques for the time, indicated that stock prices do not move up after splits, as investors have digested all the positive information beforehand. The authors concluded that their work provides strong support for the hypothesis that the market is efficient. In truth, this, like most of the other experiments in this category, is a rather simplistic test of market efficiency, as it involves a very basic test of understanding uncomplicated, readily available information, hardly on a par with the complex decisions involving thousands of interacting variables that are called for in more normal investment analysis, such as those we saw in chapters 2 and 3. But to move on.
This study has been cited in hundreds of academic papers and has been taught to hundreds of thousands of graduate students as one of the major research works upholding market efficiency. However, the study is seriously flawed. The researchers knowingly measured a time period months after the information was released to gauge its effect on the market, rather than measuring at the time when the information was made public. It’s not a little like locking the barn door after the mare has galloped away.
The information enters the market at the time of the split announcement, most often two to four months before the split is distributed to the company’s shareholders. The earlier time is when the measurement should commence to see if the news resulted in a rise in stock prices as a result of the split, as it does for earnings surprises, dividend increases or decreases, or other announcements that can have a major impact on stock prices.
Sadly, this information was unavailable, so the researchers measured from the month in which the stock split was actually distributed, a period when the information had been out for two to four months, and reported that no extra return was made from that point onward. Naturally, their measurements of stock movement at that point were meaningless, as the market had already digested the news from two to four months before and the informational content was already fully reflected in the stock prices.
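The measurement-window problem above can be made concrete with a small sketch. The numbers below are entirely hypothetical, chosen only to illustrate the mechanics: a stock whose split is announced in month 0, distributed in month 3, and whose abnormal (above-market) returns are concentrated in the months just after the announcement.

```python
# Illustrative event-study sketch with made-up monthly abnormal returns
# for a hypothetical stock: split announced in month 0, shares
# distributed in month 3.
abnormal = {
    -2: 0.01, -1: 0.01,            # pre-announcement drift
     0: 0.03,  1: 0.02,  2: 0.02,  # run-up after the announcement
     3: 0.00,  4: 0.00,  5: 0.00,  # flat after the distribution
}

def car(start, end):
    """Cumulative abnormal return over months start..end inclusive."""
    return sum(abnormal.get(m, 0.0) for m in range(start, end + 1))

# Measured from the distribution month (the starting point the FFJR
# data forced), the post-event abnormal return looks like zero ...
print(f"CAR from distribution: {car(3, 5):.2%}")

# ... but measured from the announcement month, the same data show a
# gain that took months to be fully impounded in the price.
print(f"CAR from announcement: {car(0, 5):.2%}")
```

In this toy series the distribution-month window shows a 0.00 percent cumulative abnormal return while the announcement-month window shows 7.00 percent: the same data, two opposite conclusions, depending solely on where the clock starts.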
In Contrarian Investment Strategies: The Next Generation, I analyzed the chart the researchers provided from the examination; it was obvious that the steepest run-up after the announcements of splits came in the two-to-four-month period immediately after the announcement. In fact, the average extra monthly return for the four months in which the splits are announced is almost double the above-market returns in the previous twenty-six months.41
This raises a difficult problem for the researchers. What the chart appears to show, assuming that the majority of split announcements occurred two to four months prior to the stock distribution, is that the stocks may indeed have provided above-average returns after the announcement date. The positive adjustment to the splits, then, appears not to have been immediate but to have taken place for some months after the split’s announcement.
If this is the case, the researchers’ argument is invalid. The most logical conclusion is that the stocks continued to rise as a group for an extended period after the split announcement, which is exactly opposite to what the paper concluded.
The academics do, as noted, explain several times in the paper that the announcement date was not in their database.*39 Perhaps this was fortunate for them. If it had been possible to place the split announcement at the correct point, as the above analysis indicates, the conclusion would have been very different: the evidence would have shown markets reacting to new information in an inefficient, not an efficient, manner, which would certainly call the overall efficiency of markets into question.
Three decades later, in 1996, the research was replicated by David Ikenberry, Graeme Rankine, and Earl Stice,42 who examined 1,275 two-for-one stock splits from 1975 to 1990 on the New York Stock Exchange and the AMEX. They observed excess returns of 3.4 percent after the split announcement and 7.9 percent for the first year after, followed by higher average returns in the three-year period following the split.
Hemang Desai and Prem Jain (1997) found higher returns of 7 percent to 12 percent in the twelve months following a stock split.43 These results flatly contradicted FFJR’s 1969 paper, again providing evidence that markets react to new information in an inefficient, not an efficient, manner. The above findings are in line with our analysis.
Professor Fama, in his 1998 survey of EMH research, ignores the fact that the critical FFJR 1969 findings have been strongly refuted, and that the glaring flaw in the methodology has been identified. Instead, he seemingly questions the other researchers’ findings, noting that the time periods of the studies are different, along with other minor methodological points. In doing so, it appears, he is attempting to deflect the fact that the critical focus of the FFJR paper—to determine whether the market responds almost immediately to the announcement of a stock split—was flubbed. That the time periods were different is entirely irrelevant to this work. It was a smooth maneuver; since the point of the original study was to find out whether stock splits have an immediate impact on prices, he has sidestepped the raison d’être of the 1969 study and ducked the fact that the later findings seem to disprove the FFJR research. Some spinmeisters might want to study such thinking, which seems classic to their field.
Without the FFJR paper and other similar research, which also has significant problems, the critical tenet of EMH—that investors process information quickly and correctly—collapses completely.
As noted, the FFJR study is considered by many to be one of the strongest and best-known pieces of research supporting EMH.
There’s nothing like really taking a close look at the original data. I mean really close, if you want to see what a researcher is doing. So let’s look at other studies that claim that the market adjusts quickly to new information. The first was performed by Ray Ball and Philip Brown in 1968.44 The two investigators examined the normal rates of return from 1946 to 1966 for 261 firms. They divided the stocks into two groups, those whose earnings in a given year increased relative to the market and those whose earnings decreased. The performance was measured after each year-end. They found that stocks whose earnings increased outperformed the market, while those that decreased underperformed the market. The researchers concluded that the stock prices had already anticipated most of the news of earnings announcements.
The theorists overlooked one simple fact that is well known to most investors: companies normally report quarterly, not annually. The SEC has for many years required public companies to disclose this financial information within ninety days. Furthermore, even back then, analysts provided research reports on how companies were faring, most often containing full-year earnings estimates, often supplemented by press releases from company spokesmen. Still, Ball and Brown stated that investors correctly judged the prospects of companies and thus determined the movement of their stock prices when they actually had the information on hand to do so. Again the question comes up of how aware the researchers are of practical market information, such as reporting and research. To conclude that the market is efficient from this rather obvious and again simple finding is stretching the point.
Another supposedly awesome bit of evidence to support the hypothesis was a study by Myron Scholes in 1972.45 Scholes analyzed the effect of secondary offerings of stock and concluded that, on average, a stock declined 1 or 2 percent when such an offering was made. The largest declines resulted from the sale of stock by corporations or corporate officers. He also stated that the full price effects of a secondary are reflected in six days. He concluded that since the SEC does not require the identification of the seller until six days after the offering, the market anticipates the informational content of the secondary and is therefore efficient. Here again is a sweeping conclusion based on nominal price movements over a short period of time.
Secondary offerings normally bring stock prices down temporarily; this is almost a platitude. What is important is whether the stocks are brought down appropriately. How do they perform relative to the market three, six, or twelve months later? Too, many brokers disclose beforehand who the sellers are. To state that the market anticipates this information because the SEC does not require it is a chancy conclusion. Often this information is provided anyway.
Another study examined how quickly markets integrate new information into stock prices. The research considered how stock prices react to the announcement of merger and tender offers. Fama, in his 1991 review of efficient markets, stated:
[I]n mergers and tender offers, the average increase in stock prices of target firms in the three days around the announcement is more than 15%. Since the average daily return on stocks is only about .04% (10% per year divided by 250 trading days), different ways of measuring expected returns have little effect on the inference that target shares have large abnormal returns in the days around merger and tender announcements.46
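The arithmetic behind Fama’s comparison can be unpacked in a few lines, using only the figures in the quote itself:

```python
# Fama's back-of-the-envelope figures: roughly 10% average annual
# return spread over about 250 trading days gives the expected daily
# return he cites.
annual_return = 0.10
trading_days = 250
daily = annual_return / trading_days
print(f"expected daily return: {daily:.2%}")        # about 0.04%

# Expected return over the three-day announcement window, versus the
# observed average 15% jump in target-firm prices.
three_day_expected = daily * 3
print(f"3-day expected return: {three_day_expected:.2%}")
print(f"abnormal component: {0.15 - three_day_expected:.2%}")
```

The expected three-day return is on the order of 0.12 percent, so virtually all of the 15 percent move is abnormal—which is why, as Fama says, the choice of expected-return model hardly matters here.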
Again this appears to be a wee break with reality. Merger and tender offers are almost always made at a higher price, sometimes significantly higher than the price before the offer.
That stock prices go up 15 percent on average over the three days around the announcement of a merger or tender offer certainly is not proof that markets are efficient. Again, as seen earlier in this chapter, highly regarded EMH theorists make a major mistake in assuming that just because stocks respond to new information, they are responding correctly to it. Often the bids for both mergers and tender offers are raised as company management demands and frequently receives higher prices (particularly with hostile takeovers). The stocks also often trade at a discount to the proposed offering price for an extended period of time.
The 15 percent that stocks appreciate around the date of the initial announcement of an offer is half of the approximately 30 percent total increase that shareholders received on average, for periods ranging from a few weeks to a couple of months before or after the offer was consummated, according to other studies.47 Even allowing for the occasional offer that is dropped, the first tender price appears far too low. The market again seems to be incorrect in its initial pricing of tender offers and mergers.
No evidence is provided that the initial reaction to the news is the correct one, as prices far too frequently move up from the trading levels following the announcements. This mispricing spawned a generation of risk arbitrageurs who have made enormous returns on their capital. The study is somewhat unsophisticated in its knowledge of mergers and acquisitions, and from the evidence available it appears that markets are inefficient rather than efficient when measuring initial offers relative to the final takeover price.
To use a chess analogy, it’s like concluding that if I move a chess piece after a move by the current world chess champion, the fact that I pushed the piece at all puts me on his level of play. Once again the theory: any price movement is the correct price movement. Nonsensical, yes; a nice daydream, yes; but also the essence of the weak “proofs” of EMH we are looking at.
Our case further strengthens with other evidence that efficiency is quite a bit rarer than the theorists admit. The evidence that markets do not adjust quickly to new information keeps mounting. Roni Michaely, Richard Thaler (one of the pioneers of behavioral finance), and Kent Womack studied the subject in 1994.48 The three researchers measured how stocks behaved after a dividend cut or increase during the 1964–1988 period. The average stock underperformed the market by 11 percent in the year after the announcement of a dividend cut, and by 15.3 percent for the three-year period. It outperformed by 7.5 percent in the year following a dividend increase, and by 24.8 percent for the three years afterward. This study indicates again that markets do not adjust to new information quickly.
A number of other studies have shown that the market is slow to digest new information. Several researchers have found that when a company reports an earnings surprise (that is, a figure above or below the consensus of analysts’ forecasts), prices move up when the surprise is positive and down when it is negative for the next three quarters.49 Jeffery Abarbanell and Victor Bernard, as will be noted in chapter 9, have shown that analysts don’t adjust their earnings estimates quickly after past mistakes.50 The “buy-and-hold” contrarian strategies presented in chapters 11 and 1251 demonstrated that “worst” stocks with earnings surprises continued to outperform and “best” stocks to underperform the market for periods of up to nine months. These findings that markets are slow to react fully to information, rather than reacting instantaneously, appear to shoot another arrow through EMH.
Finally, Robert Shiller argues that if markets were efficient, when we look back at history, stock prices at a given time should be related to prices that we can say are “rational.”52 To find out if this is true, he looked back at prices that would be considered rational in light of the dividends subsequently paid. The study covers the 1871–1979 period.
The rational index Shiller created after the fact follows a smooth, stable path, whereas the actual market index veers sharply above or below it for extended periods, displaying substantial volatility. Shiller concluded, “[S]tock price volatility over the past century appear[s] to be far too high . . . to be attributed to new information about future real dividends.”53 In short, markets over that long term did not respond accurately to information but rather moved far higher or lower than was warranted.
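Shiller’s ex post “rational” price is simply the discounted value of the dividends a share actually went on to pay. A minimal sketch of the construction, with an entirely hypothetical dividend stream, discount rate, and terminal value (none of these are Shiller’s actual figures):

```python
# A minimal sketch of Shiller's ex post rational price: the value a
# share would have had if investors had perfectly foreseen all
# subsequently paid dividends. All inputs below are hypothetical.
def ex_post_rational_prices(dividends, r, terminal_price):
    """Work backward from the end of the sample: each period's price
    discounts that period's dividend plus the next period's price."""
    prices = [0.0] * len(dividends)
    nxt = terminal_price
    for t in range(len(dividends) - 1, -1, -1):
        nxt = (dividends[t] + nxt) / (1 + r)
        prices[t] = nxt
    return prices

# A smooth dividend stream produces a smooth P* series; Shiller's
# finding was that actual prices swung far more than any such series
# of subsequently realized dividends could justify.
dividends = [1.00, 1.03, 1.06, 1.09, 1.12]   # steady 3% growth
pstar = ex_post_rational_prices(dividends, r=0.05, terminal_price=57.0)
print([round(p, 2) for p in pstar])
```

Because future dividends are averaged into every period’s value, the resulting series is inherently smooth, which is what makes the wild swings of the actual index around it so damning for the efficiency claim.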
Recall that adherents of EMH believe that knowledgeable investors keep prices where they should be. Unfortunately, this does not appear to be true; and if it is not true, then the most important axiom of the hypothesis is gone.
One of the key questions efficient-market adherents have not examined is how professional investors keep prices in line with values. What methods do they use to do so? EMH believers have never really answered this question; it remains nothing more than a critical theoretical assumption. Perhaps this is due to the fact that academics have little understanding of the tools and models knowledgeable investors employ. In chapter 2, we looked at some of the important fundamental methods used to train analysts and money managers, including the use of numerous stock market evaluation techniques and ratios. These methods, if followed, should prevent them from buying stocks that are enormously overpriced.
If they do buy bubble stocks or other highly overpriced stocks, as actually occurs frequently, they are walking away from their years of experience and training and the valuation methods they have used repeatedly through their careers. Such actions would not be considered rational and should not happen. However, they do occur repeatedly during bubbles or periods of skyrocketing prices, as does selling excellent companies that have been knocked down sharply in periods of panic.
Importantly, the errors are made, as noted, by the very professional and knowledgeable investors that EMH states keep markets efficient. If they can’t keep prices at correct levels, how do markets stay efficient? The answer, obviously, is that they don’t. That is the real reason bubbles happen so frequently and go to prices that are often astronomical before they are dashed down in the ensuing panic.
I know, these seem to be very peculiar rational investors. Since we are in an EMH chapter, I will only refer in passing to the psychology we learned in Part I, which can play into this otherwise unfathomable behavior.
The history of science teaches us that, given capable, intelligent people, large errors normally do not occur in the development of a hypothesis but rather occur in the assumptions upon which the work is based. Powerful statistical techniques without realistic assumptions take on a life of their own. As bad currency drives out good, more than five decades of bad constructs in finance and economics have driven out good science, leaving few useful contributions for the enormous effort expended.
To be fair, the concept of efficient markets has come under attack even by financial academics. Edward Saunders, Jr., using the work of Karl Popper, one of the important theorists on scientific method in the last half of the twentieth century, criticizes the scientific approach of EMH.54 Popper stated in a famous analogy that to prove the theory that all swans are white, the researchers should not concentrate their efforts on searching for more white swans. On the contrary, they should search for black swans, because finding even one would destroy the theory.55 EMH researchers have not followed Popper’s teachings. The black swans of EMH are the ever-increasing number of major anomalies outside the theory’s explanatory range. Not only have EMH researchers continued to search for more white swans, but they have put together an unrelenting campaign to exterminate black swans—the anomalies that cannot exist if EMH is correct.
Even if the studies claiming that markets are efficient were not problematic, there is a much more serious question about them that was raised in chapter 4. The scientific findings were far too modest to justify the researchers’ all-encompassing, revolutionary conclusions. What proof did the researchers have that the markets respond not only immediately but correctly to new information? None. They accepted market reaction to uncomplicated information as proof positive not simply of reaction but of the correct price reaction to the event. The investigators never attempted to test investors’ ability to interpret far more complex financial and economic data, such as we’ve viewed throughout the text.
The pattern is common not only to EMH studies but to most areas of mathematical economics. The researchers are very rigorous in their statistical analysis but extremely liberal, if not specious, in their interpretation of broader issues. We saw this in their presentation of the studies that attempted to prove that markets are efficient. The work looked at obvious examples of news affecting markets. These findings, which one could call “slivers of efficiency,” led the researchers by quantum leaps to much broader conclusions. If markets can understand the impact of relatively simple news of mergers or secondary offerings, the reasoning goes, they must be equally capable of gathering and correctly interpreting complex data about companies; industries; economic, monetary, and financial conditions; and the market itself.56
One must marvel at the boldness of these scholars to build an all-encompassing theory on such flimsy evidence. It is an enormous leap of faith from these simple findings to the conclusion that the market correctly and almost instantaneously interprets all information, no matter how complex, such as that contained in a bubble or panic. That is like saying that if my daughter, when she was six, could count to a hundred without difficulty, she should also be able to comprehend the theory of relativity—though I’m sure that if asked back then, she would have readily given me an answer, as she did to anything else, but somehow I think it would have missed the mark.
Unfortunately, when put to the test, the canons of EMH resound like a string of stunning military defeats. None of the risk measurements that the academics credit to rational investors have stood the test of time; here, it seems, we have a financial epicycle. The risk-return paradigm must exist, or EMH will be remembered in history much like the Ptolemaic system, a theory widely popular for a long time that ultimately failed and was discarded.
We have also examined in some depth the three key assumptions of the efficient-market hypothesis.*40 The three have been discredited both by the relative weakness of and errors in the supposed “proofs” and by very strong evidence that disputes the efficiency hypothesis.
Finally, as we’ve previously established, there is the major problem with the EMH assumption that investors can interpret vast amounts of data. Findings in cognitive psychology and other psychological disciplines demonstrate that this assumption is not accurate. To use another chess analogy, although hundreds of millions of people play the game, there are only a score of grand masters and only one world champion. If people are not equally adept at interpreting the complex world of the chessboard, can they be any more equal at understanding the more complex and significantly more emotional world of markets?
Ultimately, what chapters 5 and 6 seem to declare to EMH, CAPM, and MPT advocates is the equivalent of “Sorry, it turns out that the sun does not revolve around the earth. Try to accept it.”
How strong is the support for EMH today? A first thought might be that it is very strong, given the thousands of articles still written by scholars in some of the world’s most prestigious financial and economic journals and its widespread use in the investment world. Yet from what we’ve seen, this revolutionary theory seems to have been built on the flimsiest of foundations—unkind critics might say a house of cards. As we saw, one of the most important pillars of the hypothesis—the theory of the rational measurement of risk—was simply a questionable assumption of financial academics, which was necessary to bind investment theory to economics. Yet for EMH to be correct, it was essential that investors measure risk in this way. The academics willed it to be true. And they continue to do so today, although it almost boggles the mind that some of the world’s finest economists have trapped themselves in such a logically indefensible position.
The most important reason researchers failed so badly on risk measurement is the manner in which EMH researchers and most other economic investigators conduct their research. Since World War II the social sciences have attempted to become as rigorous as the physical sciences. No discipline has put more effort into this goal than economics. Starting more than sixty years ago, economists held out high hopes that through mathematics they could make the dismal science as predictable as Albert Einstein’s theory of relativity or Johannes Kepler’s laws of planetary motion. Nobel laureate Paul Samuelson, then a young professor of economics at MIT, was the first to integrate the techniques of differential equations, which had met with such success in physics, into a structured approach that could be used to study virtually any economic problem.
The key assumption was rationality: for a firm it meant maximizing profits; for an individual, maximizing his or her economic desires. Rational behavior is the bedrock of Samuelson’s work. This dubious platform allowed economists to merrily build the most complex mathematical models. Economics could now be converted into a precise physical science.
A PACT WITH MEPHISTOPHELES
It would be unfair to say that economists and efficient-market adherents are unaware of the simplicity and vulnerability of their assumptions. The premise of economic rationality is one that has perplexed economic theorists for a long time. The assumption was derived in the golden age of rationalism in the eighteenth and early nineteenth centuries.
Absolute rationality has all but been discarded in philosophy and the social sciences. It’s commonly accepted that although people often act rationally, there are also many times when they don’t. Market and economic history strongly supports the findings of the behavioral experts.
Why, then, do most economists persist in using an outmoded concept of human behavior as the cornerstone of their theory? Many agree that the concept of rationality, so central to EMH, is problematic. Still they strongly defend its usefulness; as one book stated decades ago, “To introduce a more realistic assumption would make economic theory very difficult.”57
Economic theory and later financial theory have been caught on the horns of this dilemma for many decades. Should they espouse realistic assumptions, and if so, what should these be? Or should the assumptions, although acknowledged to be unrealistic, allow extensive analysis, however flawed in terms of practical value? It’s difficult to construct economic theory on numerous behavioral or other assumptions, even if they’re realistic. Rationality gives economists one simple and unwavering assumption to build upon; however, the construct is often seriously flawed.
Paul Samuelson, as noted, was the pioneer in using highly sophisticated mathematics to solve economic problems. A new economic age had begun. The goal was to make economics as predictable as physics or other physical sciences. The findings of economic theory would be as precise as measuring the exact expansion of steel on a bridge of a given length as the temperature rose. The only solid platform upon which the higher math could be built was the bedrock of rationality. Integrating sociological or psychological theories could result in a number of possible starting points, with new ones added over time, and it would be impossible to anchor complex mathematical formulas on a changing behavioral platform. No, the most practical solution, to most economists, was to use the assumption of consistent rationality, even if it was often incorrect.
As a result, the great majority of economic research gravitated in that direction, despite the warnings of some of the important economic thinkers of the past. John Maynard Keynes, for example, was trained as a mathematician but refused to build his classic theory on unrealistic assumptions. Like his teacher, the great Victorian economist Alfred Marshall, Keynes believed that economics was a branch of logic, not a pseudo–natural science. Marshall himself wrote that most economic phenomena do not lend themselves to mathematical equations and warned against the danger of falling into the trap of overemphasizing the economic elements that could be most easily quantified.
The Samuelson revolution, with its emphasis on complex quantification parroting the physical sciences, came to totally dominate economics in the postwar period. Mathematics, which pre-Samuelson was a valuable but subordinate aid to reality-based assumptions, now rules economics. Good ideas are often ignored by economists simply because they are not written down in pages of highly complex statistical formulas or don’t employ equations using most of the letters of the Greek alphabet. The vast amount of research published in the academic journals contains minuscule additions to economic thinking but is dressed in sophisticated mathematical models. Bad ideas planted in deep math tend to endure, even when the assumptions are questionable and evidence strongly contradicts the conclusions. As Nobel laureate Paul Krugman noted, “As I see it, the economic profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.”58
Economic ideas and principles once understood by educated readers are now unfathomable to all but the most highly trained mathematical researchers. This would be well and good if economics had achieved the predictability of a physical science. But without realistic assumptions, the dismal science has been broken down by mathematics rather than rejuvenated by it. Nobel laureate Joseph Stiglitz, in his 2001 Nobel Prize lecture, spoke to this point in discussing the inadequacy of preferred economic models: “[With one model] I only varied one assumption—the assumption concerning perfect information—and in ways which seemed highly plausible. . . . [As a result] we succeeded in showing not only that the standard theory was not robust. . . . Changing only the one assumption . . . had drastic consequences, [to the theory] but also indicated that an alternative robust paradigm with great explanatory power could be constructed.”59
In the wake of the financial crisis and the Great Recession, the flaws in economics and EMH have taken on new urgency as millions of people are asking how the economy could have gone so wrong. The questioning is coming not only from major economists and the large numbers of unemployed but from The Wall Street Journal and other bastions of laissez-faire.60
Most economists, including the world’s most powerful central bankers, had believed for decades that people were rational enough and the markets smooth enough that the whole economy could be reduced to “a handful of equations.” The equations are assembled into mathematical models that attempt to mimic multilevel economic behavior from Washington to Berlin to Beijing. But, as we have seen, they didn’t work. Instead we are still suffering through the worst financial crisis in modern history. The questions being asked are certainly not about efficient-market theory alone.
The questioning has gone on for decades. As John Cassidy pointed out in an excellent article in The New Yorker, complex new mathematical theories, such as those of Robert Lucas, Jr., a Nobel Prize winner from the University of Chicago, led a generation of novice economists to build ever more complex models, yet have since been discredited, with no agreement on what should replace them.
Lucas’s work concluded that the Federal Reserve should not actively guide the economy but only increase the money supply at a constant rate.61 The research came under sharp theoretical attack, again because at the core of Lucas’s complex mathematical formulas were untenably simple assumptions, such as the notion that supply always equals demand in all markets. (If this were true, we could not have unemployment; the supply of workers would never exceed the demand for them.) Once the supply/demand assumption is dropped, few of Lucas’s conclusions hold up. Commenting on the impracticality of Lucas’s work, Joseph Stiglitz, then the chairman of the president’s Council of Economic Advisers, said, “You can’t begin with the assumption of full employment when the President is worried about jobs—not only this President, but any President.”62
Economics, traditionally one of the most important of the social sciences, has suffered a self-inflicted decline. Not all in the profession are unaware of this. In 1996, the Nobel Prize in Economics was awarded to two men: William Vickrey, an emeritus professor at Columbia University (for a research paper in 1961), and James Mirrlees, a professor at Cambridge University. Although the popular press extolled Vickrey’s contribution as breaking fresh intellectual ground in fields as diverse as tax policy and government bond auctions, the professor denied the hyperbole. He said, “[It’s] one of my digressions into abstract economics. . . . At best it’s of minor significance in terms of human welfare.”63 When interviewed, he talked instead about unrelated work he had done, which he considered far more important.

Complicated statistical analysis is no different in the investment arena, nor should it be, since it’s another branch of economics. Simple assumptions are usually necessary as a platform for abstruse statistical methods. More complex assumptions, although far more descriptive of the real world, do not allow the development of the mathematical analysis that the researchers desire or the academic journals will publish.
Given the simple assumption of rationality, researchers in the best tradition of the Samuelson Revolution can merrily take off to examine how the totally rational investor will approach markets. They can then use the most complex differential equations or other mathematical methodology to discover new results. Whether the assumptions have the remotest connection to reality is irrelevant. Who cares?
Thomas Kuhn, in his classic work The Structure of Scientific Revolutions,64 takes a tolerant approach to the problem of paradigm change. It is essential, Kuhn wrote, for scientists to have a paradigm from which to work. A paradigm is the body of theory the scientific community in a field accepts and works within.
“Paradigms gain their status,” Kuhn argued, “because they are more successful than their competitors in solving a few problems that the group of practitioners has come to recognize as acute.”65 Thus EMH, in its early years, provided an explanation of why prices fluctuate randomly and why technicians could not consistently outperform markets.
Kuhn also noted that “normal science does not aim at novelties of fact or theory, and when successful finds none.”66 As a paradigm becomes widely accepted, its tools and methods become more deeply rooted in the solution of problems. The accepted tools for broadening the efficient-market paradigm were beta and MPT.
The goal of normal science is not to question the reigning paradigm but to explain the world as viewed through it. Anomalies that contradict the basic tenets of the paradigm are a serious challenge to it. A paradigm must be able to explain the anomalies, or it will eventually be abandoned for a new one that provides explanations that the first one cannot.
Thus scientists naturally defend their paradigm. They prefer to believe that all swans are white and do not search for black ones. If scientists find black swans—the anomalies to EMH are such an example—they try to explain them within the theory. A change in a paradigm is a nerve-racking and difficult period with much acrimony.*41
Its adherents have a vested interest in upholding the validity of the old paradigm, because all their knowledge, experience, and recognition are tied to it. Rejecting their paradigm is often tantamount to rejecting their religion. Kuhn writes that many older scientists will never give up the current paradigm; others will accept parts of the new one and try to integrate it with the old. Usually, it takes a new generation of researchers to accept a new paradigm completely. As Paul Samuelson once put it, “Scientific progress is advanced funeral by funeral.”
And here is a key to why EMH still has so many adherents. Kuhn also brings up a critical point: scientists will never abandon a paradigm, no matter how harsh the criticism, unless they have a more compelling one to take its place that will solve most of the problems the old one could not. It is not surprising, then, that even with the major challenges put to EMH, the hypothesis has not been abandoned. Even though its central tenets have been destroyed empirically, it lives on. Thus, when CAPM was destroyed, the deans of efficient markets stated that there were new measures of risk standing patiently in the wings, with others waiting to be discovered. And when contrarian value methods were shown to outperform the market, efficient-market researchers claimed that they were riskier. EMH is following the precise course of scientific discovery that Kuhn predicted.
In Part IV, a new paradigm of market behavior will be offered, based on much we have learned about predictable investor psychology and with strong empirical evidence to back its assumptions. The good news for investors is that it leads to methods that have consistently outperformed markets over time. The bad news is that it is likely to go through an academic Dante’s Inferno for years, perhaps decades, if Kuhn is correct.
Kuhn also noted that new research has not only been rejected; its adherents have at times been punished. Thus Giordano Bruno, a Renaissance poet and philosopher, was burned at the stake, and Galileo, as we saw, was imprisoned. That EMH researchers seem intolerant of work opposing their theory is certainly predicted by the history of scientific discovery. Not surprisingly, there is no forum for dissenting thought, as the academic journals normally do not publish work they consider at odds with their paradigm, including that of knowledgeable Wall Streeters and psychologists.
Too, EMH adherents are not above attacking research that disagrees with their beliefs. In the early 1980s, for example, both Barron’s and Forbes ran feature stories questioning the efficacy of EMH. The result was an onslaught of critical letters from hundreds of academics that lasted for months. The most common theme was: how could the magazines dare to challenge the work of the distinguished researchers?
Another disagreeable characteristic of changes in paradigms, demonstrated again with EMH, is the researchers’ use of a number of methods to make the black swans go away. Any criticism of EMH research is met either by silence, if it is not published in major financial or economic journals, or by dismissal on methodological grounds, as was the case with contrarian strategies until the evidence became too strong. EMH researchers then stated that the strategies must be riskier, although they have not yet found a reason why.
If black swans cannot be ignored, they are attacked. A favorite charge of senior academics defending EMH is data mining, that is, sifting the data until you find the results you want. This charge, of course, is not leveled against EMH researchers, who, as we have seen, are now desperately trying to find a risk-reward formula that actually works, although their data mining appears to be on a scale worthy of giant mining concerns such as Rio Tinto or BHP Billiton. Another of their favorite techniques is to criticize methodological flaws, almost down to the misplacement of a semicolon. EMH believers, fortunately, have never made such errors.
But the black swans refuse to swim away. They are hatched by many causes: the shattered assumptions of EMH risk theory, the widespread evidence of investor overreaction, and the enormous mispricing in both bubbles and crashes. EMH believers summarily dismiss two different but opposite anomalies—investor overreaction and underreaction—glibly stating that since there is significant evidence for both, one offsets the other.*42 This is questionable science, since they are two separate anomalies that the researchers cannot explain. Eliminating two separate bodies of evidence because they show opposite results is like stating that 1 + 1 = 0.
These, then, are some of the hurdles nonbelievers must jump. Some of us have been through this routine numerous times. I turned in my first paper on the superior results of low-P/E strategies in 1977. It was never sent out to a referee, because the editor obviously believed it to be heretical. Only when several of the deans of EMH came out with very similar research fifteen or more years later was it recognized, the credit, of course, going to the very deans who had originally dismissed such work. Perhaps even worse, the academic journals are, in effect, in the pockets of the major EMH researchers. To publish in the major journals, you must be a true believer or at least a reasonable compromiser. The journals, of course, can make or break the careers of most academics.
Recall Popper’s statement that only one black swan would be sufficient to kill a theory. Unfortunately for the theorists, far too many are paddling around in the EMH pond for the species ever to be endangered.
This, then, is the dark side of EMH. But to put it into context, it is not very different from the protest and rancor when any established body of knowledge is threatened by inexplicable facts, and dissenters, at least to my knowledge, are not burned at the stake.
If I have been somewhat hard on EMH, it is because I cannot accept the manner in which its case has been built, or the widespread damage that its core ideas have brought to markets and, through them, to many millions of people. Although I do not believe the hypothesis, I certainly respect the arduous experimental efforts made by the many researchers in the area. They have finally brought the long-overdue winds of change to Wall Street. Investors who are really interested in how the market works must appreciate these university researchers. Much of the research was necessarily tedious, dull, and time-consuming, but it was essential in building the foundation of a new investment structure.
Without the thorough measurement of technical and fundamental performance records, Wall Street would have continued in the old, unsuccessful, often disastrous ways, with no impetus toward change. Although it’s obvious that I believe EMH is transitory, it did set those winds of change in motion.
Armed with the knowledge that psychology powerfully influences our investment decisions, and with the discovery that the most widely followed investment theory of our time will not help us but will actually work against us, we are now ready to examine strategies that have worked and will continue to work in the difficult markets we currently face.