CHAPTER 8
WOBBLY CURVES
The publication of Theory of Games opened the door to a deluge of research that has not abated to this day. Every time one question is answered, a new one arises. One of the first questions that came up was, “Why do people gamble?” Daniel Bernoulli had shown in the eighteenth century that because a person’s utility rises at a diminishing rate, he or she is risk averse and therefore willing to pay a premium in order to avoid risks. This is why people insure their homes, cars, and other belongings. The word premium is well chosen because the payments demanded by insurance companies are higher than the expected losses. The fact that premiums are over and above the actuarial value of any likely losses is what keeps insurance companies viable and profitable.
But there is another aspect to people’s attitude toward risk: Many people like to gamble! This may seem quite surprising indeed. Given the general aversion to risk, who would want to put up money to assume risk? It seems counterintuitive.
Actually, it is not all that surprising. Humans have been gambling since time immemorial. Ancient evidence of men’s propensity toward games of chance includes dice from the third millennium BCE that were unearthed in Mesopotamia (today’s Iraq). An Egyptian tablet from about the same time also seems to indicate gambling. The Greek poet Sophocles mentioned dice in a document from the fifth century BCE. And there is evidence that in China, in the third century BCE, lotteries were used to raise money for a war effort, and later to fund the building of the Great Wall.
In spite of their appeal, games of chance were generally deemed disreputable. The Buddha (c.480–400 BCE) denounced games of chance in his Eightfold Path, although a later Indian thinker, the minister Kautilya (371–283 BCE), not only permitted them but even regulated them. He tasked a superintendent of gambling with the administration of gaming activities—in exchange for a 5 percent share of the winnings.
Early Romans allowed gambling until Justinian I (482–565 CE), the Christian emperor of Byzantium, banned it in his fundamental work on jurisprudence, the Corpus Iuris Civilis. In fact, most religions cast disapproving eyes on the vice of gambling. Notwithstanding such condemnation, however, many churches do not hesitate to raise money via games of chance such as bingo. Likewise, states sponsor lotteries to fund public works and services like education. Indeed, lotteries helped to establish elite institutions like Harvard, Yale, and Princeton.
So we return to the earlier question: Who would put up money to assume risk? Well, there are people who love the adrenaline rush that comes with risky physical activities, like car racing, bungee jumping, rock climbing, off-piste skiing, and the like. But conventional wisdom since Bernoulli’s time has it that human beings will not willingly pay to engage in risky monetary ventures. So why would an ordinary John or Jane Doe put up money to buy a lottery ticket? Obviously, the expected payout is lower than the cost of the ticket. After all, it is the premiums over and above expected winnings that keep gambling outfits in business.
What is even more surprising is the fact that there are people who buy insurance against all kinds of risk and—at the very same time—gamble with their hard-earned spending money. What is going on? A person pays money to avoid risk and simultaneously pays out more money to assume it?
Among the first to weigh in on this conundrum were the economist Milton Friedman and his colleague at the University of Chicago, the statistician Leonard Savage. The year was 1948. The thirty-six-year-old Friedman (figure 8.1) had spent the last years of World War II at Columbia University, working on problems of weapons design, military tactics, and metallurgy; most important, though, he was a rising star in the world of theoretical economics.
FIGURE 8.1: Milton Friedman.
Source: Wikimedia Commons; the Friedman Foundation for Educational Choice
His family hailed from Beregszasz, then in the Hungarian part of the Austro-Hungarian Empire. The town had about 10,000 inhabitants, and most of them were Jewish, including the Friedmans.1 In the late 1890s, two teenagers emigrated separately to the United States, met in New York a few years later, married, and made their home in Brooklyn. This is where Milton was born in 1912. His father was a small-time trader, and his mother worked in a sweatshop. A precocious boy, Friedman entered Rutgers University on a scholarship at age sixteen. He had to augment his meager funds by clerking in a retail store, waiting tables at a restaurant in exchange for free lunch, and working during the summers. Intending to become an actuary, he passed some of the exams and failed others. But then the economics department of the University of Chicago offered him a scholarship to do graduate work, and that made up his mind.
At Chicago, he had the good fortune to pursue his studies with and under a slew of brilliant economists. He also met a shy and withdrawn, but very bright fellow economics student, Rose Director. Six years later, when their fears of the Great Depression had dissipated, they married. After stints at Columbia, where he completed his PhD, the National Bureau of Economic Research (NBER), the U.S. Treasury Department, and the University of Minnesota, he returned to Chicago. He remained there as the economics department’s towering intellect for three decades. Upon his retirement, he continued his research and writing for another three decades at the Hoover Institution at Stanford University, until his death in 2006. In all their years together, Rose would be an active partner with her husband in his professional work.
Friedman was a proponent of monetarism, the theory that money supply influences the national output and can control inflation. He supported free markets, with minimal intervention by government. He advocated freely floating exchange rates, school vouchers, a volunteer army, and the abolition of medical licenses.2 He was the undisputed leader of the department of economics at Chicago. Like Austrian Economics, the moniker Chicago School became a mark of distinction. In 1976, Friedman was awarded the Nobel Prize in Economics,3 and in 1988, he received both the Presidential Medal of Freedom and the National Medal of Science.
Five years younger, Friedman’s coauthor of the paper that I shall discuss here was Leonard Savage (figure 8.2), universally known as “Jimmie” (his middle name). His grandparents’ family name had been Ogushevitz, which Jimmie’s father changed to the more familiar, if slightly ferocious-sounding, Savage.
FIGURE 8.2: Leonard Savage.
Source: Leonard Jimmie Savage Papers (MS 695). Manuscripts and Archives, Yale University
At school, Jimmie did not stand out academically. On the contrary, his teachers thought him feebleminded. But that was due to the boy’s very bad eyesight. “He paid no attention to what was going on in school because he couldn’t see what was going on in school,” his brother recounted in an interview.4 The fact that the boy was actually brilliant, and most probably bored by the humdrum of his classes, must have exacerbated his teachers’ misconception.
Savage’s most noted work was the book The Foundations of Statistics, published in 1954. Influenced by Frank Ramsey’s groundbreaking work (see chapter 6) and von Neumann and Morgenstern’s game theory (see chapter 7), he proposed a theory not only of subjective utility but also of personal probability, based on one’s degree of conviction. The theory was based on a set of axioms, one of which was the apparently innocuous “Independence of Irrelevant Alternatives” axiom of von Neumann and Oskar Morgenstern. As we shall see in chapter 10, this axiom cast a dark cloud over the entire theory of expected utility.
It was at the University of Chicago that Friedman, the professor of economics, and Savage, then a research associate in statistics, collaborated on a paper that would become famous. Just a year earlier, the second edition of Theory of Games had been published, with the all-important proof that adherence to the von Neumann‒Morgenstern axioms is a necessary and sufficient condition for a person to possess a utility function.
We know that whenever a decision-maker has to choose among several riskless alternatives, he will choose the one that maximizes the payout,5 which in this case is the same thing as maximizing utility. But in the presence of riskiness, it is the expected utility of the payout that is the factor to be maximized…which is not the same thing as the utility of the expected payout. Because marginal utility of money diminishes as wealth increases, maximizing the expected utility is no longer equivalent to maximizing the expected payout. And because marginal utility of money diminishes for everybody, and diminishing marginal utility is equivalent to risk aversion, it is obvious that everybody must avoid risk. Or is it?
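The distinction is easy to check with a few lines of arithmetic. Here is a minimal sketch in Python; the logarithmic utility function and the dollar amounts are purely illustrative assumptions, but any curve with diminishing marginal utility tells the same story.

```python
import math

# Illustrative assumption: logarithmic utility, one simple example of a curve
# whose marginal utility diminishes as wealth grows.
def utility(wealth):
    return math.log(wealth)

current_wealth = 10_000
# A fair coin toss: gain or lose $5,000 with equal probability.
outcomes = [current_wealth + 5_000, current_wealth - 5_000]
probabilities = [0.5, 0.5]

expected_payout = sum(p * w for p, w in zip(probabilities, outcomes))
utility_of_expected_payout = utility(expected_payout)          # ln(10,000) ≈ 9.2103
expected_utility_of_payout = sum(
    p * utility(w) for p, w in zip(probabilities, outcomes))   # ≈ 9.0665

print(f"Utility of the expected payout: {utility_of_expected_payout:.4f}")
print(f"Expected utility of the payout: {expected_utility_of_payout:.4f}")
# Because the curve is concave, the expected utility falls short of the utility
# of the expected payout: the decision-maker turns down even this fair gamble.
```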
After reviewing all kinds of behavior that reveal human beings’ propensity for taking risks, Friedman and Savage concluded that “it turns out that these empirical observations are entirely consistent with the [von Neumann‒Morgenstern] hypothesis if a rather special shape is given to the total utility curve of money.” Now, what is this “rather special shape”?
So far, we have repeated again and again, both in this chapter and in the previous ones, that marginal utility for wealth decreases as wealth increases: Utility for wealth rises, but does so less and less as wealth increases. (In other words, the utility curve’s slope becomes flatter and flatter.) After all, as already stipulated, a second scoop of ice cream provides less pleasure than the first, and an additional dollar provides less utility to a millionaire than to a pauper. Depicted on graph paper, the curve of the utility function rises, but it bends downward or, in technical terms, the utility function is concave. But maybe this is not so throughout the wealth spectrum? Maybe there are pockets of wealth, somewhere between one dollar and a million dollars, where the marginal utility for wealth increases?
This is exactly what Friedman and Savage claimed. For example, they argued, an additional chunk of money may allow a member of the working poor to jump into the middle class. Now, that would be really worthwhile. This chunk may change his life. Hence, while one additional dollar may be all but unnoticeable, 10,000 additional dollars may offer more than 10,000 times the utility of a single additional dollar. The implication is that in this region of wealth, marginal utility increases. And that, of course, implies that the individual would be willing to engage in a gamble even if the odds were to his disadvantage. (By the way, recall in this context that von Neumann and Morgenstern did not stipulate that marginal utility must decrease, i.e., that the slope of the utility curve must become flatter.)
Consider a woman, call her Sheila, whose current wealth of $150,000 puts her in such a convex pocket of the curve. A sizable windfall could lift her into a higher social stratum, so she is willing to buy lottery tickets even though the odds are stacked against her; a major loss, on the other hand, would drop her deep into the concave, low-wealth portion of the curve, where every dollar matters dearly, so she gladly pays a premium to insure against it. If, against all odds, she does win the lottery and becomes a quarter-millionaire, she will protect her newfound status by being risk averse again. Hence, beyond a quarter of a million dollars, her utility function is again concave (see figure 8.3). What all this means is that at her current wealth, $150,000, she is willing both to purchase lottery tickets and to buy insurance for her home. Paradox resolved!
FIGURE 8.3: The utility function of Sheila’s wealth.
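The interplay of the three regions becomes concrete with a little computation. The Python sketch below builds a stylized Friedman-Savage-type curve by accumulating a made-up marginal-utility function that falls, rises again between roughly $120,000 and $250,000, and then falls once more. The functional form, the dollar amounts, and the lottery and insurance terms are all invented for the illustration; they are not taken from Friedman and Savage’s paper. Still, at a wealth of $150,000 such a curve makes both an actuarially unfair lottery ticket and an actuarially unfair insurance policy worthwhile.

```python
import math

# Hypothetical marginal utility: it falls at low wealth, rises again on the
# approach to $250,000, and then falls once more.  Purely an illustrative choice.
def marginal_utility(w):
    return math.exp(-w / 50_000) + math.exp(-((w - 250_000) / 70_000) ** 2)

# Utility is the accumulated marginal utility from zero up to the given wealth
# (a simple midpoint-rule numerical integration).
def utility(wealth, step=100):
    n = int(wealth // step)
    return sum(marginal_utility((i + 0.5) * step) * step for i in range(n))

wealth = 150_000   # Sheila's current wealth, inside the convex stretch of the curve

# An actuarially unfair lottery: a $100 ticket buys a 1-in-2,500 chance of a
# $200,000 prize, so the expected winnings are only $80.
ticket, prize, p_win = 100, 200_000, 1 / 2_500
eu_without_ticket = utility(wealth)
eu_with_ticket = (p_win * utility(wealth - ticket + prize)
                  + (1 - p_win) * utility(wealth - ticket))
print("Buys the lottery ticket:", eu_with_ticket > eu_without_ticket)   # True

# Actuarially unfair insurance: a 1-in-200 chance of a $140,000 loss, insured
# away for an $800 premium even though the expected loss is only $700.
premium, loss, p_loss = 800, 140_000, 1 / 200
eu_insured = utility(wealth - premium)
eu_uninsured = p_loss * utility(wealth - loss) + (1 - p_loss) * utility(wealth)
print("Buys the insurance:", eu_insured > eu_uninsured)                 # True
```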
The theory that Friedman and Savage put forth explains all sorts of risky behaviors, not just lottery gambling and betting in casinos; investment decisions, occupational choices, and entrepreneurial undertakings are also covered. One riddle continued to puzzle them, however: “Is it not patently unrealistic,” they asked themselves, “to suppose that individuals consult a wiggly utility curve before gambling or buying insurance, that they know the odds involved in the gambles and insurance plans open to them, that they can compute the expected utility of a gamble or insurance plan, and that they base their decision on the size of the expected utility?”
Yes, that would be patently unrealistic; surely, nobody inspects his or her utility function and performs complex calculations before making a decision. However, the objection is not relevant, according to Friedman and Savage. What is relevant, they claim, is that decision-makers behave as if they inspected their utility functions, as if they knew the odds, and as if they calculated the expected utilities. The validity of a theory depends solely on whether it yields sufficiently accurate predictions about the class of decisions with which the hypothesis deals. It is, they say, as if a billiard player knew the equations of elastic collisions, could estimate the angles accurately by eye, make lightning calculations, and then execute the shot. The proof of the pudding is in the eating. And on that count, wiggly utility curves do very well.
*   *   *
Soon enough, however, a graduate student at Chicago spotted an inconsistency. Consider a person of medium wealth, call her Alberta, who according to the theory sits in the convex stretch of the curve and is offered a fair gamble that would leave her either rich or poor. In the Friedman-Savage framework, Alberta, willing to take risks at her current wealth, should jump at the occasion. But in practice, no sane person of medium wealth would ever participate in a fair gamble that would render her either rich or poor, claimed the student. Why should she? Just to end up, on average, where she started? But in Friedman and Savage’s world, Alberta would love such gambles. So, the theory needed further development.
The graduate student’s name was Harry Markowitz. Born in 1927 in Chicago, the only child of Morris and Mildred Markowitz, he grew up during the Great Depression. His parents owned a small grocery store and, being in the food and dry goods business, were not severely affected by the general despair that had gripped the country. There was always enough to eat, and the boy had his own room. He enjoyed baseball and tag football, played the violin in the high school orchestra, and read a lot.
At college, he was inclined toward philosophy and physics. But upon graduating from the University of Chicago after just two years, with a bachelor’s degree in the liberal arts, he decided on economics. He was interested in microeconomics and macroeconomics, but what really fascinated him was the economics of uncertainty. He read Theory of Games and became engrossed with von Neumann and Morgenstern’s arguments concerning expected utility, Friedman and Savage’s utility function, and Savage’s ideas about personal probability. At Chicago, he had the good fortune of counting Friedman and Savage among his teachers.
Once he had become aware of the inconsistency in the Friedman-Savage paper, Markowitz felt the need to dig further. He did a survey among his friends, asking them the following questions:
➢   Do you prefer to get 10 cents for sure, or one chance in ten of getting $1?
➢   Do you prefer to get $1 for sure, or one chance in ten of getting $10?
➢   Do you prefer to get $10 for sure, or one chance in ten of getting $100?
➢   Do you prefer to get $100 for sure, or one chance in ten of getting $1,000?
➢   Do you prefer to get $1 million for sure, or one chance in ten of getting $10 million?
Markowitz then asked the questions in the other direction:
➢   Do you prefer to owe 10 cents for sure, or one chance in ten of owing $1?
➢   Do you prefer to owe $1 for sure, or one chance in ten of owing $10?
➢   Do you prefer to owe $10 for sure, or one chance in ten of owing $100?
➢   Do you prefer to owe $100 for sure, or one chance in ten of owing $1,000?
➢   Do you prefer to owe $1 million for sure, or one chance in ten of owing $10 million?
In general, people preferred to owe 10 cents for sure, rather than one chance in ten of owing a dollar, and owe one dollar for sure rather than one chance in ten of owing $10. Thereafter, opinions differed again, and when he got to the biggie, “the individual generally will prefer one chance in ten of owing $10,000,000 rather than owing $1,000,000 for sure.”8
What Markowitz’s survey demonstrated was that his friends behaved differently depending on whether they were about to get something or about to owe something. When confronted with potential receipts, they took risks for small amounts but were risk averse for large amounts. When confronted with potential losses, though, they avoided risks for small amounts but took risks for large amounts.
What does that mean for the shape of the utility curve? Friedman and Savage had posited three regions in their paper—concave for low wealth, convex for medium wealth, and concave again for large wealth. Markowitz added a further wobble—he fixed the person’s current wealth in the middle, at the cusp between the concave and the convex regions (see figure 8.4). To the right of the current wealth, for gains, the utility curve is, at first, convex. To the left, for losses, it is, at first, concave. But farther out, there are two more regions: For very large gains, the curve becomes concave again, and for very large losses, it becomes convex.
FIGURE 8.4: Friedman’s wobbles, Markowitz’s wobbles.
These facts were borne out, Markowitz claimed, by “a common observation that, in card games, dice games, and the like, people play more conservatively [i.e., are risk averse] when losing moderately, more liberally [i.e., will take more risk] when winning moderately.”
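For the computationally inclined, the pattern can be reproduced with a toy utility function for gains and losses. In the Python sketch below, the bounded S-shaped curve for gains, its mirror image for losses, and the parameters are all invented for illustration and do not come from Markowitz’s paper; they merely produce the four regions just described and, with them, the answers his friends gave.

```python
import math

# Stylized utility of *changes* in wealth, measured from current wealth.
# Gains follow a bounded S-curve that is convex below the (assumed) inflection
# point c and concave above it.
def gain_utility(y, c=50_000, s=20_000):
    return 1 / (1 + math.exp((c - y) / s)) - 1 / (1 + math.exp(c / s))

# Reflecting the gain curve through the origin yields a loss branch that is
# concave for moderate losses and convex for very large ones.
def utility_of_change(x):
    return gain_utility(x) if x >= 0 else -gain_utility(-x)

# Does a 1-in-10 gamble beat the sure amount in expected utility?  (The 9-in-10
# chance of getting or owing nothing contributes zero, since utility_of_change(0) == 0.)
def prefers_gamble(sure_amount, gamble_amount, p=0.1):
    return p * utility_of_change(gamble_amount) > utility_of_change(sure_amount)

print(prefers_gamble(100, 1_000))                # True:  gambles on a modest gain
print(prefers_gamble(1_000_000, 10_000_000))     # False: locks in the large sure gain
print(prefers_gamble(-100, -1_000))              # False: accepts the small sure loss
print(prefers_gamble(-1_000_000, -10_000_000))   # True:  gambles to avoid the ruinous sure loss
```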
By modern standards, Markowitz’s paper is not very rigorous. The lack of real-life verification of his propositions—apart from a cursory poll among his friends—made it a dubious candidate for publication.9 Apparently, Markowitz himself was aware of that, as he revealed in the paper’s closing paragraph: “It may be objected that the arguments in this paper are based on flimsy evidence…I realize I have not demonstrated ‘beyond a shadow of a doubt’ the ‘truth’ of the hypotheses introduced. I have tried to present, motivate, and to a certain extent, justify and make plausible a hypothesis which should be kept in mind when explaining phenomena or designing experiments concerning behavior under risk or uncertainty.” As justifications go, such a message to referees and readers should have raised red flags. Nevertheless, utility functions shaped as Friedman and Savage suggested, or as Markowitz proposed, do seem to be the answer to the insuring/gambling paradox.
*   *   *
In any case, the “Utility of Wealth” paper was only a sideline for Markowitz. The paper that truly and justifiably made him famous, “Portfolio Selection,” was published the same year in the Journal of Finance.10 While still only a doctoral student, Markowitz decided to apply mathematical methods to the stock market. The idea had come up during a chance conversation with colleagues. According to conventional wisdom at the time, the value of a stock is the present, discounted value of its future dividends. But because future dividends are uncertain, Markowitz interpreted this to mean that the stock’s value is determined by the present, discounted value of its expected future dividends. “But if the investor were only interested in expected values of securities,” Markowitz pointed out, “he or she would only be interested in the expected value of the portfolio; and to maximize the expected value of a portfolio, one need invest only in a single security. This, I knew, was not the way investors did or should act. Investors diversify because they are concerned with risk as well as return.”11
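The logic behind that remark can be checked with a toy simulation. In the Python sketch below, the two securities and their return figures are invented for the illustration: each offers the same expected return, and their fortunes are independent of each other.

```python
import random
import statistics

random.seed(1)  # reproducible draws

# Two hypothetical securities: each returns 5 percent on average with a
# 20 percent standard deviation, drawn independently of the other.
n = 100_000
returns_a = [random.gauss(0.05, 0.20) for _ in range(n)]
returns_b = [random.gauss(0.05, 0.20) for _ in range(n)]

single_security = returns_a                                        # everything in one security
fifty_fifty = [(a + b) / 2 for a, b in zip(returns_a, returns_b)]  # half in each

print(f"Mean return, single security: {statistics.mean(single_security):.4f}")
print(f"Mean return, 50/50 portfolio: {statistics.mean(fifty_fifty):.4f}")
print(f"Risk (std. dev.), single security: {statistics.stdev(single_security):.4f}")
print(f"Risk (std. dev.), 50/50 portfolio: {statistics.stdev(fifty_fifty):.4f}")
# The average returns are statistically identical, but the diversified portfolio
# fluctuates less, by a factor of roughly the square root of two; spreading the
# money over more independent securities would reduce the risk further.
```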
Thus was born modern portfolio theory. The qualifier modern is required here because we saw in chapter 1 that the idea of diversification had already been proposed two centuries earlier by Daniel Bernoulli. For this paper, and for his many subsequent contributions to the development of financial economics, Markowitz was awarded the Nobel Prize in Economics in 1990 (figure 8.5).
FIGURE 8.5: Harry Markowitz receiving the Nobel Prize.
Courtesy of the Harry Markowitz Company