4
Beating the Dealer
THE YEAR IS 1961. The place, Las Vegas. It’s a Saturday night in the middle of June. The temperature is hovering around 100 degrees even though the sun has already set. Inside the casinos, no one cares. Vegas is at the height of its postwar golden age. A dozen world-class resorts, the first of their kind, line the nascent Strip, from the Sahara in the north to the Tropicana in the south. The loud, smoky casino floors are packed with tourists from across the country hoping to get lucky at the tables, or at least ogle some celebrities. This is the Vegas of the original Ocean’s Eleven, the Vegas of Michael Corleone, the Vegas James Bond visits in Diamonds Are Forever. The Vegas of Elvis and the Rat Pack, Liberace and the Marx Brothers.
A slender man with a crewcut, just shy of thirty, is sitting at a roulette table. He stares straight ahead, his face impassive behind a pair of horn-rimmed glasses. The crowd packs in around him, boisterously throwing chips at the table. But he ignores them. He looks intent, deeply focused, though on what is unclear. The minutes tick by and the crowd starts to wonder if he’s forgotten about the game. Then, at the last possible moment, he places his chips on seemingly random spots on the board. For one round, it’s black 29, red 25, black 10, red 27. On the next, it’s black 15, red 34, black 22, and red 5. To the people around him he seems crazy. Roulette players often have systems, but they’re consistent, like lottery players: you bet your birthday, or your girlfriend’s phone number. Or, if you like a safer bet, you play a color. But this guy’s bets keep changing, as though someone is whispering the future into his ear. Whatever he’s doing, it doesn’t seem quite right. Especially because he’s winning. A lot.
His name is Edward Thorp. Today he is one of the most successful hedge fund managers in history. In June 1961, he was only a few years out of graduate school. He had just been hired as an assistant professor of mathematics at New Mexico State University. In graduate school he had specialized in the mathematics of quantum physics. But Thorp was also fascinated by games. He was particularly interested in strategy games: blackjack, poker, baccarat. Even the ancient Chinese game Go. But on that sweltering Vegas night back in 1961, he was playing roulette. This was odd, because the results of spinning a roulette wheel should be perfectly random. Each spin is independent of the spin before and the spin after. There’s no place for strategy.
Back at the roulette table, a man and a woman walk past Thorp, gulping whiskey sours. A cheer goes up at another table as someone from Des Moines wins big. Distracted for a moment, Thorp looks up — just in time to catch a look of horror from the woman next to him. Thorp’s hand shoots to his ear. Attracted by the movement, a few bystanders glance in his direction and catch a glimpse of . . . what is that? An earpiece? Thorp is already on his feet, gathering his chips and stuffing them into his pockets with one hand while his other hand remains pinned to his ear. He pushes his way out of the crowd and hurries toward the street.
We’ve seen how Bachelier and Osborne used insights from physics to propose that markets can be understood in terms of a random walk, and how Mandelbrot refined that idea. Their work revolutionized the study of financial markets, once economists came to appreciate it. But all three were firmly ensconced in academia. Bachelier worked at the Bourse, but there’s no evidence that he put his ideas to any use there, and he certainly never made much money. Osborne may have turned to finance in an attempt to feed his family, but he ultimately concluded that there was no profit to be had in speculating on the unrelieved bedlam of financial markets. Mandelbrot, too, seems to have avoided trading.
Certainly ideas introduced by Bachelier, Osborne, and Mandelbrot percolated through economics departments and affected how traders thought about financial markets. For instance, the 1973 book A Random Walk Down Wall Street, by Princeton economist Burton Malkiel, has become a classic among investors of every stripe; it owes a great deal to Osborne in particular, though this influence is largely uncredited.
But the introduction, and subsequent sharpening, of the random walk hypothesis is only part of the story of how physicists have shaped modern finance. Physicists have been equally, or even more, influential in their role as practitioners. Ed Thorp is a prime example. He accomplished what Bachelier and Osborne never could: he showed that physics and mathematics could be used to profit from financial markets. Building on the work of Bachelier and Osborne, and on his own experience with gambling systems, Thorp invented the modern hedge fund — by applying ideas from a new field that combined mathematical physics and electrical engineering. Information theory, as it’s known, was as much a part of the 1960s as the Vegas Strip. And in Thorp’s hands, it proved to be the missing link between the statistics of market prices and a winning strategy on Wall Street.
Thorp was born at the peak of the Depression, on August 14, 1932. His father was a retired army officer, a veteran of the First World War. When Thorp was born, his father was fortunate enough to have found work as a bank guard, but money was still tight and the young Thorp developed an early instinct for thrift and financial savvy. He realized he could buy a packet of Kool-Aid mix for a nickel but could make six glasses with each packet. So he sold glasses of cold Kool-Aid to WPA workers for a penny each. He bet a storekeeper that he could add up a tab in his head faster than the cash register and won himself an ice cream cone. An older cousin showed him that the slot machines at his local gas station were rigged so that if you jiggled the handle right, they would pay out.
When World War II began, the Thorps headed west to find work in defense manufacturing. They settled down in Lomita, California, just south of Los Angeles. Both parents took jobs, leaving Thorp to fend for himself. It was around this time that he discovered something even more exciting than betting on his quick head: blowing stuff up. He started with a children’s chemistry set, a gift from his parents, and ultimately set up a junior mad scientist’s lab in the garage. While his parents helped with the war effort, Thorp was building pipe bombs and blowing holes in the sidewalk with homemade nitrocellulose. Later his tinkering would expand to include playing with telescopes and electronics, including ham radios.
Thorp’s boyhood penchant for explosives belied a deep fascination with the science behind his experiments, and along the way he learned a considerable amount of chemistry and physics. In 1948, at the end of his sophomore year in high school, Thorp signed up to take an All Southern California test in chemistry, competing for a scholarship to the University of California. When he told his chemistry teacher of his plan, the teacher was dubious. Thorp was over a year younger than the other competitors, who were preparing for college. But after the teacher gave Thorp a practice exam, he was convinced. Thorp didn’t know everything, but he had clear aptitude. Thorp’s teacher recommended three books for Thorp to read and gave him a stack of practice tests to work on over the summer.
When the test results came back, Thorp learned that he had come in fourth overall. The results were remarkable, but he knew he could do better. The version of the test he took included a new section that hadn’t been on the previous year’s test, and it had called for a slide rule. Thorp had a ten-cent slide rule, small and poorly machined. The numbers didn’t always line up correctly, introducing errors in Thorp’s calculations. Thorp was convinced that if he’d had a proper slide rule, he would have won the competition. The problem was that he couldn’t take the chemistry test again. So the following year he signed up for the corresponding test in physics. This time he came in first and won the scholarship, which paid his way through UCLA. He’d successfully parlayed backyard explosives into college tuition.
Since it was physics rather than chemistry that had gotten Thorp to UCLA, he decided to make it his major. Four years later, he stayed on for graduate school. Thorp loved his studies, but graduate school wasn’t a natural choice for him, given his lack of means. If not for the scholarship competition, it’s unlikely that he would have been able to afford college. And now, when he was twenty-one, money was as big an issue as ever. Thorp mustered a budget of $100 a month — about $850 in 2012 dollars — half of which went immediately to rent. Strapped for cash, Thorp began scheming about ways to make a little extra money on the side, à la his childhood exploits.
It was a conversation on just this topic — how to make extra money without much work — that first got Thorp thinking about roulette. It began as a debate at the UCLA Cooperative Housing Association dining room in the spring of 1955, just as Thorp was preparing to finish his master’s degree in physics. The first Las Vegas casinos had just begun to open, and gambling was a hot topic. One of Thorp’s friends suggested that gambling was a good way to get rich quick. The problem, someone else pointed out, was that you usually lose. After a discussion of whether it was possible to get an advantage at various games (that is, improve your chances so you win more often than you lose), roulette came up. Most of Thorp’s colleagues argued that roulette was a terrible choice for a get-rich-quick scheme. Maybe if the wheel had something wrong with it, certain numbers would come up more often than others. But the wheels at big casinos, like the ones in Las Vegas or Reno, were made so precisely that you could never find an imperfection to exploit. Roulette wheels were as close to random as you could get, and without some special trick, the odds were against you.
Thorp didn’t disagree with the premise. But he thought the conclusion was wrong. After all, he reasoned, physicists are good at predicting how things like wheels behave. If a roulette wheel really is perfect, well, then shouldn’t normal high school physics be enough to predict where a ball starting at such and such a place, rolling around a wheel spinning with such and such velocity, would land? You don’t need quantum physics or rocket science to figure out how balls roll around wheels. The fact that roulette wheels are so perfectly manufactured could only help: there aren’t going to be small imperfections in the wheel that might throw off your calculations, and each wheel should be pretty similar to every other.
To test his hypothesis, Thorp started doing experiments. He did a few calculations and then bought a cheap, half-size wheel and filmed a ball going around it so he could watch, frame by frame, how it behaved. Meanwhile, he thought about how to put his idea to use. Major casinos accept bets even after the ball is moving, so in principle it’s possible to know the initial speed and position of the wheel and ball, which ought to be all you need to calculate where the ball will land, before you make your bet. He fantasized about building a machine that could quickly make the necessary calculations. But he didn’t get very far. Vegas roulette wheels might be flawless, but the toy wheel he bought was a piece of junk. Watching the films convinced him that the wheel was useless for his experiments; professional wheels, meanwhile, cost well over $1,000 — an impossible investment for an impoverished grad student.
So Thorp gave up on roulette, at least for a while. After finishing his master’s degree, he began working on his doctorate, again in physics. He quickly realized, however, that his mathematical background wasn’t sufficient to tackle the newest topics. He made a list of the courses he would need to take, most of which were in a then-burgeoning field known as functional analysis, and discovered that if he took them all, he’d have enough for a PhD in mathematics, while his work on physics would have just begun. And so he switched to math. All the while, his ideas about the physics of roulette spun around in his mind. He was sure that with the right resources — a professional roulette wheel and some computer know-how — he could strike it rich.
Soon after finishing his PhD, Thorp was awarded the prestigious C.L.E. Moore instructorship in mathematics at MIT — a position held a decade earlier by John Nash, the pioneering mathematician profiled by Sylvia Nasar in her book A Beautiful Mind. Thorp and his wife, Vivian, left Southern California and moved to Cambridge, Massachusetts. They spent only two years on the East Coast before moving back west, to New Mexico. But it was enough to set their lives on a different track: it was at MIT that Thorp met Claude Shannon.
Shannon may be the only person in the twentieth century who can claim to have founded an entirely new science. The field he invented, information theory, is essentially the mathematics behind the digital revolution. It undergirds computer science, modern telecommunications, cryptography, and code-breaking. The basic object of study is data: bits (a term Shannon coined) of information. The study of things such as how light waves move through air or how human languages work is very old; Shannon’s groundbreaking idea was that you could study the information itself — the stuff that’s carried by the light waves from objects in the world to your retinas, or the stuff that passes from one person to another when they speak — independently of the waves and the words. It is hard to overstate how important this idea would become.
Information theory grew out of a project Shannon worked on during World War II, as a staff scientist at Bell Labs, AT&T’s research division in Murray Hill, New Jersey. The goal of the project was to build an encrypted telephone system so that generals at the front could safely communicate with central command. Unfortunately, this was hard to do. There is only one code system that can be mathematically proven to be unbreakable. It’s called a one-time pad. Suppose you start with a letter that you want to send to your friend but that you don’t want anyone else to read. Say the letter has 100 characters in it, including spaces. To protect the letter with an unbreakable code, you need to come up with a random list of 100 numbers (corresponding to the number of characters in the letter) called a key, and then “add” these numbers to the characters in the letter. So if the first character in the letter is D (for “Dear John,” say), and the first number in your random list is 5, you want to add 5 to D by moving down the alphabet by five letters. So you write down I as the first letter of the coded message. And so on. In order to decrypt the letter, your friend needs to have a copy of the key, too, which can then be used to subtract the right number from each letter and recover the original message. If the key is really random, there’s no way to decrypt the encoded message without access to the key, since the randomness of the key will wash out any patterns in the original message.
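The mechanics of a one-time pad are simple enough to sketch in a few lines of code. The snippet below is only an illustration of the scheme just described, not anything Bell Labs built: it shifts each letter of the message forward by the corresponding random key number, and decryption simply shifts each letter back.

```python
import random
import string

ALPHABET = string.ascii_uppercase  # work with the letters A through Z

def encrypt(message, key):
    """Shift each letter forward by the corresponding key number (wrapping around at Z)."""
    out = []
    for ch, k in zip(message, key):
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + k) % 26])
        else:
            out.append(ch)  # leave spaces as they are, for readability
    return "".join(out)

def decrypt(ciphertext, key):
    """Undo the shift by subtracting the same key numbers."""
    return encrypt(ciphertext, [-k for k in key])

message = "DEAR JOHN"
key = [random.randrange(26) for _ in message]  # one random number per character: the pad

coded = encrypt(message, key)
print(coded)                # gibberish to anyone without the key
print(decrypt(coded, key))  # DEAR JOHN
```

If the first key number happens to be 5, the D becomes an I, exactly as in the example above; a different random key would turn it into something else entirely.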
A one-time pad such as I just described can be tricky in practice because the sender and the receiver have to have the same random keys. But in principle, the idea is simple. It gets more complicated when you try to implement the idea of a one-time pad for a telephone conversation. Now there are no letters to add a number to or subtract a number from. There are sounds, and what’s more, the sounds are transmitted over long distances by a wire (or at least they were in 1944), which means that anyone who can gain access to the wire, at any point between the generals in the field and their home base, can listen in on the conversation.
The Bell Labs team realized that the essence of the one-time pad was the fact that patterns in the “signal,” the message being transmitted, get lost amid the randomness of the “noise” — the key consisting of random numbers. So you need to take whatever medium is being used to carry the message (in this case sound) and add something to it that’s totally random so that you can’t make out any of the message-bearing patterns. In a telephone conversation, the word noise isn’t a metaphor. Imagine trying to talk to someone with a loud vacuum cleaner running in the background. You wouldn’t be able to make out much, if anything, of what the person was trying to say. This is the principle behind SIGSALY, the system that Shannon and his collaborators invented. If you add enough noise to whatever your general is saying, you can make it incomprehensible. Meanwhile, if you have access to a recording of the exact same random noise on the other side of the message, back in Washington, you can “subtract” it from the coded message to recover the original voice. Implementing the system was an engineering marvel: signal processing of the sort necessary to remove noise from a telephone line, even if you knew exactly what the noise sounded like, was only at its earliest stages. But Shannon and his team figured out how to make it work. SIGSALY devices were built at the Pentagon for Roosevelt, in Guam for MacArthur, in North Africa for Montgomery, and in the basement of Selfridges department store in London for Churchill.
Thinking about the relationship between a signal and noise led Shannon to his most important insight — the basic idea underlying all of information theory and, by extension, the information revolution. Suppose you’re driving on the highway, having a conversation with the person in the passenger seat. You’re chatting away, and then an eighteen-wheeler passes by, and for a moment your passenger can hear only every other word you say because the truck is so loud. Will the passenger figure out what you were trying to say? It depends. Maybe you’ve just gotten started on your regular rant about traffic in Los Angeles. You complain about it constantly, so your friend knows the riff by heart. Just a few words — maybe “construction” or “bad drivers,” plus an obscenity or two — would be enough to transmit the full force of your views on traffic. In fact, the passenger could be a complete stranger; no one likes traffic, and so a keyword here or there would be sufficient to get your message across. But what if you were trying to explain the details of a new film you just saw? Then every word could be important. Your passenger would have little idea what to make of it if all he could hear was “The lead — was — in the green — .”
Shannon concluded that the amount of information carried by a signal depends on how unpredictable the signal is: the easier it is for the receiver to guess the message in advance, the less information it carries. Your rant on traffic doesn’t contain much information — it’s easy to predict; your film synopsis contains more. This is the essence of Shannon’s information theory.
Perhaps the easiest way to see why this way of looking at information makes sense is to turn Shannon’s picture around. Information is the kind of thing that takes you from feeling not so sure about something to feeling more sure about it. If you gain information, you learn something about the world. Now imagine two cases. Suppose you begin by thinking that the Yankees have a great chance of winning half their games in any given year, but that there’s very little chance that there are aliens living on the moon. Shannon’s essential insight could be put as follows: if you were to learn, as in become absolutely certain, that there are aliens living on the moon, you would have gained a lot more information than if you were to learn that the Yankees have won more than half their games this year. The reason? In Shannon’s terms, it’s that the probability of there being aliens on the moon is much, much lower than that of the Yankees (or any other team) winning half their games. This connection between the probability of a message and the information contained in the message provides the crucial link needed to quantify information. In other words, by connecting information with probability, Shannon discovered a way to assign a number to a message that measures the amount of information it contains, which in turn was the first major step in building a mathematical theory of information.
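Shannon made this link quantitative: the information content of a message, measured in bits, is the negative base-2 logarithm of its probability, so the less likely the message, the more bits it carries. A quick illustration, with probabilities invented purely for the example:

```python
from math import log2

def information_bits(probability):
    """Shannon's self-information: rarer messages carry more bits."""
    return -log2(probability)

# Illustrative numbers only.
p_yankees_win_half = 0.5   # a fairly ordinary outcome for a season
p_moon_aliens = 1e-9       # pick your own very small number

print(information_bits(p_yankees_win_half))  # 1.0 bit
print(information_bits(p_moon_aliens))       # about 30 bits: far more informative
```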
The invention of information theory turned Shannon into an overnight sensation, at least in the worlds of electrical engineering, mathematics, and physics. The applications proved to be endless. He stayed at Bell Labs for another decade after the war, before he moved to MIT in 1956.
Thorp arrived in Massachusetts in 1959, just a year out of graduate school. By then, Shannon held an endowed chair, with dual appointments in the mathematics and electrical engineering departments. His most important work had already been published and its influence was spreading rapidly. By the late 1950s, he was an academic rock star. Already famously eccentric, Shannon was now powerful enough to dictate his own terms to MIT: whom he would meet with, what he would teach, how much time would be devoted to research. He was not the kind of man whose office you would casually stick your head into — especially if you were just a lowly instructor. To meet Shannon, Thorp needed an appointment. And to get an appointment, he needed something worth talking about; as Shannon’s secretary would later inform Thorp, Professor Shannon didn’t “spend time on topics (or people) that didn’t interest him.”
Fortunately, Thorp had a topic that would entice Shannon. A few months before moving to Massachusetts, the Thorps had visited Las Vegas for the first time. They chose Vegas because they expected it to be a bargain: close to Los Angeles, plenty of inexpensive hotels, a lot to see and do. Plus, Thorp thought, he’d have a chance to scope out professional-level roulette wheels. But as it turned out, roulette wasn’t Thorp’s principal interest on this trip. Shortly before the young couple left for their vacation, a colleague passed along a recent academic article from the Journal of the American Statistical Association. It concerned the game of blackjack, or twenty-one.
As far as casino games go, blackjack is old — older, even, than roulette. Cervantes, the author of Don Quixote, used to play a variation in Spain in the early seventeenth century and wrote stories in which his characters became proficient at cheating. The game is typically played with one or more standard decks of cards. You start by placing your bet. The game begins with each player (including the dealer) being dealt two cards, and then players have a chance to ask for additional cards until they decide they’ve had enough or they “bust,” which happens if their cards sum to more than twenty-one points. Number cards are worth their face value; face cards are worth ten points; and an ace can be worth either one point or eleven points, at the player’s discretion. The goal is to have the highest number of points without going over twenty-one. At a casino, each player is competing individually against the dealer, who represents the house. The goal, then, is to beat the dealer without busting. If you win, the game pays a dollar for every dollar you bet unless your initial two cards add up to twenty-one. In that case, the game pays $1.50 for every dollar you bet.
Casinos always employ the same strategy. The dealer has to take a new card as long as his total number of points is less than seventeen. If it’s seventeen or more, the dealer stops. And if the dealer busts, everyone wins. The twist, at least in a casino, is that although the players’ cards are all dealt face up, one of the dealer’s cards is dealt face-down, so the players do not get to see it until the end of the game. Not knowing what you’re up against makes it more difficult to know when to stop asking for new cards.
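Those rules are mechanical enough to write down directly. The sketch below scores a hand, counting an ace as eleven whenever that doesn't bust it, and plays out the fixed house strategy of drawing until the total reaches seventeen. It is a bare-bones illustration that ignores splits, doubling down, and the other options available at a real table.

```python
import random

def hand_value(cards):
    """Best blackjack value of a hand. Cards are 2-10, 'J', 'Q', 'K', or 'A'."""
    total, soft_aces = 0, 0
    for card in cards:
        if card == "A":
            total += 11
            soft_aces += 1
        elif card in ("J", "Q", "K"):
            total += 10
        else:
            total += card
    while total > 21 and soft_aces:  # count an ace as 1 instead of 11 if needed
        total -= 10
        soft_aces -= 1
    return total

def dealer_plays(upcard, hole_card, draw):
    """House rule: keep drawing below seventeen, then stand."""
    hand = [upcard, hole_card]
    while hand_value(hand) < 17:
        hand.append(draw())
    return hand

deck = [2, 3, 4, 5, 6, 7, 8, 9, 10, "J", "Q", "K", "A"] * 4
random.shuffle(deck)
dealer_hand = dealer_plays(deck.pop(), deck.pop(), deck.pop)
print(dealer_hand, hand_value(dealer_hand))  # the dealer busts if this exceeds 21
```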
Casinos have run blackjack tables for a long time. And they’ve made money doing it. This suggests, but doesn’t quite prove, that the odds are with the house. The reason it doesn’t quite prove it is that blackjack, unlike roulette, is a strategy game. The player has a choice to make: When do you ask for additional cards? Even by the early 1950s, as gambling took hold in Vegas, no one knew if there was a strategy that a player could adopt that would give him an advantage over the house. All anyone knew for sure was that whatever most people were doing, it was good for the house. Figuring out more than that would prove incredibly difficult. It involved calculating the probabilities of all of the possible hands, under all sorts of different circumstances. Millions of calculations.
This was just what a group of army researchers set out to do, beginning in 1953. Over the course of three years, using “computers” (which in the early 1950s meant people, perhaps with electronic adding machines), the army team worked out (almost) all of the possible hands, figured out their probabilities, and then devised what they claimed was the “optimal” blackjack strategy. It was this strategy that they published in the Journal of the American Statistical Association, and that Thorp decided to try on his trip to Vegas. It wasn’t a winning strategy. According to the army’s calculations, the house had an advantage even if you played with their optimal strategy, because of the essential role of uncertainty about the dealer’s hand in the player’s decision making. But the advantage was tiny. If you made a thousand one-dollar bets at successive hands of blackjack using their strategy, the army predicted, you should expect to have (on average) about $994 left at the end of the day. Compare this to slots, where you could expect to have about $800 left, and the optimal blackjack strategy looked pretty good. Unfortunately, the strategy wasn’t simple, so Thorp had to make a cheat sheet; he wrote out all of the possibilities on a little card, which he consulted as he played.
He lost. Quickly. Starting with a pile of $10, Thorp was down to $1.50 within the hour. But the other people at the table lost even more quickly, and by the time Thorp left the table, he was convinced that the army’s researchers were on to something. He was also convinced that he could do better.
The problem with the army strategy, as Thorp saw it, was that it treated each round of blackjack as independent: it was as though each time around, a brand-new deck was being used. But in real life, especially in 1958 (casinos have since changed the rules slightly), this wasn’t the case. A dealer would shuffle a deck and then keep playing as long as there were enough cards to go around. This changes everything. Consider that the probability of receiving, say, an ace from a new deck is 4/52, since there are 4 aces in a deck of 52 cards. But suppose you’re on your second hand, and on the first hand 10 cards came up, two of which were aces. Now the odds of getting an ace are 2/42, which is much less than 4/52. The point is that if your strategy depends on the probabilities of getting different card combinations, and if you’re being careful, you need to take into account what cards have already been played. Adopting such a strategy, where you keep track of what cards have already been played and vary your strategy accordingly, is called card counting.
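That bookkeeping is the whole idea, and it takes only a few lines to express: track the cards that have already been dealt and recompute the chances from what remains. Here is the ace example from above; Thorp's real counting systems tracked richer information, such as the ratio of tens to other cards, but the principle is the same.

```python
def ace_chance(cards_seen):
    """Chance that the next card off the deck is an ace, given the cards already dealt."""
    aces_left = 4 - cards_seen.count("A")
    cards_left = 52 - len(cards_seen)
    return aces_left, cards_left, aces_left / cards_left

print(ace_chance([]))
# (4, 52, 0.0769...): a fresh deck

print(ace_chance(["A", "A", 5, 9, "K", 2, 7, 10, 3, "Q"]))
# (2, 42, 0.0476...): ten cards gone, two of them aces
```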
Card counting, Thorp believed, could make the odds in blackjack even better than what the army researchers found. Using MIT’s IBM 704, one of the first mass-produced electronic computers, Thorp managed to prove that the player would have an advantage if he combined a modified version of the army’s strategy with a simple card-counting technique. This was Thorp’s in with Shannon. He wrote a paper describing what he had found, with the hope that Shannon would help him publish it.
When the day of the meeting arrived, Thorp knew the pressure was on. He had his thirty-second elevator pitch ready: what he wanted; why Shannon should care.
As it turned out, Thorp had little to worry about. Shannon immediately saw what was interesting about Thorp’s results. And after a few piercing questions, Shannon was convinced that Thorp was the real deal. He made some editorial suggestions and suggested that Thorp tone down the title (from “A Winning Strategy for Blackjack” to “A Favorable Strategy for Twenty-One”) and then offered to submit Thorp’s paper to the Proceedings of the National Academy of Sciences, the most prestigious academic journal that would consider publishing such work (only members of the Academy could submit papers). Then, as Thorp prepared to leave, Shannon casually asked if Thorp had any other gambling-related projects. This kind of math, with clear and fun applications, was right up Shannon’s alley. After a pause, Thorp leaned forward. “There is one other thing,” he began. “It’s about roulette . . .”
It was dusk on a snowy winter’s evening in Cambridge, Massachusetts. A dark sedan circled the block once and then slowed to a stop in front of the Thorps’ apartment building. The doors opened, and from each side of the car a beautiful young woman emerged. Both women had mink coats draped over their shoulders. They stepped back from the car to reveal its third passenger, a short man in his early sixties. His name was Manny Kimmel. He was the owner of a growing parking lot and funeral home concern known as the Kinney Parking Company. The Kinney Parking Company was in the process of going public. Over the next decade, under the joint leadership of Kimmel’s son Caesar and legendary CEO Steve Ross, Kinney would rapidly expand: first to commercial cleaning and facilities management, and then to media. In 1969, Kinney Parking Company would acquire Warner Brothers Studios as the first step in a transformation that would ultimately culminate in Time Warner, which is today the world’s largest media conglomerate.
In 1961 all of this was in the future. But Kimmel was already a very wealthy man. His fortune had been made the old-fashioned way: gambling and booze. Legend has it that Kimmel won his first parking lot, on Kinney Street in Newark, New Jersey, in a high-stakes craps game. And the early success of the Kinney Parking Company had as much to do with Kimmel’s side business of running limousines to illegal gambling houses as it did with people parking their cars. During Prohibition, he teamed up with his childhood friend, the Jewish mobster Longy Zwillman. Zwillman would import rye whiskey from Canada and then use Kimmel’s New Jersey garages to store it.
It was gambling that brought Kimmel to Thorp’s doorstep that cold Sunday in February. A few weeks before, Thorp had given a public talk on his National Academy paper at the American Mathematical Society’s annual meeting, in Washington, DC. This time around, he permitted himself a provocative title: he called the talk “Fortune’s Formula: A Winning Strategy for Blackjack.” Blackjack aside, Thorp’s talk was a winning strategy for attracting media attention. He delivered the talk to a packed audience, and soon the AP and other news outlets came knocking. Within days, stories had begun to appear in the national media, including the Washington Post and Boston Globe. The dry annual AMS meeting rarely attracted much notice in the news, but something about an MIT mathematician taking Vegas to the cleaners struck a chord.
At first, Thorp reveled in the attention. His phone began ringing off the hook, with reporters looking for interviews and gambling fanatics hoping to learn Thorp’s tricks. He boasted to reporters that if he could get sufficient funding for a trip to Vegas, he would prove that his system worked in practice. As a publicity stunt, the Sahara, one of the big Vegas Strip casinos, offered him free room and board for as long as he liked — trusting that Thorp’s system, like the hundreds that preceded it, was at best a fantasy. But the Sahara wouldn’t front Thorp gambling money, and on his $7,000-a-year salary, Thorp couldn’t raise sufficient funds himself. (Since casinos have minimum bets, an early losing streak can wipe you out if you don’t have a pile of cash on hand — even if you’re very likely to win in the long run.)
This is where Kimmel came in. Some men like fine wines or expensive cigars. Others prefer cars, or sports, or perhaps art. As an inveterate gambling man, Kimmel was a connoisseur of the favorable betting system. When Kimmel read about Thorp’s blackjack system, he wrote to Thorp and offered to fund his experiment to the tune of $100,000. But first he needed to see the system in action. So when Thorp contacted him and agreed to meet, Kimmel took a car up from New York. When Kimmel arrived — introducing the two young women as his nieces — Thorp began by showing Kimmel his proofs and explaining his methodology. But Kimmel didn’t care about any of that. Instead, he took a deck of cards out of his pocket and began to deal. Kimmel would believe a system worked only after he’d watched someone win with it. They played all evening, and then again the next day. Over the coming weeks, Thorp would drive down to New York regularly to play against Kimmel and an associate, Eddie Hand, who was putting up part of the money for the casino trip.
It took about a month, but at last Kimmel was convinced that Thorp’s system worked — and that Thorp had what it took to use the system in a real casino. Thorp decided that $100,000 was too much and insisted on working with a smaller sum — $10,000 — because he thought gambling with too much money would attract unwanted attention. Kimmel, meanwhile, thought that Las Vegas was too high profile, and besides, too many people knew him there. So over MIT’s spring break, Thorp and Kimmel, who was once again accompanied by a pair of young women, descended on Reno to test Thorp’s system. It was a resounding success. They played, moving from casino to casino, until they developed a reputation that moved faster than they could. In just over thirty man-hours of playing, Thorp, Kimmel, and Hand collectively turned their $10,000 into $21,000 — and it would have been $32,000 if Kimmel hadn’t insisted on continuing to play one evening after Thorp announced he was too tired to keep counting. Thorp would later tell the story — with Kimmel’s name changed to Mr. X and Hand’s to Mr. Y — in a book, Beat the Dealer, that taught readers how to use his system to take Vegas to the cleaners themselves.
Thorp developed several methods for keeping track of how the odds in blackjack change as cards are played and removed from the deck. Using these systems, Thorp was able to reliably determine when the deck was in his favor, and when it was in the house’s favor. But suppose you are playing a game of blackjack, and suddenly you learn that the odds are slightly in your favor. What should you do?
It turns out that blackjack is extremely complicated. To make the problem tractable, it’s better to start with a simpler scenario. Real coins come up heads and tails equally often. But it’s possible to at least imagine (if not manufacture) a coin that is more likely to come up one way or the other — for now, suppose it’s more likely to come up heads than tails. Now imagine you’re making bets on coin flips with this weighted coin, against someone who is willing to pay even money on each flip, for as many flips as you want to play (or until you run out of money). In other words, if you bet a dollar and win the bet, your opponent gives you one dollar, and if your opponent wins, you lose one dollar. Since the coin is more likely to come up heads than tails, you would expect that over the long run money will tend to flow in one direction (yours, if you consistently bet heads) because you’re going to win more than half the time. Finally, imagine that your opponent is willing to take arbitrarily large or small bets: you could bet $1, or $100, or $100,000. You have some amount of money in your pocket, and if it runs out, you’re sunk. How much of it should you bet on each coin flip?
One strategy would be to try to make bets in a way that maximizes the amount of money you could stand to make. The best way to do this would be to bet everything in your pocket each time. Then, if you win, you double your money on each flip. But this strategy has a big problem: the coin being weighted means that you will usually win, not that you’ll always win. And if you bet everything on each flip, you’ll lose everything the first time it comes up tails. So even though you were trying to make as much money as possible, the chances that you’ll end up broke are quite high (in fact, you’re essentially guaranteed to go broke in the long run), with no chance to make your money back. This scenario — where your available funds run out, and you’re forced to accept your losses — is known as “gambler’s ruin.”
There’s another possibility — one that minimizes the chances of going broke. This is also a straightforward strategy: don’t bet in the first place. But this option is (almost) as bad as the last one, because now you guarantee that you won’t make any money, even though the coin is weighted in your favor.
The answer, then, has to be somewhere in the middle. Whenever you find yourself in a gambling situation where you have an advantage, you want to figure out a way to keep the chances of going broke to a minimum, while still capitalizing on the fact that in the long run, you’re going to win most of the bets. You need to manage your money in a way that keeps you in the game long enough for the long-term benefits to kick in. But actually doing this is tricky.
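A simulation makes the tradeoff vivid. The sketch below plays the weighted-coin game with a fixed fraction of the bankroll wagered on every flip; the 55 percent heads probability and the fractions are chosen purely for illustration.

```python
import random

def play(fraction, p_heads=0.55, flips=1000, bankroll=100.0):
    """Bet a fixed fraction of the bankroll on heads, at even money, each flip."""
    for _ in range(flips):
        stake = bankroll * fraction
        if random.random() < p_heads:
            bankroll += stake
        else:
            bankroll -= stake
        if bankroll < 0.01:  # effectively broke: gambler's ruin
            return 0.0
    return bankroll

random.seed(1)
for fraction in (1.0, 0.5, 0.1, 0.0):
    results = sorted(play(fraction) for _ in range(200))
    broke = sum(r == 0.0 for r in results)
    print(f"bet {fraction:.0%} each flip: median ${results[100]:,.2f}, "
          f"broke {broke} times out of 200")
```

Betting everything ends in ruin almost immediately, and betting nothing earns nothing. Notice, too, that staking half the bankroll each flip, far more than the edge justifies, grinds the stake down toward nothing; over-betting is its own road to ruin. Only the modest fraction lets the advantage compound, and picking that middle fraction well is exactly the hard part.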
Or so it seemed to Thorp when he was first trying to turn his analysis of card-counting odds into a winning strategy for the game. Fortunately for Thorp, Shannon had an answer. When Thorp mentioned the money management problem to Shannon, Shannon directed Thorp to a paper written by one of Shannon’s colleagues at Bell Labs named John Kelly Jr. Kelly’s work provided the essential connection between information theory and gambling — and ultimately the insights that made Thorp’s investment strategies so successful.
Kelly was a pistol-loving, chain-smoking, party-going wild man from Texas. He had a PhD in physics that he originally intended to use in oil exploration, but he quickly decided that the energy industry had little appreciation of his skills, and so he moved to Bell Labs. Once he was in New Jersey, Kelly’s colorful personality attracted plenty of attention in his staid suburban neighborhood. He was fond of firing plastic-filled bullets into the wall of his living room to entertain houseguests. He was an ace pilot during World War II and later earned some local notoriety by flying a plane underneath the George Washington Bridge. But despite the theatrics, Kelly was one of the most accomplished scientists at AT&T — and the most versatile. His work ran the gamut from highly theoretical questions in quantum physics, to encoding television signals, to building computers that could accurately synthesize human voices. The work he’s best known for now, and that was of greatest interest to Thorp, was on applying Shannon’s information theory to horseracing.
Imagine you’re in Las Vegas, betting on the Belmont Stakes, a major horserace held in Elmont, New York. The big board in the off-track-betting room shows various odds: Valentine at 5 to 9, Paul Revere at 14 to 3, Epitaph at 7 to 1. These numbers mean that Valentine is expected to have a roughly 64% chance of winning, Paul Revere has an 18% chance of winning, and Epitaph has a 13% chance of winning. (The percentages come from the odds themselves: a horse quoted at 5 to 9 pays $5 in winnings for every $9 wagered, which corresponds to winning 9 out of every 14 races. So for Valentine, you divide 9 by 14.)
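In code, that conversion from posted odds to an implied probability is a one-liner per horse (the function name is just for this illustration):

```python
def implied_probability(payout, stake):
    """Odds of `payout to stake` imply a winning chance of stake / (payout + stake)."""
    return stake / (payout + stake)

print(implied_probability(5, 9))   # Valentine: about 0.64
print(implied_probability(14, 3))  # Paul Revere: about 0.18
print(implied_probability(7, 1))   # Epitaph: 0.125
```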
In the first half of the century, there was often a delay in communicating racing results between bookies. This meant that sometimes a race would be over, while people in other parts of the country continued betting on it. So if you had a particularly fast method of communication, you could in principle get the results before betting closed. By 1956, when Kelly wrote his paper, this had become quite difficult: telephones and television meant that bookies in Las Vegas would know what had happened in New York almost as soon as the people in Elmont. But suspend disbelief for a moment and imagine that you had someone in Elmont who could send you messages about the Belmont Stakes instantaneously — faster, even, than the bookies got their results.
If the messages you were receiving over your private wire service were perfectly reliable, you’d be wise to bet everything, since you’re guaranteed to win. But Kelly was more interested in a slightly different case. What happens if you have someone send you correct racing results, but there’s noise on the line? If the message that comes along is so garbled that you can’t make out much of anything, your default guess is going to be that Valentine is going to win, since that’s what the odds were to begin with and you haven’t received any new information. If it’s garbled but you’re pretty sure you heard a t sound, you’ve gotten some information — you have good reason to think Paul Revere didn’t win, since there’s no t in his name. If pressed, you would probably guess that your contact said “Valentine,” because that’s the more likely message, but you can’t know for sure. You wouldn’t want to put all of your money on one horse, because you still have a chance of losing. But you can rule out one possibility, which gives you an advantage: you now know that the bookie thinks Valentine’s and Epitaph’s chances aren’t as good as they really are, because the bookie is assuming Paul Revere has an 18% chance of winning. So if you split a bet between Valentine and Epitaph in the right proportions, one of the two bets is guaranteed to pay off, and the payoff will exceed your total stake. Hence even the partial information is enough to help you decide what bets to place.
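Worked out with the numbers from the betting board, the advantage looks like this. If the garbled tip really does rule out Paul Revere, a stake can be split between Valentine and Epitaph so that either outcome returns more than was wagered. The split below simply equalizes the two payoffs; it is an illustration of the idea, not a reconstruction of Kelly's calculation.

```python
# Posted odds: Valentine pays 5 to 9, Epitaph pays 7 to 1.
# Total return per dollar staked (winnings plus the stake back):
valentine_return = 1 + 5 / 9   # about 1.56x
epitaph_return = 1 + 7 / 1     # 8x

stake = 100.0
# Split the stake so that both outcomes return the same amount:
#   x * valentine_return == (stake - x) * epitaph_return
x = stake * epitaph_return / (valentine_return + epitaph_return)

for horse, bet, ret in (("Valentine", x, valentine_return),
                        ("Epitaph", stake - x, epitaph_return)):
    print(f"{horse} wins: bet ${bet:.2f}, collect ${bet * ret:.2f}")
# Either way the $100 stake comes back as roughly $130, a sure profit
# that exists only because Paul Revere's 18% share has been ruled out.
```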
Shannon’s theory tells you how much credence to give a message when there’s a chance that the message is being distorted by noise, or when the level of noise makes it difficult to interpret the message in the first place. So if it’s difficult to decipher your racing tips, Shannon’s theory provides a way of deciding how to place your bets based on the partial information you do receive.
Kelly worked out the solution to this problem, provided you want to maximize the long-term growth of the money you start with. As in the example above, where you could make out a t sound but nothing else, partial information can be sufficient to give you an advantage over a bookie who is setting odds without any information about how the race turned out. The advantage can be calculated by multiplying the payout — the number b when someone gives you b-to-1 odds — by what you believe is the true probability of winning (based on your partial information), and then subtracting the probability of losing (again, based on your partial information). To figure out how much of your starting money to bet, as a fraction of what you have, you divide your advantage by the payout. This gives the equation now called the Kelly criterion or Kelly bet size. The percentage of your money to bet on any given outcome is
advantage / payout
If your advantage is zero (or negative!), Kelly says not to bet at all; otherwise, bet the fraction of your wealth given by the Kelly criterion. If you always follow this rule, you will be guaranteed to outperform anyone adopting another betting strategy (such as betting it all or betting nothing). One of the most surprising things in Kelly’s paper, something that feels almost mystical, is a proof of what will happen if you follow his rule in a scenario like the horse-betting story, where you have a stream of (partial) information coming in: if you always use the Kelly criterion, under certain ideal circumstances your wealth will increase at exactly the rate that information comes in along the line. Information is money.
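Written out as code, the rule from the fraction above is short (the numbers plugged in are illustrative):

```python
def kelly_fraction(b, p):
    """Fraction of the bankroll to bet at b-to-1 odds with true winning probability p."""
    q = 1 - p                     # probability of losing
    advantage = b * p - q         # expected profit per dollar staked
    return max(advantage / b, 0)  # zero or negative advantage means: don't bet

# The weighted coin from earlier: even money (b = 1), a 55% chance of winning.
print(kelly_fraction(1, 0.55))    # 0.10 -- stake 10% of the bankroll each flip

# A hypothetical long shot at 5-to-1 odds that you believe wins a quarter of the time.
print(kelly_fraction(5, 0.25))    # (5 * 0.25 - 0.75) / 5 = 0.10

# No edge, no bet.
print(kelly_fraction(1, 0.50))    # 0.0
```

The 10 percent answer for the 55 percent coin is the same moderate fraction that fared well in the coin-flip sketch earlier.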
When Shannon showed Kelly’s paper to Thorp, the last piece of the blackjack puzzle fell into place. Card counting is a process by which you gain information about the deck of cards — you learn how the composition of the deck has changed with each hand. This is just what you need to calculate your advantage, as Kelly proposed. Information flows and your money grows.
As Thorp and Kimmel made their preparations for Reno, Shannon and Thorp were collaborating on Thorp’s roulette plan. When he heard Thorp’s ideas, Shannon was mesmerized, in large part because Thorp’s roulette idea combined game theory with Shannon’s real passion: machines. At the heart of the idea was a wearable computer that would perform the necessary calculations for the player.
They began testing ideas for how the actual gambling would work, assuming they could make sufficient progress on the prediction algorithm. They agreed that it would take more than one person for it to go smoothly, because one person couldn’t focus sufficiently on the wheel to input the necessary data and still be prepared to bet before the ball slowed down and the croupier (roulette’s equivalent of a dealer) announced that betting was closed. So they decided on a two-person scheme. One person would stand near the roulette wheel and watch carefully — ideally while doing something else, so as not to attract attention. This person would be wearing the computer, which would be a small device, about the size of a cigarette pack. The input device would be a series of switches hidden in one of the wearer’s shoes. The idea was that the person watching the wheel would tap his foot when the wheel started spinning, and then again when the ball made one full rotation. This would initialize the device and synchronize it to the wheel.
Meanwhile, a second person would be sitting at the table, with an earpiece connected to the computer. Once the computer had a chance to take the initial speeds of the ball and the rotor into account, it would send a signal to the person at the table indicating how to bet. It was too difficult to predict just what number the ball would fall into, as the calculations for that level of precision were far too complicated. But roulette wheels are separated into eight regions, called octants. Each octant has four or five numbers in it, arranged in an order that would seem random to someone who didn’t have the roulette wheel memorized. Thorp and Shannon discovered that in many cases, they could accurately predict which octant the ball would fall into, narrowing the possible outcomes from thirty-eight to four or five. The computer was designed to indicate whether there was a higher-than-normal chance that the ball would fall into a particular octant. Once the person at the table received the signal, he would quickly place bets on the appropriate numbers — using a betting system based on the Kelly criterion to decide how much to bet on each.
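The sketch below is not Thorp and Shannon's algorithm, just a toy model of the kind of physics involved: assume the ball slows at a roughly constant rate, use the timed revolutions to estimate the ball's and the rotor's speeds, and predict how far around the wheel the ball travels before it drops. The deceleration and drop-off speed are invented constants that a real device would have had to calibrate, and the model ignores the ball's bounce off the frets and deflectors entirely.

```python
OCTANTS = 8

def predict_octant(ball_lap_time, rotor_lap_time, decel=0.20, drop_speed=0.8):
    """
    Toy prediction of the octant where the ball will land.

    ball_lap_time, rotor_lap_time: seconds per revolution, from the foot-switch taps.
    decel (rev/s^2) and drop_speed (rev/s) are made-up constants standing in for
    physics that would have to be measured on a real wheel.
    """
    ball_speed = 1.0 / ball_lap_time    # revolutions per second
    rotor_speed = 1.0 / rotor_lap_time

    # Constant deceleration: the ball leaves the track when it slows to drop_speed.
    time_to_drop = max(ball_speed - drop_speed, 0.0) / decel
    ball_travel = ball_speed * time_to_drop - 0.5 * decel * time_to_drop**2

    # Ball and rotor spin in opposite directions, so their travels add up
    # when figuring the ball's final position relative to the numbers.
    relative_position = (ball_travel + rotor_speed * time_to_drop) % 1.0
    return int(relative_position * OCTANTS)

# Ball timed at 0.45 seconds per lap, rotor at 2.5 seconds per lap.
print(predict_octant(ball_lap_time=0.45, rotor_lap_time=2.5))
```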
By the summer of 1961, the machine was ready for action. Thorp, Shannon, and their wives traveled to Las Vegas. Aside from broken wires and the night the earpiece was discovered, the experiment was a (middling) success. Unfortunately, technical difficulties prevented Thorp and Shannon from betting any substantial amounts of money, but it was clear that the device did what it was intended to do. With Shannon’s help, Thorp had beaten roulette.
The trip as a whole, though, proved more stressful than it was worth. Gambling can be tense enough without the constant possibility that burly enforcers will descend on you. Meanwhile, Thorp had already received the job offer in New Mexico when the two couples made their Vegas trip. Despite their small profit, by the time they left Vegas, Thorp knew that he and Shannon wouldn’t continue the roulette project. But it was just as well. With the blackjack and roulette experiences under his belt, Thorp was ready to try his hand at a new, bigger challenge: the stock market.
Thorp bought his first share of stock in 1958, before he had finished his PhD. He was living on a modest salary as an instructor at UCLA, but he had managed to cobble together a small sum to put away for the future. Over the next year, his investment dropped by half, and then slowly inched its way back up. Thorp sold, essentially breaking even after a year-long roller-coaster ride.
In 1962, flush with blackjack winnings and the proceeds from his card-counting book, he decided to try again. This time he bought silver. In the early 1960s, demand for silver was sky-high — so high that many people expected the open-market value of the silver in U.S. coins to exceed the coins’ denominations, which would make quarters and silver dollars more valuable as scrap metal than as money. It seemed like a safe bet. To maximize his profits, Thorp borrowed some money from his broker, with the silver investment as collateral. Silver went up for most of the sixties, but it was very volatile; not long after Thorp bought in, the price fell temporarily, but sharply, and the broker decided he wanted his money back. When Thorp couldn’t come up with the cash, the broker sold Thorp’s silver, at a loss of about $6,000 to Thorp. It was devastating — over half the annual salary for an assistant professor in 1962.
After this second setback, Thorp decided to get serious. After all, he was a world-renowned expert in the mathematics of gambling. And the stock market wasn’t so different from a casino game or a horserace: you make bets, based on some partial information about the future, and if things go your way, you get a payout. You can even think of market prices as reflections of the “house” odds, meaning that if you can get access to even partial relevant information, you can compare market odds and true odds to determine whether you have an advantage, just as in blackjack.
All Thorp needed was to figure out a way to get information. Thorp began his careful study of markets in the summer of 1964 by reading The Random Character of Stock Market Prices — the collection of essays that featured papers by Bachelier, Osborne, and Mandelbrot. Thorp was soon convinced by Osborne and the other authors in the collection who argued that when you look at the detailed statistics, stock prices really do behave randomly — because, as Bachelier and Osborne both argued, all available information was already incorporated into the price of a stock at any given moment. By the end of the summer, Thorp was stymied. If Osborne was right, Thorp didn’t see a way to gain an advantage over the market.
With a full teaching load for the 1964–65 academic year, Thorp had little time for anything else. He put his market studies aside, planning to return to the project the following summer. In the meantime, things in New Mexico took a turn. A growing faction of mathematicians working in a different field had taken control of the department, prompting him to look for other jobs. He learned that the University of California was preparing to open a new campus, about fifty miles south of Los Angeles, in the middle of Orange County. He applied for, and received, a job at the new University of California, Irvine. It looked as if work on the stock market would have to be deferred further, since he now had another major move to plan and a new department to settle into.
Still, he remained interested in the project, and at some point during the year, while scanning advertisements in investment magazines, Thorp came across a publication called the RHM Warrant Survey. Warrants are a kind of stock option, offered directly by the company whose stock is being optioned. Like an ordinary call option, they give the holder the right to purchase a stock at a fixed price, before a fixed expiration date. Throughout the middle of the twentieth century, options weren’t traded widely in the United States. Warrants were the closest thing to an option available. RHM claimed that trading warrants was a possible source of untold wealth — if you understood them. Implicit was that most people didn’t know what to do with warrants. This was just the kind of thing Thorp was looking for, and so he decided to subscribe. But he didn’t have much time to look at the documents that began arriving.
As the spring semester came to an end in New Mexico, Thorp found himself with a few weeks to spare before his move to California. He began to riffle through the RHM documents. The writers at RHM apparently thought of warrants as a kind of lottery ticket. They were cheap to buy, usually worthless, but occasionally you could strike it rich if a stock started trading well above the warrant’s exercise price.
Where RHM, and most other investors, saw a lottery ticket, Thorp saw a bet. A warrant is a bet on how a stock will perform over a fixed period. The price of the warrant, meanwhile, is a reflection of the market’s determination of how likely the buyer is to win the bet. It also reflects the payout, since your net profit if the warrant does become valuable is determined by how much you had to pay for the warrant in the first place. But Thorp had just spent an entire summer reading about how stock prices are random. He pulled out a piece of paper and began to calculate. His reasoning followed Bachelier’s thesis closely, except that he assumed prices were log-normally distributed, à la Osborne. He quickly arrived at an equation that told him how much a warrant should really be worth.
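The text doesn't reproduce Thorp's equation, but the shape of the calculation can be sketched with a simulation: assume the stock's future price is log-normally distributed, as Osborne argued, and average the warrant's payoff over many simulated outcomes. The drift, volatility, and prices below are made up, and the sketch skips refinements (such as discounting) that any serious version would need.

```python
import math
import random

def warrant_value(spot, strike, years, drift, volatility, trials=100_000):
    """
    Rough value of a warrant (the right to buy the stock at `strike` before expiry),
    assuming log-normally distributed prices. All parameters are illustrative.
    """
    total = 0.0
    for _ in range(trials):
        z = random.gauss(0.0, 1.0)
        # Simulated price at expiry under a log-normal model.
        terminal = spot * math.exp((drift - 0.5 * volatility**2) * years
                                   + volatility * math.sqrt(years) * z)
        total += max(terminal - strike, 0.0)  # exercise if profitable, else worthless
    return total / trials

# A made-up example: a $20 stock, a warrant struck at $25, two years to expiration.
print(round(warrant_value(spot=20, strike=25, years=2, drift=0.0, volatility=0.3), 2))
```

A number like that, compared with what the warrant actually trades for, is the kind of answer Thorp's equation delivered.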
This was valuable, if not trailblazing. But Thorp had an ace up his sleeve, something Bachelier and Osborne never imagined. With five years of gambling experience, Thorp realized that calculating a “true” price for a warrant is a lot like calculating the “true” odds on a horserace. In other words, the theoretical relationship that Thorp discovered between stock prices and warrant prices gave him a way to extract information from the market — information that gave him an edge, not in the stock market directly, but in the associated warrant market. This partial information was just what Thorp needed to implement the Kelly system for maximizing long-term profits.
Thorp was energized by this work on warrants. It seemed to him that he had finally found the perfect way to use his gambling experience to profit from the world’s biggest casino. But there was a problem. When he finished his calculations and plugged some numbers into a computer (Thorp wasn’t able to solve the equations he set up explicitly, but he was able to come up with a way to use a computer to do the final calculations for him), he discovered that there was no advantage to buying warrants. In other words, you couldn’t go out and buy warrants and expect to make a profit — according to the Kelly betting system, you should invest nothing! The reason for this wasn’t that warrants were all trading at exactly what they were worth; rather, they were trading at much too high a price. The dirt-cheap lottery tickets that RHM Warrant Survey was advertising were actually much, much too expensive.
If you think of investing as a kind of gamble, buying a stock represents a bet that the stock price will go up. Selling a stock, meanwhile, is a bet that the stock will go down. Thorp, like Bachelier before him, realized that the “true” price of a stock (or option) corresponds to the price at which the odds of the buyer winning are the same as the odds of the seller winning. But with traditional trades, there’s an asymmetry. You can virtually always buy a stock; but you can sell a stock only if you already own it. So you can bet against a stock only if you’ve already chosen to bet for it. This is similar to a casino: it would be highly desirable, in roulette, say, to bet against a number. This, after all, is what the house does, and the house ultimately has the long-term advantage. But it isn’t possible. No casino will let you bet that your blackjack hand will lose.
In investing, however, there is that possibility. If you want to sell a stock you don’t already own, all you need to do is find someone who does own the stock but doesn’t want to sell it, and who is willing to let you borrow the shares for a while. Then you sell the borrowed shares, with the expectation that at some later time you will buy the same number of shares back and return them to their original owner. This way, if the price goes down after you sell, you see a profit, since you can buy the shares back at the lower price. Whoever loaned you the shares, meanwhile, is no worse off than if he had simply held on to them. The origins of this investment practice, known as short selling, are obscure, but it is at least three hundred years old. We know this because it was banned in England in the seventeenth century.
Today, short selling is perfectly standard. But in the 1960s — indeed, for much of the practice’s history — it was viewed as dangerous at best, and perhaps even depraved or unpatriotic. The short seller was perceived as a blatant speculator, gambling on market moves rather than investing capital to spur growth. Worse, he had the nerve to take a financial interest in bad news. This struck many investors as déclassé. Views on short selling changed in the 1970s and 1980s, in part because of Thorp’s and others’ work, and in part because of the rise of the Chicago School of economics. As those economists argued at the time, short selling may seem crude, but it serves a crucial social good: it helps keep markets efficient. If the only people who can sell a stock are the ones who already own it, people who have information that could be bad for the company often don’t have any way of affecting market prices. This would mean that information could be available that isn’t reflected in the stock price, because the people who have access to the information aren’t able to participate in the market. Short selling prevents this situation.
Whatever the social impact, short selling does have real risks attached. When you buy a stock (sometimes called taking a “long” position, in contrast to the “short” position that short sellers take), you know how much money you stand to lose. Stockholders aren’t responsible for a corporation’s debts, so if you put $1,000 into AT&T, and AT&T goes under, you lose at most $1,000. But stocks can go up arbitrarily high. So if you make a short sale, there’s no telling how much money you stand to lose. If you sell $1,000 worth of AT&T short, when it comes time to buy the shares back and return them to their owner, you might need to come up with far more money than you originally received in the sale.
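A sketch of that asymmetry, in the simplest possible accounting (invented numbers, ignoring margin calls and interest): a long position can lose at most its purchase price, while a short position’s loss grows without limit as the stock rises.

```python
def worst_case_long_loss(amount_invested):
    """The most a long position can lose: the stock goes to zero."""
    return amount_invested

def short_loss(sale_proceeds, price_multiple):
    """Loss on a short position opened for `sale_proceeds` dollars if the
    stock later trades at `price_multiple` times the price it was sold at."""
    return sale_proceeds * price_multiple - sale_proceeds

print(worst_case_long_loss(1000.0))   # 1000.0, no matter how badly things go
print(short_loss(1000.0, 2.0))        # 1000.0 if the stock doubles
print(short_loss(1000.0, 10.0))       # 9000.0 if it rises tenfold; there is no cap
```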
Still, Thorp was able to find a broker who was willing to execute the required trades. This solved one practical problem: since the Kelly analysis said warrants were overpriced, the way to act on it was to sell them short. But even if Thorp could ignore the social stigma of short selling (and he could), the real danger of unlimited losses remained. Here, though, Thorp had one of his most creative insights. His analysis of warrant pricing gave him a way of relating warrant prices to stock prices. Using this relationship, he realized that if you sell warrants short but at the same time buy some shares of the underlying stock, you can protect yourself against the warrant increasing in value: if the warrant gains value, according to Thorp’s calculations the stock price should also rise, and the gain on the shares offsets the loss on the short warrant position. Thorp discovered that if you pick the right mix of warrants and stocks, you can guarantee a profit unless the stock price moves dramatically.
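To make the hedging idea concrete: the chapter doesn’t reproduce Thorp’s warrant formula, so the sketch below stands in the later, standard Black-Scholes call formula purely as a pricing function (all parameter values are invented). The mechanics are the ones described above: estimate how fast the warrant price moves per dollar of stock movement, short the warrant, buy that many shares, and check that small stock moves roughly cancel.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def warrant_price(S, K, T, r, sigma):
    """Black-Scholes call formula, used here only as a stand-in for a
    warrant-pricing model relating the warrant price to the stock price S."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Invented parameters: stock at $100, exercise price $100, one year to expiry,
# 2% interest rate, 30% volatility.
S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.02, 0.30

# How much the warrant price moves per $1 move in the stock (its "delta"),
# estimated numerically from the pricing function.
eps = 0.01
delta = (warrant_price(S0 + eps, K, T, r, sigma)
         - warrant_price(S0 - eps, K, T, r, sigma)) / (2 * eps)

def hedged_pnl(new_S):
    """P&L of a position short one warrant and long `delta` shares,
    ignoring the passage of time."""
    warrant_change = warrant_price(new_S, K, T, r, sigma) - warrant_price(S0, K, T, r, sigma)
    stock_change = delta * (new_S - S0)
    return stock_change - warrant_change   # close to zero for small moves

for move in (-2.0, -1.0, 1.0, 2.0):
    print(f"stock moves {move:+.0f}: hedged P&L = {hedged_pnl(S0 + move):+.4f}")
```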
This strategy is now called delta hedging, and it has spawned other strategies involving other “convertible” securities (securities that, like options, can be exchanged for another security, such as certain bonds or preferred shares of stock that can be converted to shares of common stock). Using such strategies, Thorp was able to consistently make 20% per year . . . for about forty-five years. He’s still doing it — indeed, 2008 was one of his worst years ever, and he made 18%. In 1967, he wrote a book, called Beat the Market, with a colleague at UC Irvine who had worked on similar ideas.
Beat the Market was too unusual, too different from then-current practices, to change Wall Street overnight. Many traders simply ignored it; most who read it didn’t understand it, or missed its importance. But one reader, a stockbroker named Jay Regan, saw Thorp’s genius. He wrote to Thorp and proposed that they enter a partnership to create a “hedge fund.” (The term hedge fund, originally “hedged” fund, was already twenty years old when Thorp and Regan first met, but nowadays so many hedge funds are based on ideas related to Thorp’s delta hedging strategy that the name might as well have originated with Thorp and Regan.) Regan would take care of the tasks that Thorp hated: he would promote the fund, find and manage clients, interface with brokers, execute the trades. Thorp would just be responsible for identifying the trades and working out the mix of stocks and convertibles to buy and sell. Thorp wouldn’t even have to leave the West Coast: Regan was happy to run the business end of things from New Jersey, while Thorp stayed in Newport Beach, California, building a team of mathematicians, physicists, and computer scientists to identify favorable trades. The deal seemed too good to be true. Thorp quickly agreed.
The company that Thorp and Regan created was initially called Convertible Hedge Associates, though in 1974 they changed the name to Princeton-Newport Partners. Success came quickly. In its first full year, their investors made just over 13% each on their investments, after fees — while the market returned only 3.22%. They also had some impressive early admirers. One of their earliest investors, Ralph Gerard, the dean of UC Irvine’s graduate school — in a sense, Thorp’s boss — had inherited a fortune. He was looking to invest with a new fund, because his old money manager was moving on to other projects. Thorp was close to home, but before Gerard would invest with the new partnership, he wanted his old money manager, a trusted friend, to take a careful look at Thorp. Thorp agreed to the meeting, and one evening he and Vivian drove a few miles down the Pacific Coast Highway, to Laguna Beach, where the old money manager lived. The plan was to play bridge and chat casually, so that the old money manager could size Thorp up.
Thorp learned that his host was leaving the money management business to focus on a new venture — an old manufacturing and textiles company that he was hoping to rebuild. He’d made his first million managing other people’s money, and now it was time to put his own money to work. But mostly, Thorp and his host discussed probability theory. While they were playing, the host mentioned a kind of trick dice, called nontransitive dice. Nontransitive dice are a set of three dice with different numbers on each side. They have the unusual property that if you roll dice 1 and 2 at the same time, die 2 is favored; if you roll dice 2 and 3 at the same time, die 3 is favored; but if you roll dice 1 and 3 at the same time, die 1 is favored. Thorp, always a fan of games and the probabilities associated with them, had long been interested in nontransitive dice. From that point on, the two were fast friends. On the ride back to Newport Beach, Thorp told Vivian that he expected their host to someday be the richest man in the world. In 2008, his prediction came true. The old money manager’s name was Warren Buffett. And at his recommendation, Gerard invested with Thorp’s company.
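The dice cycle Thorp’s host described is easy to check by brute force. The dice below are one standard nontransitive set (not necessarily the set Thorp was shown), labeled so the cycle matches the one above: die 2 is favored over die 1, die 3 over die 2, and die 1 over die 3, each with probability 5/9.

```python
from itertools import product
from fractions import Fraction

# One standard set of nontransitive dice, labeled to match the cycle in the text.
die1 = [3, 3, 5, 5, 7, 7]
die2 = [1, 1, 6, 6, 8, 8]
die3 = [2, 2, 4, 4, 9, 9]

def win_probability(a, b):
    """Probability that die `a` shows a higher number than die `b`,
    by enumerating all 36 equally likely face pairs (no ties are possible here)."""
    wins = sum(1 for x, y in product(a, b) if x > y)
    return Fraction(wins, len(a) * len(b))

print(win_probability(die2, die1))   # 5/9 -- die 2 is favored over die 1
print(win_probability(die3, die2))   # 5/9 -- die 3 is favored over die 2
print(win_probability(die1, die3))   # 5/9 -- die 1 is favored over die 3
```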
Princeton-Newport Partners quickly became one of the most successful hedge funds on Wall Street. But all good things must end. And Princeton-Newport’s demise was particularly dramatic. On December 17, 1987, about fifty FBI, ATF, and Treasury Department agents pulled up in front of the firm’s Princeton office. The agents stormed into the building, looking for records and audiotapes regarding a series of trades the firm had made with the soon-to-be-indicted junk bond dealer Michael Milken. A former Princeton-Newport employee named William Hale had testified to a grand jury that Regan and Milken were engaged in a tax dodge known as stock parking. One downside to delta hedging and related strategies is that profits from short-term and long-term positions are taxed differently. So when you buy and sell at the same time, profits and losses that otherwise would cancel each other out don’t cancel from a tax perspective. Regan was trying to avoid additional taxes by concealing who actually owned the long-term positions, “parking” the stocks at Milken’s firm. Parked stocks were officially sold to Milken, with an unofficial agreement that Regan could buy them back for a predetermined price, irrespective of what had happened in the market in the meantime. Though hardly nefarious, stock parking was illegal, and Rudy Giuliani, who was prosecuting the case, hoped that by applying pressure on the Princeton-Newport side, he could dig up additional evidence against Milken.
By all accounts, Thorp was completely in the dark. He didn’t know that the East Coast side of the firm was doing anything illegal until the scandal broke in the news. He was never accused of, let alone charged with, any crime. And by the time he got wind of the raid, Regan already had a lawyer and refused to talk to his partner. The firm hobbled along for another year, but the legal proceedings had ruined its reputation. In 1989, Princeton-Newport Partners closed. Over the course of twenty years, it had average returns of 19% (over 15% after fees) — an unprecedented performance.
After Princeton-Newport closed, Thorp took some time off before regrouping to form Edward O. Thorp Associates, his own money management firm. Though he has long since given up managing other people’s money professionally, he still runs the fund today using his own capital. Meanwhile, hundreds of quant hedge funds have opened (and closed), trying to reproduce Princeton-Newport’s success. As the Wall Street Journal put it in 1974, Thorp had ushered in a “switch in money management” to quantitative, computer-driven methods. It’s amazing what a little information theory can do.