6

The Prediction Company

WHEN THE SANTA FE TRAIL was first pioneered in 1822, it stretched from the westernmost edge of the United States — Independence, Missouri — through Comanche territory and into the then-Mexican territory of Nuevo Mexico. From there it passed over the high plains of what is now eastern Colorado and then took the Glorieta Pass through the Sangre de Cristo Mountains, the southernmost subrange of the Rockies. To the southwest was the foot of the trail, the Palace of the Governors in the city of Santa Fe, the seat of Mexican power north of the Rio Grande. In front of the palace was the city’s central market square, where traders from the United States displayed their goods. A quarter century after the first American trailblazers arrived in the city, the U.S. Army followed, marching through the Glorieta Pass and claiming the city and all of its surrounding territory for the United States.

A century and a half later, two men in their late thirties sat sipping tequila in a saloon at the end of the trail, which had long since been paved over and replaced by an interstate highway. They were surrounded by younger men, all chatting furiously. Outside, the park in the bustling market square was green from late-summer rains. Across the way, the Palace of the Governors sat as it always had, the oldest continuously occupied public building in the United States. The square was surrounded by low-slung buildings, reddish brown and in the pueblo style, much as it had been when the American army arrived in 1846. The men in the saloon were the newest traders to hang their sign in Santa Fe’s historic market district. Down the road from the square, in a one-story adobe house on Griffin Street, a bank of state-of-the-art computers was humming, following the instructions set by the men before they left for their evening drink. The year was 1991. The men were in the prediction business.

The two graybeards — at least by the standards of the new field of nonlinear dynamics and chaos, which they had spent the last fifteen years helping to create — were James Doyne Farmer and Norman Packard. Until recently, Farmer had been head of the Complex Systems group at Los Alamos National Laboratory, the government lab most famous for having been the headquarters of the Manhattan Project. Packard, meanwhile, had just left a tenured position as associate professor of physics at the University of Illinois’s flagship campus. Among the other men at the bar were former graduate students and recent PhDs, adventurers looking to follow Farmer and Packard as they blazed a new trail.

The new venture was a company, soon to be called the Prediction Company (though as they sat that evening on the Santa Fe market square, the company was still nameless). Their goal was to do the impossible: to predict the behavior of financial markets. If anyone could do it, it was this group. Between them, Farmer and Packard had three decades of experience in a subject known as nonlinear forecasting, an area of physics and applied mathematics (and increasingly other fields as well) that sought to identify predictive patterns in apparently random phenomena. In Packard’s words, it involved identifying the order at “the edge of chaos,” the small windows of time in which there was enough structure in a chaotic process to predict where a system would go next. The tools they used had been developed to predict things like how a turbulent fluid would behave in a narrow pipe. But Farmer and Packard, and the half-dozen acolytes who had followed them to Santa Fe, believed they could predict far more than that.

As scientific director of the Manhattan Project, J. Robert Oppenheimer was certainly the most important member of his family at Los Alamos. But he wasn’t the only one. His kid brother, Frank, was also a physicist — and when the elder Oppenheimer took over work on the bomb, Frank pitched in, first at Ernest Lawrence’s Radiation Laboratory in Berkeley, California, and then at Oak Ridge in Tennessee, before finally joining his brother in New Mexico. Eight years younger than his famous brother, Frank arrived at Los Alamos just in time to help coordinate the Trinity test, the world’s first nuclear detonation, which was staged in the middle of the Tularosa Basin in New Mexico on July 16, 1945. After the war, Robert appeared on the covers of Time and Life. He was the public spokesman for Cold War science in the United States, and for military restraint regarding the use of the nuclear technology he had helped develop. Frank was not quite so prominent, but even so, his military research landed him a job in the physics department at the University of Minnesota.

In 1947, J. Robert Oppenheimer was appointed director of the Institute for Advanced Study in Princeton — possibly the most prestigious scientific research institute in the world — and chairman of the General Advisory Committee of the newly formed Atomic Energy Commission. The same year, the Washington Times-Herald reported that Frank Oppenheimer had been a member of the American Communist Party from 1937 to 1939. Frank was eager to continue in his brother’s footsteps, but 1947 was not a good year for a would-be nuclear physicist to be outed as a Communist. He initially denied the charges and appeared to have escaped with his reputation intact. But two years later, amid mass fear about Soviet nuclear research and the mishandling of the “atomic secret,” Frank was called before the infamous House Un-American Activities Committee. Under oath and before Congress, he admitted that he and his wife had been members of the party for about three and a half years, pushed to political extremes during the Great Depression.

The confession was a newspaperman’s dream. Frank Oppenheimer, brother of the American scientist-savior, was an admitted Communist. He was never convicted of a crime, nor was there any reason to think that he had compromised classified information. But during the heady and paranoid days of McCarthyism, the mere suggestion of Communist affiliation was enough to blacklist someone, no matter whose brother he was. Frank was forced to resign from his position at the University of Minnesota, and for more than a decade he was effectively strong-armed out of physics. Living on a substantial inheritance (sadly, he was forced to sell one of the van Goghs he’d inherited from his father), he and his wife bought a ranch in Colorado and made a new start as cattle farmers and homesteaders.

It was not until 1959 that McCarthyism had cooled enough that Frank Oppenheimer could get a job teaching physics at a research university, and even then it took the endorsements of a handful of Nobel and National Medal of Science laureates. Grateful to be back to work, he accepted a position at the University of Colorado. By now, though, the field had long outpaced him, so he limited himself to working on topics only indirectly connected to physics, such as science education.

It was at the University of Colorado that Oppenheimer met a young graduate student named Tom Ingerson. Ingerson had grown up in Texas and had gone on to major in physics at the University of California, Berkeley. He had come to Colorado to work on general relativity, the theory of gravitation that Einstein had proposed in 1915 as an alternative to Newton’s theory. General relativity had brought fame and fortune to its discoverer, but it was overshadowed by the new quantum theory, which attracted far more attention and funds. This didn’t seem to bother Ingerson, who was strong-willed and fiercely independent. He would work on what he liked.

In 1964, Ingerson began to think about finding a job in a physics department. In the 1960s, academia was an old boys’ club in the strongest sense. Jobs at the top universities were filled by calling up famous physicists at famous schools and asking for recommendations — which were then given, in frank and certain terms. The “best men” from schools like Princeton, Harvard, and the University of Michigan were given the best jobs. Lesser men were dependent on the goodwill and reputation of their faculty, though personal connections and called-in favors were usually enough to find a job, especially during this, the heyday of the military-scientific-industrial era. Colorado may not have been in the very highest echelon, but it was up there, and a graduate could be reasonably assured of good employment. Unless, of course, he used the wrong person as a reference.

Ingerson didn’t learn until many years later that his cardinal sin had been mentioning that Frank Oppenheimer would vouch for him. At the time, the physics community’s uniform lack of interest in his application was a mystery to him. None of the employers he contacted wrote back to him until the very end of the school year, and then he heard from only a single school, the old New Mexico Territory’s teaching college, newly retooled as Western New Mexico University. This was how a bright, independent-minded young physicist found himself in Silver City, New Mexico, the sole member of the local university’s physics department.

Perched on the Continental Divide, Silver City was a paradigmatic Western mining town. Built in the wake of a major find by silver prospectors, it was in the middle of what was traditionally Apache territory. Trade and transport were difficult and dangerous, with regular attacks by regional tribes (and local bandits). In 1873, Billy the Kid, then just a teenager, settled in Silver City with his mother and brother — it was there that, in 1875, he was arrested for the first time, for stealing some cheese. Later that year, he would escape from a Silver City jail to begin his life as an outlaw, a fugitive from the Silver City sheriff. By the time Ingerson arrived, the days of cowboys and Indians were over. But Silver City was still a one-horse town. Resigned to making do with the cards he had inexplicably been dealt, Ingerson looked for ways to engage with the Silver City locals.

He started by volunteering with the local Boy Scout troop, which he thought might benefit from his experience as a teacher. It was at his first meeting, the same year that he moved to Silver City, that Ingerson met a pudgy twelve-year-old named Doyne Farmer. Silver City was filled with engineers, attracted by the mining industry. But a scientist was a rarity. Farmer didn’t really know what a physicist did, yet he found Ingerson irresistible. Farmer decided at the meeting that whatever physics was, if Ingerson did it, then Farmer would do it, too. He lingered afterward and then followed Ingerson home. On the way, Farmer announced his newfound career goal.

It was an unlikely friendship. But Farmer and Ingerson were kindred spirits, stuck for different reasons about as far away from the center of the scientific universe as they could be. For Ingerson, Farmer was a welcome diversion, a smart student ready to talk seriously about all sorts of scientific topics. For Farmer, though, Ingerson was pure inspiration, the man who changed the course of his life.

Ingerson soon started a new group, which he called Explorer Post 114, with his home as clubhouse. The Explorer groups were a subsidiary of the Boy Scouts of America, intended for older children to learn by doing. Farmer was the inaugural member of Ingerson’s group, but he was soon joined by others. The Explorers shared some features with the Boy Scouts — they went camping and hiking in the desert — but the real focus was on tinkering and building things, like ham radios and dirt bikes.

Officially, to join an Explorer post one needed to be at least fourteen years old. But one day in 1966, a younger boy was invited to come to a meeting. He had been asked to give a lecture on new radio technology, a topic on which he was apparently an expert. Though he was only twelve, the other Explorers recognized Norman Packard as one of their own, and he was immediately welcomed into the group as the new electronics guru. Unlike Farmer, Packard had known he wanted to be a physicist from an early age. He seemed made for it. After all, it was his precocious expertise that earned him an invitation to the Explorers. Packard and Farmer quickly became friends.

Ingerson lasted for two more years in Silver City before he got a job at the University of Idaho. But in just four years, he had succeeded in shaping the lives of two men who would go on to become world-class physicists. When Ingerson left, Farmer was sixteen and a junior in high school (Packard was two years younger). Bored with Silver City and eager to follow his friend and conquer new territory, Farmer decided to apply to the University of Idaho a year early. He got in, and instead of finishing high school in Silver City, he moved into Ingerson’s attic in Moscow, Idaho, to start his career as a physicist. After a year in Idaho, though, Farmer was ready for greener pastures. In 1970, he transferred to Stanford University. True to his ambitions, he majored in physics — laying the groundwork for a career that would change science, and finance, forever.

The ideas at the heart of Farmer’s and Packard’s work were first developed by a man named Edward Lorenz. As a young boy, Lorenz thought he wanted to be a mathematician. He had a clear talent for mathematics, and when it came time to select a major at Dartmouth, he had few doubts about what he would choose. He graduated in 1938 and went on to Harvard, planning to pursue a PhD. But World War II interfered with his plans: in 1942, he joined the U.S. Army Air Corps. His job was to predict the weather for Allied pilots. He was given this task because of his mathematical background, but at that point, at least, mathematics was of little use in weather forecasting, which was done more on the basis of gut feelings, rules of thumb, and brute luck. Lorenz was sure there was a better way — one that used sophisticated mathematics to make predictions. When he left the service in 1946, Lorenz decided to stick with meteorology. It was a place where he could put his training to productive use.

He went to MIT for a PhD in meteorology and stayed for the rest of his career — first as a graduate student, then as a staff meteorologist, and finally as a professor. He worked on many of the mainstream problems that meteorologists worked on, especially early in his career. But he had some unusual tastes. For one, based on his experience in the army, he maintained an interest in forecasting. This was considered quixotic at best by his colleagues; the poor state of forecasting technology had convinced many that weather prediction was a fool’s errand. Another oddity was that Lorenz thought computers — which, in the 1950s and 1960s, were little more than souped-up adding machines — could be useful in science, and especially in the study of complicated systems like the atmosphere. In particular, he thought that with a big enough computer and careful enough research, it would be possible to come up with a set of equations governing how things like storms and winds developed and changed. You could then use the computer to solve the equations in real time, keeping one step ahead of the actual weather to make accurate predictions long into the future.

Few of his colleagues were persuaded. As a first step, and as an attempt to show his fellow meteorologists that he wasn’t crazy, Lorenz came up with a very simple model for wind. This model drew on the behavior of wind in the real world, but it was highly idealized, with twelve equations governing the way the wind would blow, and with no accounting for seasons, nightfall, or rain. Lorenz wrote a program using a primitive computer — a Royal McBee, one of the very first computers compact enough to be operated by a single user in his own office — that would solve his model’s equations and spit out a handful of numbers corresponding to the magnitude and direction of the prevailing winds as they changed over time. It wasn’t a predictive model of the weather; it was more like a toy climate that incorporated atmospheric phenomena. But it was enough to convince at least some of his colleagues that this was something worth pursuing. Graduate students and junior faculty would come into his office daily to peer into Lorenz’s imaginary world, taking bets on whether the wind would turn north or south, strengthen or weaken, on a given day.

At first, it seemed that Lorenz’s model was a neat proof of concept. It even had some (limited) predictive power: certain patterns seemed to emerge over and over again, with enough regularity that a working meteorologist might be able to look for similar patterns in actual weather data. But the real discovery was an accident. One day, while reviewing his data, Lorenz decided he wanted to look at a stretch of weather more closely. He started the program, plugging in the values for the wind that corresponded to the beginning of the period he was interested in. If things were working as they should, the computer would run the calculations and come up with the same results he had seen before. He set the computer to work and went off for the afternoon.

When he returned a few hours later, it was obvious that something had gone wrong. The numbers coming out of the machine looked nothing like the data he had seen the first time he had run the simulation with these same inputs. He checked the values he had entered — they were correct, exactly what had appeared on his printout. After poking around for a while, he concluded that the computer must be broken.

It was only later that he realized what had really happened. The computer stored each number to six digits of precision. A variable in Lorenz’s mini world was a decimal with six figures, something like .452386. But he had set up the program to print only three digits, to save space on the printouts and make them more readable. So instead of .452386 (say), the computer printout would read .452. When he set up the computer to rerun the simulation, he had started it with the shorter, rounded numbers instead of the full six-digit values that had fully described the state of the system during the first run-through.

This kind of rounding should not have mattered. Imagine you are trying to putt a golf ball. The hole you are aiming for is only slightly larger than the ball itself. And yet, if you miscalculate by a fraction of an inch, and you hit the ball a little too hard or a little too softly, or you aim a little to one side, you would still expect the ball to get close to the hole, even if it doesn’t go in. If you are throwing a baseball, you would expect it to get pretty close to the catcher even if your arm doesn’t extend exactly as you want it to, or even if your fingers slip slightly on the ball. This is how the physical world works: if two objects start in more or less the same physical state, they are going to do more or less the same thing and end up in very similar places. The world is an ordered place. Or at least that’s what everyone thought before Lorenz accidentally discovered chaos.

Lorenz didn’t call it chaos. That word came later, with the work of two mathematicians, Tien-Yien Li and James Yorke, who wrote a paper called “Period Three Implies Chaos.” Lorenz called his discovery “sensitive dependence on initial conditions,” which, though much less sexy, is extremely descriptive, capturing the essence of chaotic behavior. Despite the fact that Lorenz’s system was entirely deterministic, wholly governed by the laws of Lorenzian weather, extremely small differences in the state of the system at a given time would quickly explode into large differences later on. This observation, a result of one of the very first computer simulations in service of a scientific problem, contradicted every classical expectation regarding how things like weather worked. (Lorenz quickly showed that much simpler systems, such as pendulums and water wheels, things that you could build in your basement, also exhibited a sensitivity to initial conditions.)
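
What Lorenz saw is easy to reproduce on a modern machine. The short Python sketch below uses his later three-variable convection system (the textbook Lorenz equations) rather than the twelve-variable weather model described above; the crude integration step, the run length, and the starting values are illustrative choices rather than anything from Lorenz’s own work. Two runs that begin one part in ten thousand apart stay together briefly and then diverge completely.

```python
# Sensitive dependence on initial conditions, illustrated with the standard
# three-variable Lorenz system (sigma = 10, rho = 28, beta = 8/3).
# The Euler step size, run length, and starting points are illustrative choices.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one step using simple Euler integration."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

# Two trajectories whose starting points differ by one part in ten thousand,
# mimicking Lorenz's rounded-off printout.
a = (1.0, 1.0, 1.0)
b = (1.0001, 1.0, 1.0)

for step in range(1, 5001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 1000 == 0:
        gap = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}   separation = {gap:.6f}")
```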

The basic idea of chaos is summed up by another accidental contribution of Lorenz’s: the so-called butterfly effect, which takes its name from a paper that Lorenz gave at the 1972 meeting of the American Association for the Advancement of Science called “Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?” (Lorenz never took credit for the title. He claimed one of the conference organizers came up with it when Lorenz forgot to submit one.)

Lorenz never answered the question asked in the title of his talk, but the implication was clear: a small change in initial conditions can have a huge impact on events down the road. But the real moral is that, even though chaotic systems are deterministic — in the sense that an infinitely precise description at any given instant can in principle lead to an accurate prediction — it is simply impossible to capture the state of the world with such precision. You can never account for all the flaps of all the butterflies across the globe. And even the tiniest errors will quickly explode into enormous differences. The result is that, even though weather is deterministic, it seems random because we can never know enough about butterflies.

Farmer finished his physics degree at Stanford in 1973, although not without a few bumps along the way (after his first year there, he had done poorly enough to be put on academic probation — after which he entertained the possibility of dropping out to open a smoothie shack in San Francisco or maybe smuggle motorcycles). By the end of his college years, however, Farmer had pulled himself together sufficiently to be admitted to a handful of graduate schools for astrophysics. A trip down the California coast was enough to make up his mind, and he decided to attend the new University of California campus in Santa Cruz. Packard, meanwhile, had gone to Reed College, in Portland, Oregon, a school famous for the independent spirit of its undergraduates.

During the summer of 1975, after Packard’s junior year at Reed and Farmer’s second year of graduate school, Packard and Farmer decided to try their hands at gambling. They had explored the idea independently, Farmer through reading A. H. Morehead’s Complete Guide to Winning Poker, and Packard by reading Ed Thorp’s Beat the Dealer. With their analytic minds and disdain for authority, both men found that gambling systems had a certain appeal. They could make money without doing work — and at least in the blackjack case, they could do it by being smarter than everyone else. It was a romantic idea. The trouble was in the execution.

Packard studied Thorp’s system carefully and then, along with a friend from Reed named Jack Biles, he took it to Vegas. They kept careful track of their winnings and losses — and observed an awful lot of wins. Day after day, they would record profits. They would switch to higher-stakes tables as their accumulated capital increased, and the profits would soar even higher. But then something happened. No matter how much success they had, there would always be a losing streak that would bring them back to zero. In the end, they barely broke even. It was only at the very end of a summer of gambling that they realized they were being cheated. In the years since Thorp’s card-counting system had first been introduced, casinos had become very good at identifying — and foiling — card counters, often by simple methods like crooked dealing.

Farmer, meanwhile, had memorized Morehead’s book. But he had never played poker before reading it, so even though he knew what to do in any given situation, he didn’t know how to shuffle cards or handle chips. He dealt like a kindergartener. But the poor mechanics ultimately worked to his advantage: he looked like an easy mark. Playing in the card rooms of Missoula, Montana, under the alias “New Mexico Clem,” Farmer and a friend from Idaho — an accomplice from the motorcycle-smuggling scheme named Dan Browne — cleaned up against the Missoula cowboys. Browne, a more seasoned player who had paid his way through college by gambling in Spokane, Washington, marveled at Farmer’s unlikely success.

At the end of the summer, Farmer and Packard decided to meet up to compare notes on their gambling adventures. Farmer had good news to report: you could make a killing in poker, if you just played by the book. Packard’s experience was less encouraging. But in place of blackjack winnings, he brought something even better: a new idea for a gambling system. Prompted in part by some cryptic remarks that Thorp had made at the end of his book, Packard convinced himself that another game could be beaten more effectively than blackjack (and with less likelihood of casino shenanigans). Packard, like Thorp before him, had an idea about roulette.

Farmer was skeptical, but Packard was persistent and finally convinced Farmer to think about it. Soon enough, Farmer was on board. He, Packard, and Biles spent three days thinking about the problem, working out some initial calculations and getting excited about their newest project. By the time Farmer had to go back to Santa Cruz, the three men had decided to pursue the project. They would build a computer to beat roulette.

In the fall of 1975, Farmer was starting his third year of graduate school. He was supposed to be settling on a dissertation topic and beginning research in astrophysics. Instead, he and Browne began running experiments on a roulette wheel they had bought in Reno, at Paul’s Gaming Devices, the manufacturer rumored to provide the regulation wheels used in Reno and Las Vegas. (Farmer’s thesis advisor, a man named George Blumenthal, had enjoyed his own run as a would-be card counter in Las Vegas. He was tickled enough by Farmer’s project to look the other way as Farmer’s academic research stalled — in fact, after reviewing Farmer’s calculations, he even suggested that there might be a physics dissertation lurking in the roulette project.) Packard and Biles, meanwhile, were back in Portland, working on an electronic clock that could take precise measurements of the ball traveling around the wheel. Along with his work on the roulette project, Packard was finishing college and applying to graduate school. Santa Cruz was at the top of his list. At this stage, even though Packard knew Thorp had thought about beating roulette, no one in the group knew anything about Thorp’s calculations, or about the computer that Thorp and Shannon had tested in Las Vegas. They were reinventing the wheel.

At the end of that academic year, in the spring of 1976, the four gambling men met up in Santa Cruz to put their work together and make a plan for the summer. One of their first pieces of business was to settle on a name for the group. Farmer had recently stumbled on a new word, eudaemonia, while flipping through the dictionary. Central to the ethics of the ancient Greek philosopher Aristotle, eudaemonia was a state of ideal human flourishing. The roulette group took the name Eudaemonic Enterprises, and the members referred to themselves as Eudaemons (Greek for “good spirits”). They rented a professor’s house for the summer and built a tinkerer’s lab, assembling electronics and running experiments on roulette wheels. The Eudaemons independently arrived at the same basic strategy that Thorp and Shannon had used, with two people working the game, one timing the wheel and the other making bets. Ingerson’s legacy was manifest in Farmer and Packard’s conviction that they could build anything. The Eudaemons were an only slightly more grown-up version of Explorer Post 114 (indeed, Ingerson later helped the group in Vegas when they tried to put the scheme into action).

The original four were soon joined by another physicist named John Boyd and a friend of Farmer’s from his undergraduate days, Steve Lawton. Lawton was a humanist, a specialist in utopian literature. His role was to organize a reading group on political fiction. From the start, the group was devoted to a revolutionary mindset. Over the years, as they continued to work on roulette, more and more people joined — gamblers, physicists, computer programmers, utopians. The group thought of themselves as Yippies, members of the countercultural movement founded by Abbie Hoffman and others in 1967 and devoted to undermining the status quo through anarchic pranks they called “Groucho Marxism.” For the Eudaemons, the roulette project was a way to beat the Man and take his money — money they planned to use to build a commune on the Washington coast.

Thorp and Shannon never had much luck with their roulette adventure, on account of frayed wires and nerves. The Eudaemons did better, plugging away at the problem for the better part of five years. Not that they didn’t have their own share of hardware problems. Instead of an earpiece like Thorp wore, the Eudaemons’ first generation of technology sent signals via a vibrating magnet attached to the bettor’s torso, hidden by clothes. One night, the wires on Farmer’s magnet kept coming undone, burning his skin whenever the signal arrived. Every ten minutes he had to jump up from the table and announce something like “Boy, have I got the runs today!” on his way to the men’s room to fix the equipment (this continued until the pit boss followed him in and sat in the next stall until Farmer decided to call it quits for the night). But by the summer of 1978 the computers were running well enough that the team took them to Vegas — and started to profit.

Meanwhile, as the team at Eudaemonic Enterprises continued work on building a better bettor, Farmer, Packard, and some of the others in the group began thinking more about the physics at the heart of the project. They had derived the equations they needed to predict roulette. But thinking about roulette had piqued their interest in a more general problem. Roulette is an example of a dynamical system that exhibits some pretty funky behavior. Most importantly, where the ball lands is sensitive to the initial conditions — much like the toy weather system Lorenz had studied. Working out how to use computers to solve the differential equations necessary to predict roulette had unwittingly put Farmer and Packard at the cutting edge of the newest research in chaos theory. Farmer’s advisor was right that there was a dissertation in the roulette calculations. What he didn’t know was that the dissertation would be part of a rising tide of ideas that would usher in a new age of physics.

In 1977, some of the physicists working on Eudaemonic Enterprises (Farmer and Packard, along with an undergraduate named James Crutchfield and an older graduate student named Robert Shaw) started an informal research group called by turns the Dynamical Systems Collective and the Chaos Cabal. Shaw threw out a nearly finished dissertation to start working on chaos theory full-time; Farmer officially switched away from astrophysics. By the late 1970s, a great deal had been done on chaos theory. Lorenz had discovered many of the basic principles and had then come up with simple examples of chaotic systems and described how they behaved. He was the first person to recognize that there is a kind of order in chaotic systems: if you draw pictures of the paths traced by objects obeying differential equations, they tend to settle down into regular patterns. These patterns are called attractors, because they tend to attract the paths of the objects. In roulette, for instance, the attractors correspond to the pockets of the wheel: whatever trajectory the ball takes, in the long run it will settle down into one of these states. But for other systems, the attractors can be much more complicated. A major contribution to the study of chaos theory was the realization that if a system is chaotic, these attractors have a highly intricate fractal structure.

But despite these foundations, the subject was still young. Work had been done in fits and starts, without any real research center. Normally, graduate work in physics is a collaboration among graduate students, young postdoctoral researchers, and a professor. But chaos theory was still so new that these kinds of research groups didn’t yet exist. You couldn’t go to graduate school to study chaos theory. The Dynamical Systems Collective was an attempt to fix this, by pulling its members through graduate school by their bootstraps. Some of the faculty at Santa Cruz were skeptical about this divergence from the traditional academic curriculum. But the department was new and open to novel ideas, and enough professors were supportive that the four initial members were permitted to guide themselves, collectively, to PhDs in chaos theory.

From the very start, prompted perhaps by the roulette experience, the Dynamical Systems Collective was interested in prediction. It was a novel way of thinking about chaotic systems, which most people were interested in precisely because they seemed so unpredictable. The collective’s most important paper, published in 1980, showed how you could use a stream of data from, say, a sensor placed in the middle of a pipe with water flowing through it to reconstruct what the attractor for the system would have to be. And once you had the attractor, an essential part of trying to understand how a chaotic system would behave over time, you could begin to make some predictions. Previously, attractors were understood as a theoretical tool, something you could get only by solving equations. Packard, Farmer, Shaw, and Crutchfield showed that, in fact, you could figure out this important feature empirically, by looking at how the system actually behaved.
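
The trick at the heart of that paper is now known as delay-coordinate embedding: from a single measured quantity, you build vectors out of the current value and a few time-lagged copies of it, and the cloud of those vectors traces out a shape with the same geometry as the underlying attractor. Below is a minimal Python sketch of the idea; the logistic map stands in for a real sensor stream, and the lag and embedding dimension are illustrative choices (picking them well for real data is a subtle problem in its own right).

```python
# Delay-coordinate embedding: reconstructing attractor geometry from a single
# measured time series. The lag and embedding dimension are illustrative.

def delay_embed(series, dim=3, lag=5):
    """Turn a scalar time series into a list of dim-dimensional delay vectors."""
    vectors = []
    for i in range((dim - 1) * lag, len(series)):
        vectors.append(tuple(series[i - k * lag] for k in range(dim)))
    return vectors

# A toy chaotic signal (the logistic map) standing in for, say, the output of
# a flow sensor in the middle of a pipe.
x, signal = 0.4, []
for _ in range(2000):
    x = 3.9 * x * (1.0 - x)   # logistic map in its chaotic regime
    signal.append(x)

embedded = delay_embed(signal, dim=3, lag=5)
print(len(embedded), "reconstructed points; first:", embedded[0])
```

Plotted in three dimensions, the reconstructed points settle onto a structure whose geometry mirrors the system’s attractor, which is what made empirical prediction thinkable.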

The Dynamical Systems Collective lasted for four years, during which time it made seminal advances in chaos theory and managed to turn years of thinking about roulette into respectable science. But the Eudaemons couldn’t stay in graduate school forever. Farmer graduated in 1981 and immediately went to Los Alamos. Packard left the following year, to take a postdoctoral position in France. Both men were on the verge of turning thirty when they left school. Eudaemonic Enterprises was making money from roulette, but it was ultimately a state of mind, not a way to earn a living.

It was a miracle that either Farmer or Packard got academic jobs, with degrees in chaos in the early 1980s, when few physicists knew what the new theory of dynamical systems was all about, and even fewer recognized it as something worth pursuing. Los Alamos, like Santa Cruz, was far ahead of its time, and Farmer was fortunate to find himself at the center of research in the new field. (Packard had similar luck. After his postdoctoral year in France, he landed positions at the Institute for Advanced Study, in Princeton, New Jersey, and the Center for Complex Systems Research, at the University of Illinois, the other two hotbeds of complex systems research.) Things got even better for Farmer in 1984, when a group of senior scientists at the lab launched a new research center devoted to the study of complex systems, including chaos. The center was called the Santa Fe Institute. Physics would play a central part in the Santa Fe Institute’s research, but the center was designed to be essentially interdisciplinary. Complex systems and chaos arose in physics, in meteorology, in biology, in computer science — and also, the Santa Fe researchers soon realized, in economics.

One theme that characterized much of the research in complexity and chaos during the early 1980s was the idea that simple large-scale structures can emerge from underlying processes that don’t seem to have that structure. To take an example from atmospheric physics, consider that the atmosphere, at the smallest scale, consists of a bunch of gas particles bumping around in the sky. And yet, when one steps back, these mindless particles somehow organize themselves into hurricanes. Similar phenomena occur in biology. Individual ants seem to behave in pretty simple ways, foraging for food, following pheromone trails, building nests. And yet, when one takes these simple actions and interactions in aggregate, they form a colony, something that appears to be more than the sum of its parts. As a whole, an ant colony even appears to be able to adapt to changes in its environment, or the deaths of individual ants. Once these ideas were in the air at Santa Fe, it was a natural leap to ask if the economies of nations and the behavior of markets could also be understood as the collective action of individual people.

The Santa Fe Institute hosted its first conference on economics, entitled “International Finance as a Complex System,” in 1986. Farmer, who at this point was the head of the Complex Systems research group at Los Alamos, was one of a small handful of scientists who were asked to speak. It was his first exposure to economics. The other speakers were from various banks and business schools. These bankers stood up and explained their models to a group of stunned scientists who found the financial models almost childishly simple. The bankers, meanwhile, walked away thinking that they had heard the siren call of the future, though they had virtually no understanding of what was being said. Excited, they urged the institute to host a follow-up conference and invite various luminaries from economics departments at top universities.

The idea behind the second conference was that even if the financiers couldn’t follow the latest advances in physics and computer science, surely the professional economists would be able to. Unfortunately, things didn’t go as planned. Farmer and Packard both spoke, as did various other Santa Fe Institute researchers. The economists, likewise, made their presentations. But there wasn’t much communication. The two groups came from radically different cultures and took too many things for granted. The physicists thought the economists were making everything much too simple. The economists thought the physicists were talking nonsense. The great synthesis of disciplines never occurred.

Undeterred, the institute tried a third time in February 1991. This time, though, the economists stayed at home. Instead, the institute invited practitioners from the banks and investment houses that actually ran the world’s financial markets. The tone of the conference was much more practical and focused on how to create models, test them, and use them to develop trading strategies. The traders proved much less defensive than the economists, and by the end of the conference each group had gained an appreciation of what the other had to offer. Farmer and Packard, in particular, left with a clearer sense of how practical trading strategies worked. They also left with the conviction that they could do better. A month later, they gave notice to their respective employers. It was time to enter the fray.

Building a company is different from building a radio or a motorcycle engine, or even a computer to beat roulette. But many of the same skills prove useful: the vision to see how to pull the pieces together in a new way; a tolerance for tinkering with something until you can make it work; unflagging persistence. Making something new is addictive, which might be why so many entrepreneurs are engineers and scientists.

Farmer and Packard were also motivated by a strong antiestablishmentarian bent stretching back to their days as Eudaemons. The new company wasn’t designed as a first step into the financial world — it was part of a plan to upend it, to take Wall Street for all it was worth by being a little smarter, a little more conniving, than the suits. It was a company founded in much the same spirit as the roulette project, a Yippie adventure and a return to a culture of pure research and no rules. Farmer wore an EAT THE RICH T-shirt to the new company’s first formal meeting, in March 1991.

But there was more at stake here than in roulette. Farmer and Packard wanted the project to work, and they were willing to consider the possibility that real business acumen could be useful to them. So they brought in Jim McGill, a former physicist turned entrepreneur, as a third partner. In 1978, McGill had founded a company called Digital Sound Corp., which specialized in the kinds of microchips necessary to process data from electric musical instruments and microphones, and then later branched out into voice-mail devices. McGill was, at least nominally, the CEO of the Prediction Company, the business face of their Birkenstock-and-blue-jean outfit. Farmer and Packard were perfectly adept at imagining what they would do with, say, a hundred million in capital. McGill’s job was to find someone to give it to them. McGill would be the difference between the Prediction Company and a rerun of Eudaemonic Enterprises.

It quickly turned out, however, that finding would-be investors wasn’t as difficult as the founders imagined it would be. Farmer and Packard had earned reputations during the days of the Santa Fe Institute’s economics conferences. When rumors began flying that Farmer and Packard were leaving academia to take on Wall Street, some influential people took notice. Farmer had to buy a new suit to look presentable for meetings at places like Bank of America and Salomon Brothers. Things got even better after the New York Times Magazine ran a cover article called “Defining the New Plowshares Those Old Swords Will Make,” on how physicists, who had largely been absorbed into the military-industrial complex in the wake of World War II, were branching out as the Cold War came to an end. The article led with the Prediction Company — a perfect tie-in, given Farmer’s history with Los Alamos. After it appeared in print, hundreds of suitors began to call, from rich oil men to Wall Street banks.

The trouble wasn’t getting money. It was what the would-be investors wanted in exchange. Some of the Wall Street outfits were thrilled with the idea of starting a hedge fund based on the Prediction Company’s ideas. But Farmer and Packard didn’t like the idea of traveling the country trying to raise capital, which they would need to do if they were managing a hedge fund. Ideally, they wanted seed money so they could focus on developing the science. Other companies wanted to buy the Prediction Company outright — equally unappealing to a group of men who had just made up their minds to break out of the rat race and start their own business. Some companies were willing to put up capital in exchange for a portion of the proceeds, but they wanted more than just a return on their investment. For instance, David Shaw, a former computer science professor at Columbia who had started his own hedge fund, D. E. Shaw & Co., in 1988, wanted to own the company’s intellectual property in exchange for a few years’ worth of seed money.

Many of these offers were appealing. But Farmer and Packard continued to balk. Nothing felt right. Unfortunately, they couldn’t run an investment firm on the backs of their personal checking accounts forever. As the company’s one-year anniversary approached, in March 1992, the pressure was on to find a deal.

It is tempting to say that Farmer, Packard, and their Prediction Company collaborators “used chaos theory to predict the markets” or something along those lines. In fact, this is how their enterprise is usually characterized. But that isn’t quite right. Farmer and Packard didn’t use chaos theory as a meteorologist or a physicist might. They didn’t do things such as attempt to find the fractal geometry underlying markets, or derive the deterministic laws that govern financial systems.

Instead, the fifteen years that Farmer and Packard spent working on chaos theory gave them an unprecedented (by 1991 standards) understanding of how complex systems work, and the ability to use computers and mathematics in ways that someone trained in economics (or even in most areas of physics) would never have imagined possible. Their experience with chaos theory helped them appreciate how regular patterns — patterns with real predictive power — could be masked by the appearance of randomness. Their experience also showed them how to apply the right statistical measures to identify truly predictive patterns, how to test data against their models of market behavior, and finally how to figure out when those models were no longer doing their job. They were at ease with the statistical properties of fat-tailed distributions and wild randomness, which are characteristic of complex systems in physics as well as financial markets. This meant that they could easily apply some of Mandelbrot’s ideas for risk management in ways that people with more traditional economics training could not.

As far as the Prediction Company was concerned, markets might be chaotic, or not. There might be various degrees of randomness in market behavior. Markets might be governed by simple laws, or by enormously complicated laws, or by laws that changed so fast that they might as well not have been there at all. What the Predictors were doing, rather, was trying to extract small amounts of information from a great deal of noise. It was a search for regularities of the same sort that lots of investors look for: how markets react to economic news like interest rates or employment numbers, how changes in one market manifest themselves in others, how the performances of different industries are intertwined.

One strategy they used was something called statistical arbitrage, which works by betting that certain statistical relationships between stocks will tend to reassert themselves even if they break down briefly. The classic example is pairs trading. Pairs trading works by observing that some companies’ stock prices are usually closely correlated. Consider Pepsi and Coca-Cola. Virtually any news that isn’t company-specific is likely to affect Pepsi’s business in just the same way as Coca-Cola’s, which means that the two stock prices usually track one another. But changes in the two companies’ prices don’t always occur simultaneously, so sometimes the prices get out of whack compared to their long-term behavior. If Pepsi goes up a little bit but Coca-Cola doesn’t, upsetting the usual relationship, you buy Coca-Cola and sell Pepsi, because you have good reason to think that the two prices will soon revert to normal. Farmer and Packard didn’t come up with pairs trading — it was largely pioneered in the 1980s at Morgan Stanley, by an astrophysicist named Nunzio Tartaglia and a computer scientist named Gerry Bamberger — but they did bring a new level of rigor and sophistication to the identification and testing of the statistical relationships underlying the strategy.
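
The bare logic of a pairs trade fits in a few lines of code. The sketch below is an illustration of the general idea, not the Prediction Company’s method: it measures how far the (log) spread between two normally correlated stocks has drifted from its recent average and signals a trade when the drift looks statistically large. The window length and the threshold are invented for the example.

```python
# A bare-bones pairs-trading signal: trade when the spread between two normally
# correlated stocks drifts unusually far from its recent average.
# The window length and z-score threshold are purely illustrative.
import math

def pairs_signal(prices_a, prices_b, window=60, threshold=2.0):
    """Return 'short A / long B', 'long A / short B', or 'no trade'."""
    spread = [math.log(a) - math.log(b) for a, b in zip(prices_a, prices_b)]
    recent = spread[-window:]
    if len(recent) < 2:
        return "no trade"
    mean = sum(recent) / len(recent)
    var = sum((s - mean) ** 2 for s in recent) / (len(recent) - 1)
    z = (spread[-1] - mean) / math.sqrt(var) if var > 0 else 0.0
    if z > threshold:         # stock A looks expensive relative to stock B
        return "short A / long B"
    if z < -threshold:        # stock A looks cheap relative to stock B
        return "long A / short B"
    return "no trade"

# Usage, with hypothetical price histories for a Pepsi-like and a Coke-like stock:
# signal = pairs_signal(pepsi_prices, coke_prices)
```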

This sophistication was purely a function of the tools that Farmer and Packard were able to import from their days in physics. For instance, as a physicist, Packard was at the very forefront of research on a class of computer programs known as genetic algorithms. (An algorithm is just a set of instructions that can be used to solve a particular problem.) Suppose you are trying to identify the ideal conditions under which to perform some experiment. A traditional approach might involve a long search for the perfect answer. This could take many forms, but it would be a direct attack. Genetic algorithms, on the other hand, approach such problems indirectly. You start with a whole bunch of would-be solutions, a wide variety of possible experimental configurations, say, which then compete with one another, like animals vying for resources. The most successful candidate solutions are then broken up and recombined in novel ways to produce a second generation of solutions, which are allowed to compete again. And so on. It’s survival of the fittest, where fitness is determined by some standard of optimality, such as how well an experiment would work under a given set of conditions. It turns out that in many cases, genetic algorithms find optimal or nearly optimal solutions to difficult physics problems very quickly.
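
That recipe can be sketched directly. The toy genetic algorithm below evolves a population of candidate numbers toward the maximum of a simple fitness function; it is a generic skeleton of the technique, not Packard’s code, and every parameter in it is an illustrative choice.

```python
# A skeletal genetic algorithm: candidates compete on a fitness score, and the
# best are recombined and mutated to form the next generation.
# The toy fitness function and all parameters are purely illustrative.
import random

def toy_fitness(x):
    """Toy objective: highest at x = 3."""
    return -(x - 3.0) ** 2

def evolve(fitness, pop_size=50, generations=100, mutation=0.1):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank candidates by fitness and keep the top half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0                 # crossover: blend two parents
            child += random.gauss(0.0, mutation)  # mutation: small random tweak
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(f"best candidate found: {evolve(toy_fitness):.3f}")   # close to 3.0
```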

Physicists in general, and Farmer and Packard in particular, have developed many kinds of optimization algorithms that accomplish the same goals as genetic algorithms by other means, each carefully tailored to a particular task. These algorithms are pattern sleuths: they comb through data, testing millions of models at a time, searching for predictive signals.

But there’s nothing special about physics problems, as far as these algorithms are concerned. They can be applied to any number of different areas — including finance. Suppose you have discovered some strange statistical behavior relating the currency market for Japanese yen with the market for rice futures. It might seem sufficient to observe that if yen go up, then so do rice futures prices. You would then buy rice futures whenever you noticed yen ticking upward. Or else, suppose you have an idea for a possible pairs trade, such as with Pepsi and Coca-Cola.

Notice that in these cases, the basic strategy is clear. But there are all sorts of possibilities compatible with that basic strategy. To be perfectly scientific about the problem, you would want to figure out just how closely correlated the two prices are, and whether the degree of correlation varies with other market conditions. You would also want to think about how much rice to buy and how to time your purchase to be maximally certain that yen were really going up. But trying to come up with a way of relating all of these variables in an optimal way from scratch would be an enormously time-consuming and difficult process, and you could never be sure you’d gotten it right. In the meantime, your opportunity would pass. But if you used a genetic algorithm, you could let thousands of closely related models and trading strategies based on the supposed connection between yen and rice compete with one another. You would soon arrive at an optimal, or nearly optimal, strategy. This is a variety of forecasting, but it doesn’t require you to come up with some complete chaos-theoretic description of markets. It’s much more piecemeal than that.
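
In the trading setting, only the fitness function changes: each candidate is a strategy parameter, say the size of a yen move that triggers a purchase of rice futures, and its fitness is the profit the rule would have earned on historical data. The sketch below reuses the evolve helper from the genetic-algorithm example above; the yen and rice return series, and the strategy itself, are entirely hypothetical.

```python
# Using the genetic algorithm sketched earlier to tune a single trading
# parameter: the yen move (in percent) that triggers a rice futures purchase.
# yen_returns and rice_returns are hypothetical aligned daily return series.

def backtest_profit(threshold, yen_returns, rice_returns):
    """Simulated profit of the rule: buy rice futures the day after a big yen move."""
    profit = 0.0
    for yen_move, next_rice_move in zip(yen_returns[:-1], rice_returns[1:]):
        if yen_move > threshold:
            profit += next_rice_move
    return profit

# Fitness is historical profit; evolve() searches for the best threshold.
# best_threshold = evolve(lambda t: backtest_profit(t, yen_returns, rice_returns))
```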

Another one of the Prediction Company’s ideas was to use many different models at once, each based on different simplified assumptions about the statistical properties of different assets. Farmer and Packard developed algorithms that allowed the different models to “vote” on trades — and then they adopted a strategy only if their models were able to form a consensus that it would likely be successful. Voting may not sound as if it has anything to do with physics, but the underlying idea comes right from Farmer’s and Packard’s days studying complex systems. Allowing many different models to vote identifies which trading strategies are robust, in the sense that they aren’t sensitive to the special details of a particular model. There is a close connection between searching for robust strategies and searching for attractors in a complex system, since attractors are independent of initial conditions.
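
The voting step itself is simple to write down. The sketch below is a schematic of the consensus idea, not the Prediction Company’s actual machinery: several independent models each vote to buy, sell, or abstain, and a trade happens only when a clear majority agrees. The individual models and the two-thirds rule are invented for the example.

```python
# Consensus trading: act only when a clear majority of independent models agree.
# The example votes and the two-thirds majority rule are invented for illustration.

def consensus(votes, required=2 / 3):
    """votes is a list of +1 (buy), -1 (sell), or 0 (abstain) from different models."""
    buys = votes.count(+1)
    sells = votes.count(-1)
    if buys >= required * len(votes):
        return "buy"
    if sells >= required * len(votes):
        return "sell"
    return "no trade"

model_votes = [
    +1,   # e.g., a momentum model sees an uptrend
    +1,   # e.g., a mean-reversion model happens to agree
    0,    # e.g., a news-driven model abstains
]
print(consensus(model_votes))   # "buy": two of three models agree
```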

This kind of modeling, where one uses algorithmic methods to identify optimal strategies, is often called “black box” modeling in the financial industry. Black box models are very different from models like Black-Scholes and its predecessors, whose inner workings are not only transparent but also often provide deep insights into why the models (should) work. Black box models are much more opaque, and as a result they are often scarier, especially to people who don’t understand where they come from or why they should be trusted. Black box models were occasionally used before the Prediction Company came along, but the Prediction Company was one of the very first companies to build an entire business model based on them. It was a whole new way of thinking about trading.

Almost a year into the new company, the senior partners weren’t making any money. An investment firm needs something to invest. Farmer, Packard, and McGill could go only so long without bringing home paychecks — and to make matters worse, they had been funding their team of graduate students and computer hackers out of their own pockets for eight months, since everyone had taken up residence at the Griffin Street office in July 1991. The time for being choosy was coming to an end. The partners knew they didn’t want to sell the company so soon into the adventure, but the idea of being someone else’s hedge fund was starting to look appealing. At least they’d have capital, and they would be (more or less) independent. They had spent months interviewing possible partners, and at this point it was hard to imagine a better solution.

And then, in early March 1992, a miracle happened. Farmer had been invited to give a presentation at an annual computer conference. He had reluctantly agreed to attend, on the basis that Silicon Valley investors would be there and they might be willing to offer some no-strings-attached financing. He gave a talk on the role of computers in prediction, which generated a lot of questions. Afterward, as he was packing up his slides, a man in a suit approached him. He introduced himself as Craig Heimark, a partner at O’Connor and Associates — the firm that had made its first fortune by successfully modifying the Black-Scholes equation to account for fat-tailed distributions, under the guidance of Michael Greenbaum and Clay Struve. By 1991, it was one of the biggest players in the Chicago commodities markets, with a focus on high-tech derivatives trading. The company had six hundred employees and billions of dollars under management. O’Connor wasn’t using nonlinear forecasting, and the Prediction Company wasn’t interested in derivatives. But nonetheless, O’Connor and Associates were the Predictors’ kind of people. In fact, one of O’Connor’s recent hires had been a friend and fellow researcher back in Farmer’s and Packard’s academic days.

Shortly after Farmer and Heimark met, Farmer received a phone call from another O’Connor partner, named David Weinberger. Weinberger had been one of the very first quants, leaving a teaching job in operations research (essentially, a branch of applied mathematics) at Yale to work for Goldman Sachs in 1976, even before Black arrived. He’d moved to O’Connor in 1983, to help that company come up with new strategies as more and more companies got on the Black-Scholes bandwagon. He was one of the few people in the industry, even in the early 1990s, who was both high-powered enough to make a deal and fluent in the language of the scientists running the Prediction Company. He called on a Friday afternoon, from Chicago. On Saturday morning, he was sitting in the Griffin Street office.

O’Connor turned out to be just the kind of firm that the Prediction Company wanted to work with — in large part because the people working at O’Connor were able to understand what Farmer and Packard were doing well enough to evaluate it themselves. Under the deal they ultimately negotiated, the Prediction Company maintained its independence. O’Connor put up the investment capital, in exchange for the majority of the proceeds; it also fronted the Prediction Company the funds it so desperately needed in order to pay salaries and buy equipment in the meantime.

The deal with O’Connor seemed perfect at the time. But it turned out to be even better than the Prediction Company founders had hoped. When O’Connor came knocking on the Prediction Company’s door, it already had a long-running partnership with Swiss Bank Corporation (SBC), a nearly century-and-a-half-old Swiss bank. And then, in 1992, before the ink was dry on O’Connor’s deal with the Prediction Company, SBC announced its intention to buy O’Connor outright. The Prediction Company found itself in a partnership negotiated with its kindred spirits at O’Connor but funded by the much deeper pockets of SBC. Weinberger was given a top management position at SBC and continued as the principal liaison for the Prediction Company. It was an ideal arrangement. The Predictors had hit the big time.

In 1998, SBC merged with the still-larger Union Bank of Switzerland to form UBS, one of the largest banks in the world. Despite the size difference, however, most of the senior positions at UBS went to former SBC managers, and the relationship with the Prediction Company was maintained.

The Prediction Company, following the O’Connor tradition as a secretive high-tech firm, never released any metrics of its success publicly — and none of the former principals or board members with whom I spoke were authorized to share any concrete information. This might seem suspicious. After all, if you’re successful, why hide it? Here, though, the opposite is the case: on Wall Street, success breeds imitation, and the more firms there are implementing a strategy, the less profitable it is for anyone. There are some indications, however, that the Prediction Company has been wildly successful. As one former board member I spoke with pointed out, it is still an active subsidiary of UBS, after more than a decade. Another knowledgeable source told me that, over the firm’s first fifteen years, its risk-adjusted return was almost one hundred times larger than the S&P 500 return over the same period.

Farmer stayed with the firm for about a decade before his passion for research lured him back to academia. He took a position at the Santa Fe Institute as a full-time researcher in 1999. Packard stayed with the company for a few more years, serving as CEO until 2003, when he left to start a new company, called ProtoLife. By the time they left, they had made their point: a firm grasp of statistics and a little creative reappropriation of tools from physics were enough to beat the Man. It was time to tackle a new set of problems.

Black box models, and “algorithmic trading” more generally, have borne much of the backlash against quantitative finance since the 2007–2008 financial crisis. The negative press is not undeserved. Black box models often work, but by definition it is impossible to pinpoint why they work, or to fully predict when they are going to fail. This means that black box modelers don’t have the luxury of guessing when the assumptions behind their models are about to turn bad. In place of that sort of theoretical backing, the reliability of black box models has to be tested constantly, with statistical methods, to determine whether they are still doing what they are intended to do. This can make them seem risky, and in some cases, if used injudiciously, they really are risky. They are easy to abuse, since one can convince oneself that a model that has worked before is a kind of magical device that will continue to work, come what may.
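Here is a minimal sketch, in Python, of the kind of ongoing statistical check described above: compare the model’s recent directional hit rate with what a coin flip would produce, and flag the model once the evidence of skill fades. The window size, threshold, and function names are illustrative choices of mine, not anything the Prediction Company or O’Connor disclosed.

```python
# A hedged illustration: monitor whether a black box signal still beats chance.
# The 250-day window and two-sigma threshold are arbitrary, chosen only to
# show the flavor of the statistical testing described in the text.

import math

def hit_rate_z_score(predicted_signs, realized_signs):
    """Z-score of the directional hit rate against the 50% expected by chance."""
    n = len(predicted_signs)
    hits = sum(1 for p, r in zip(predicted_signs, realized_signs) if p == r)
    rate = hits / n
    std_error = math.sqrt(0.25 / n)  # binomial standard error under the null
    return (rate - 0.5) / std_error

def still_working(predicted_signs, realized_signs, window=250, threshold=2.0):
    """True if the last `window` predictions still beat chance at about two sigma."""
    recent_predicted = predicted_signs[-window:]
    recent_realized = realized_signs[-window:]
    return hit_rate_z_score(recent_predicted, recent_realized) > threshold

# For scale: 140 correct calls out of 250 is roughly a 1.9-sigma result,
# suggestive but not yet strong evidence that the model is doing its job.
```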

In the end, though, data outclass theory. This means that no matter how good the theoretical backing for your (non–black box) model, you ultimately need to evaluate it on the basis of how well it performs. Even the most transparent models need to be constantly tested with the same kinds of statistical methods that are used to evaluate black box models. The clearest example of why this is so is the failure of the Black-Scholes model to account for the volatility smile in the aftermath of the 1987 crash. Theoretical backing for a model can be a double-edged sword: on the one hand, it can help guide practitioners who are trying to understand the limits of the model; on the other, it can lull you into a false sense of confidence that, because you have some theoretical justification for a model, the model must be right. Unfortunately, science doesn’t work this way. And from this latter point of view, black box models have an advantage over more theoretically transparent models, because one is forced to evaluate them on the basis of their actual success, not on one’s beliefs about what ought to be successful.

There’s another worry about black box models, above and beyond their opaqueness. All of the physicists whose work I have discussed thus far, from Bachelier to Black, have argued that markets are unpredictable. Purely random. The only disputes concern the nature of that randomness, and whether price changes are well enough behaved to be described by normal distributions. In the years since Bachelier and Osborne first made the observation, the idea that markets are unpredictable has been elevated to a central tenet of mainstream financial theory, under the umbrella of the efficient market hypothesis.
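What does “purely random” look like in the data? One crude check: if price changes carry no memory, the correlation between each period’s return and the one before it should sit near zero. The minimal Python sketch below runs that check on simulated noise; it is meant only as an illustration of the idea, not as a test of any real market, and none of the numbers are drawn from actual price data.

```python
# A toy check for memory in a return series: lag-one autocorrelation.
# Real studies use far more refined tests; this only illustrates the idea.

import random

def lag_one_autocorrelation(returns):
    """Correlation between consecutive entries of a series."""
    n = len(returns)
    mean = sum(returns) / n
    numerator = sum((returns[i] - mean) * (returns[i - 1] - mean) for i in range(1, n))
    denominator = sum((r - mean) ** 2 for r in returns)
    return numerator / denominator

random.seed(0)
simulated_returns = [random.gauss(0.0, 0.01) for _ in range(5000)]

# For memoryless simulated returns this prints a value very close to zero;
# a persistent, exploitable pattern would show up as a value well away from it.
print(lag_one_autocorrelation(simulated_returns))
```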

And yet the Prediction Company, and dozens of other black box trading groups that have sprung up since, purport to predict how the market will behave, over short periods of time and under special circumstances. The Prediction Company, at least, never worked with derivatives — its models attempted to predict how markets would behave directly, in just the way that many economists (and plenty of investors) would have supposed was impossible. Nonetheless, it was successful.

It’s reasonable to be skeptical about the company’s success. Investing can often come down to luck. That markets are random is not just conventional wisdom in economics departments. There’s an enormous amount of statistical evidence to support it. Then again, the idea that markets are random because they are efficient — in the sense that market prices quickly change to account for all available information concerning the expected future performance of a stock — is not necessarily in conflict with the Prediction Company’s success. It sounds like a paradox. But think about the basis for the efficient market hypothesis. The standard argument goes something like this: Suppose that there were some way to game the markets; that is, suppose that there were some reliable way to predict how prices are going to change over time. Then investors would quickly try to capitalize on that information. If markets are always at a local high in the last week of May, or if they always drop on the Monday following a Giants victory, then as soon as the pattern gets noticed, sophisticated investors will start selling stocks at the end of May and buying them as soon as the Giants win — with the result that prices will drop at the end of May and rise on Mondays after Giants victories, washing out both patterns. Sure enough, every time an economist appears to find an anomalous pattern in market behavior, it seems to correct itself before the next study can be done to confirm it.

Fair enough. This kind of reasoning might make you think that even if markets somehow got out of whack, there are internal processes that would quickly push them back into shape. (Of course, one of the major reasons to think that the efficient market hypothesis is deeply flawed is the apparent presence of speculative bubbles and market crashes. Whether these kinds of large-scale anomalies, where prices seem to become unmoored, are predictable is the subject of the next chapter. Here I am thinking of smaller-scale deviations from perfect efficiency, supposing that such a thing exists.) But what are these internal processes? Well, they involve the actions of so-called sophisticated investors, people who are quick to identify patterns and then adopt trading strategies designed to exploit them. These sophisticated investors are what make the markets random, at least according to the standard line. But they do so by correctly identifying predictive patterns when they arise. Such patterns might disappear quickly. But if you’re the first person to notice one, the self-correction hasn’t happened yet: you’re the one who gets to do the correcting.

What does this mean? It means that even if you take the standard line on efficient markets seriously, there is still a place for sophisticated investors to profit. You just need to be the most sophisticated investor, the one most carefully attuned to market patterns, and the one best equipped to find ways to turn patterns into profit. And for this task, a few decades of experience in extracting information from chaotic systems plus a room full of supercomputers could be a big help. In other words, the Prediction Company succeeded by figuring out how to be the most sophisticated investor as often as possible.

Of course, not everyone buys the idea that markets are efficient. Farmer, for one, has often criticized the idea that markets are unpredictable — and with good reason, since he made his fortune by predicting them. Likewise, wild randomness can be a sign of underlying chaos — which, perhaps counterintuitively, means that there is often enough structure present to make useful predictions. And so whatever your views on markets, there’s a place for the Predictors. It’s no surprise, then, that droves of investors have followed in Farmer’s and Packard’s pioneering footsteps. In the twenty years since the first computers arrived at the door of 123 Griffin Street, black box models have taken hold on Wall Street. They are the principal tool of the quant hedge funds, from D. E. Shaw to Citadel. The prediction business has become an industry.
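What might such a prediction actually look like? The Prediction Company never published its methods, but the textbook version of nonlinear forecasting that Farmer and Packard helped develop can be sketched in a few lines: reconstruct the system’s recent state from a short window of observations, find the moments in the historical record when the state looked most similar, and average what happened next. The Python sketch below does exactly that for a chaotic toy system; the embedding length, neighbor count, and the logistic-map test series are my own illustrative choices, not anything drawn from the firm’s trading models.

```python
# Nearest-neighbor forecasting in a delay embedding: the textbook flavor of
# nonlinear prediction, illustrated on a chaotic toy system (the logistic map).
# Nothing here reflects the Prediction Company's proprietary models.

def embed(series, dimension):
    """Slice a series into overlapping windows (delay vectors)."""
    return [series[i:i + dimension] for i in range(len(series) - dimension)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def forecast_next(series, dimension=3, neighbors=5):
    """Average of what followed the historical states most like the current one."""
    history = embed(series, dimension)   # every past window with a known successor
    followers = series[dimension:]       # the value that followed each window
    current = series[-dimension:]        # the state of the system right now
    ranked = sorted(range(len(history)), key=lambda i: distance(history[i], current))
    return sum(followers[i] for i in ranked[:neighbors]) / neighbors

# The fully chaotic logistic map looks random to simple linear tests, yet because
# the apparent noise is low-dimensional chaos, the forecast lands near the truth.
x, series = 0.4, []
for _ in range(2000):
    series.append(x)
    x = 4.0 * x * (1.0 - x)
print(forecast_next(series), 4.0 * series[-1] * (1.0 - series[-1]))
```

Real markets offer nothing so clean, of course; whatever structure exists there is faint, fleeting, and buried in noise.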