2

Swimming Upstream

MAURY OSBORNE’S MOTHER, AMY OSBORNE, WAS AN AVID GARDENER. She was also a practical woman. Rather than buy commercial fertilizer, she would go out to the horse pastures near her home, in Norfolk, Virginia, to collect manure and bring it back for her garden. And she didn’t approve of idleness. Whenever she caught one of her sons lazing about, she was quick to assign a job: paint the porch, cut the grass, dig a hole to mix up the soil. When Osborne was young, he liked the jobs. Painting and hole-digging were fun enough, and other jobs, like cutting the grass, were unpleasant but better than sitting around doing nothing. Whenever he got bored, he would go to his mother and ask what he could do, and she would give him a job.

One day, she pointed out that the ice truck had just passed. The truck was pulled by a horse, which meant that there would be nice big piles of manure on the road. “So you go and collect that horse manure and mix it up with the hose to make liquid manure and pour it on my chrysanthemums,” she told him. Osborne didn’t much like this assignment. It was the middle of the day and all of his friends were out and about, and when they saw him they yelled out and teased him. Red-faced and fuming, he dutifully collected the manure in a big bucket, then went back to his house. He pulled out the hose, filled the bucket with water, and began to liquefy the manure. It was a gross, smelly job, and Osborne was feeling irritated and embarrassed at having to do it in the first place. Then all of a sudden, as he was stirring, the liquefied manure splashed out of the bucket and soaked him. It was a major turning point: there, covered in fresh horse manure, Osborne decided that he would never ask anyone what to do again — he would figure out what he wanted to do and do that.

As far as his scientific career went, Osborne kept his pledge. He was initially trained as an astronomer, calculating things like the orbits of planets and comets. But he never felt constrained by academic boundaries. Shortly before the United States entered World War II, Osborne left graduate school to work at the Naval Research Lab (NRL) on problems related to underwater sound and explosions. The work had very little to do with astronomical observation, but Osborne thought it would be interesting. Indeed, before the war was over, he took up several different projects. In 1944, for example, he wrote a paper on the aerodynamics of insect wings. At the time, entomologists had no idea how insects could fly: their bodies seemed to be too heavy for the amount of lift generated by flapping wings. Well, Osborne had some time on his hands, and so, instead of asking the navy what he should do, he decided he’d spend his time solving the problem of insect flight. And he succeeded: he showed, for the first time, that if you took into account both the lift produced by insect wings and the drag on the wings, you could come up with a pretty good explanation for why insects can fly and how they control their motion.

After World War II, Osborne went further still. He approached the head of the NRL’s Sound Division, where he still worked, and told him that anyone working for the government could get their work done in two hours a day. Bold words for one’s boss, you might think. But Osborne pressed further. He said that even two hours of work a day was more than he wanted to do for the government. He had a problem of his own that he wanted to work on. Osborne made it clear that this new project had nothing at all to do with naval interests, but he said he wanted to work on it anyway. And amazingly, his boss said, “Go right ahead.”

Osborne remained at the NRL for nearly thirty more years, but from that conversation on, he worked exclusively on his own projects. In most cases, these projects had little or no direct bearing on the navy, and yet the NRL continued to support him throughout his career. The work ran the gamut from foundational problems in general relativity and quantum mechanics to studies of deep ocean currents. But his most influential work, the work for which he is best known today, was on another topic entirely. In 1959, Osborne published a paper entitled “Brownian Motion in the Stock Market.” Though Bachelier had written on this very subject sixty years earlier, his work was still essentially unknown to physicists or financiers (aside from a few people in Samuelson’s circle). To readers of Osborne’s paper, the suggestion that physics had something to say about finance was entirely novel. And it wasn’t long before people in academia and on Wall Street began to take notice.

Any way you look at it, Bachelier’s work was genius. As a physicist, he anticipated some of Einstein’s most influential early work — work that would later be used to definitively prove the existence of atoms and usher in a new era in science and technology. As a mathematician, he developed probability theory and the theory of random processes to such a high level that it would take three decades for other mathematicians to catch up. And as a mathematical analyst of financial markets, Bachelier was simply without peer. It is exceptionally rare in any field for someone to present so mature a theory with so little precedent. In a just world, Bachelier would be to finance what Newton is to physics. But Bachelier’s life was a shambles, in large part because academia couldn’t countenance so original a thinker.

Just a few short decades later, though, Maury Osborne was thriving in a government-sponsored lab. He could work on anything he liked, in whatever style he liked, without facing any of the institutional resistance that plagued Bachelier throughout his career. Bachelier and Osborne had much in common: both were incredibly creative; both had the originality to find questions that hadn’t occurred to previous researchers and the technical skills to make them tractable. But when Osborne happened on the same problem that Bachelier had addressed in his thesis — the problem of predicting stock prices — and proceeded to work out a remarkably similar solution, he did so in a completely different environment. “Brownian Motion in the Stock Market” was an unusual article. But in the United States in 1959, it was acceptable, even encouraged, for a physicist of Osborne’s station to work on such problems. As Osborne put it, “Physicists essentially could do no wrong.” Why had things changed?

Nylon. American women were first introduced to nylon at the 1939 New York World’s Fair, and they were smitten. A year later, on May 15, 1940, when nylon stockings went on sale in New York, 780,000 pairs were sold on the first day, and 40 million pairs by the end of the week. At year’s end, Du Pont, the company that invented and manufactured nylon, had sold 64 million pairs of nylon stockings in the United States alone. Nylon was strong and lightweight. It tended to shed dirt and it was water resistant, unlike silk, which was the preferred material for hosiery before nylon hit the scene. Plus, it was much cheaper than either silk or wool. As the Philadelphia Record put it, nylon was “more revolutionary than [a] martian attack.”

But nylon had revolutionary consequences far beyond women’s fashion. The initiative at Du Pont that led to the invention of nylon — along with a handful of other research programs begun in the 1930s by companies such as Southern California Edison, General Electric, and Sperry Gyroscope Company, and universities such as Stanford and Berkeley — quietly ushered in a new research culture in the United States.

In the mid-1920s, Du Pont was a decentralized organization, with a handful of largely independent departments, each of which had its own large research division. There was also a small central research unit, essentially a holdover from an earlier period in Du Pont’s history, headed by a man named Charles Stine. Stine faced a problem. With so many large, focused research groups at the company, each performing whatever services its respective department required, the case for an additional research body was shaky at best. If the central research unit was going to survive, never mind grow, Stine needed to articulate a mission for it that would justify its existence. The solution he finally came upon and implemented in 1927 was the creation of an elite, fundamental research team within the central research unit. The idea was that many of Du Pont’s industrial departments relied on a core of basic science. But the research teams in these departments were too focused on the immediate needs of their businesses to engage in fundamental research. Stine’s team would work on these orphaned scientific challenges over the long term, laying the foundation for future applied, industrial work. Stine landed a chemist from Harvard, named Wallace Carothers, to head this new initiative.

Carothers and a team of young PhDs spent the next three years exploring and exhaustively documenting the properties of various polymers — chemical compounds composed of many small, identical building blocks (called monomers) strung together like a chain. During these early years, the work proceeded unfettered by commercial considerations. The central research unit at Du Pont functioned as a pure, academic research laboratory. But then, in 1930, Carothers’s team had two major breakthroughs. First, they discovered neoprene, a synthetic rubber. Just a few weeks later, they discovered the world’s first fully synthetic fiber. Suddenly Stine’s fundamental research team had the potential to make real money for the company, fast. Du Pont’s leadership took notice. Stine was promoted to the executive committee and a new man, Elmer Bolton, was put in charge of the unit. Bolton had previously headed research in the organic chemistry department and, in contrast to Stine, he had much less patience for research without clear applications. He quickly moved research on neoprene to his old department, which had considerable experience in rubber, and encouraged Carothers’s team to focus on synthetic fibers. The initial fiber turned out to have some poor properties: it melted at low temperatures and dissolved in water. But by 1934, under pressure from his new boss, Carothers came up with a new idea for a polymer that he thought would be stable when spun into a fiber. Five weeks later, one of his lab assistants produced the first nylon.

Over the next five years, Du Pont embarked on a crash program to scale up production and commercialize the new fiber. Nylon began life as an invention in a pure research lab (even though, under Bolton’s direction, Carothers was looking for such fibers). As such, it represented cutting-edge technology, based on the most advanced chemistry of the time. But it was not long before it was transformed into a commercially viable, industrially produced product. This process was essentially new: as much as nylon represented a major breakthrough in polymer chemistry, Du Pont’s commercialization program was an equally important innovation in the industrialization of basic research. A few important features distinguished the process. First, it required close collaboration among the academic scientists in the central research unit, the industrial scientists in the various departments’ research divisions, and the chemical engineers responsible for building a new plant and actually producing the nylon. As the different teams came together to solve one problem after another, the traditional boundaries between basic and applied research, and between research and engineering, broke down.

Second, Du Pont developed all of the stages of manufacturing of the polymer in parallel. That is, instead of waiting until the team fully understood the first stage of the process (say, the chemical reaction by which the polymer was actually produced) and only then moving on to the next step (say, developing a method for spinning the polymer into a fiber), teams worked on all of these problems at once, each team taking the others’ work as a “black box” that would produce a fixed output by some not-yet-known method. Working in this way further encouraged collaboration between different kinds of scientists and engineers because there was no way to distinguish an initial basic research stage from later implementation and application stages. All of these occurred at once. Finally, Du Pont began by focusing on a single product: women’s hosiery. Other uses of the new fiber, including lingerie and carpets, to name a few, were put off until later. This deepened everyone’s focus, at every level of the organization. By 1939, Du Pont was ready to reveal the product; by 1940, the company could produce enough of it to sell.

The story of nylon shows how the scientific atmosphere at Du Pont changed, first gradually and then rapidly as the 1930s came to a close, to one in which pure and applied work were closely aligned and both were valued. But how did this affect Osborne, who didn’t work at Du Pont? By the time nylon reached shelves in the United States, Europe was already engaged in a growing war effort — and the U.S. government was beginning to realize that it might not be able to remain neutral. In 1939, Einstein wrote a letter to Roosevelt warning that the Germans were likely to develop a nuclear weapon, prompting Roosevelt to launch a research initiative, in collaboration with the United Kingdom, on the military uses of uranium.

After the Japanese attack on Pearl Harbor, on December 7, 1941, and Germany’s declaration of war on the United States four days later, work on nuclear weapons research accelerated rapidly. Work on uranium continued, but in the meantime, a group of physicists working at Berkeley had isolated a new element — plutonium — that could also be used in nuclear weapons and that could, at least in principle, be mass produced more easily than uranium. Early in 1942, Nobel laureate Arthur Compton secretly convened a group of physicists at the University of Chicago, working under the cover of the “Metallurgical Laboratory” (Met Lab), to study this new element and to determine how to incorporate it into a nuclear bomb.

By August 1942, the Met Lab had produced a few milligrams of plutonium. The next month, the Manhattan Project began in earnest: General Leslie Groves of the Army Corps of Engineers was assigned command of the nuclear weapons project; Groves promptly made Berkeley physicist J. Robert Oppenheimer, who had been a central part of the Met Lab’s most important calculations, head of the effort. The Manhattan Project was the single largest scientific endeavor ever embarked on: at its height, it employed 130,000 people, and it cost a total of $2 billion (about $22 billion in today’s dollars). The country’s entire physics community rapidly mobilized for war, with research departments at most major universities taking part in some way, and with many physicists relocating to the new secret research facility at Los Alamos.

Groves had a lot on his plate. But one of the very biggest problems involved scaling up production of plutonium from the few milligrams the Met Lab had produced to a level sufficient for the mass production of bombs. It is difficult to overstate the magnitude of this challenge. Ultimately, sixty thousand people, nearly half of the total staff working on the Manhattan Project, would be devoted to plutonium production. When Groves took over in September 1942, the Stone and Webster Engineering Corporation had already been contracted to build a large-scale plutonium production plant in Hanford, Washington, but Compton, who still ran the Met Lab, didn’t think Stone and Webster was up to the task. Compton voiced his concern, and Groves agreed that Stone and Webster didn’t have the right kind of experience for the job. But then, where could you find a company capable of taking a few milligrams of a brand-new, cutting-edge material and building a production facility that could churn out tons of the stuff, fast?

At the end of September 1942, Groves asked Du Pont to join the project, advising Stone and Webster. Two weeks later, Du Pont agreed to do much more: it took full responsibility for the design, construction, and operation of the Hanford plant. The proposed strategy? Do for plutonium precisely what Du Pont had done for nylon. From the beginning, Elmer Bolton, who had led the just-finished nylon project as head of the central research unit, and several of his closest associates took leadership roles in the plutonium project. And just like nylon, the industrialization of plutonium was an enormous success: in a little over two years, the nylon team ramped up production of plutonium a million-fold.

Implementing the nylon strategy was not a simple task, nor was it perfectly smooth. To produce plutonium on a large scale, you need a full nuclear reactor, which, in 1942, had never been built (though plans were in the works). This meant that, even more than with nylon, new technology and basic science were essential to the development of the Hanford site, which in turn meant that the physicists at the Met Lab felt they had a stake in the project and took Du Pont’s role to be “just” engineering. They believed that as nuclear scientists, they were working at the very pinnacle of human knowledge. As far as they were concerned, industrial scientists and engineers were lesser beings. Needless to say, they did not take well to the new chain of command.

The central problem was that the physicists significantly underestimated the role engineers would have to play in constructing the site. They argued that Du Pont was putting up unnecessary barriers to research by focusing on process and organization. Ironically, this problem was solved by giving the physicists more power over engineering: Compton negotiated with Du Pont to let the Chicago physicists review and sign off on the Du Pont engineers’ blueprints. But once the physicists saw the sheer scale of the project and began to understand just how complex the engineering was going to be, many gained an appreciation of the engineers’ role — and some even got interested in the more difficult problems.

Soon, scientists and engineers were engaged in an active collaboration. And just as the culture at Du Pont had shifted during the nylon project — as the previously firm boundaries between science and engineering began to crumble — the collaboration between physicists and engineers at the Hanford site quickly broke down old disciplinary barriers. In building the plutonium facility, Du Pont effectively exported its research culture to an influential group of theoretical and experimental physicists whose pre- and postwar jobs were at universities, not in industry. And the shift in culture survived. After the war, physicists were accustomed to a different relationship between pure and applied work. It became perfectly acceptable for even top theoretical physicists to work on real-world problems. And equally important, for basic research to be “interesting,” physicists needed to sell their colleagues on its possible applications.

Du Pont’s nylon project wasn’t the only place where a new research culture developed during the 1930s, and the Hanford site and Met Lab weren’t the only government labs at which physicists and engineers were brought into close contact during World War II. Similar changes took place, for similar reasons, at Los Alamos, the Naval Research Lab, the radiation labs at Berkeley and MIT, and in many other places around the country as the needs of industry, and then the military, forced a change in outlook among physicists. By the end of the war, the field had been transformed. No longer could the gentleman-scientist of the late nineteenth or early twentieth century labor under the illusion that his work was above worldly considerations. Physics was now too big and too expensive. The wall between pure physics and applied physics had been demolished.

Born in 1916, Osborne was exceptionally precocious. He finished high school at fifteen, but his parents wouldn’t let him attend college so young, so he spent a year in prep school — which he hated — before going on to the University of Virginia to major in astrophysics. The intellectual independence and broad, innate curiosity that would later characterize his scientific career were apparent early on. After his first year of college, for instance, Osborne decided he’d had enough of studying. So one day that summer, after finishing a job at the McCormick Observatory in Charlottesville, Virginia, he decided to drop out of school. Instead of going back to UVA, he would spend some time doing physical labor. He told his parents his plan, and apparently they knew better than to try to talk him out of it, because they contacted a family friend with a farm in West Virginia and Osborne went there to work for the year. But he was sent home for Christmas, followed shortly by a note from the farm’s owner saying that she had had quite enough of him. Osborne spent the rest of the year pushing a wheelbarrow around Norfolk, helping the director of physical education for the Norfolk school district regrade playgrounds. The year of hard labor convinced Osborne that academic life wasn’t so bad after all. He returned to UVA the following September.

After college, Osborne headed west to Berkeley for a graduate program in astronomy. There he met and worked closely with luminaries in the physics department, including Oppenheimer. This is where Osborne was when war broke out in Europe in 1939. By the spring of 1941, many physicists, Oppenheimer included, were beginning to think about the war effort, including the possible use of nuclear weapons. Osborne saw the writing on the wall. Recognizing that he would likely be drafted, he attempted to enlist — but he was rejected because he wore thick glasses (early in the war effort, recruiters could afford to be picky). So he sent an application to the NRL, which offered him a job in its Sound Division. He packed his bags and headed home to Virginia to work in a government lab at the moment the government was most prepared to support creative, interdisciplinary research.

Osborne began “Brownian Motion in the Stock Market” with a thought experiment. “Let us imagine a statistician,” he wrote, “trained perhaps in astronomy and totally unfamiliar with finance, is handed a page of the Wall Street Journal containing the N.Y. Stock Exchange transactions for a given day.” Osborne began thinking about the stock market around 1956, after his wife, Doris (also an astronomer), had given birth to a second set of twins — the Osbornes’ eighth and ninth children. Osborne decided he had better start thinking about financing the future. One can easily imagine Osborne going down to the store and picking up a copy of the day’s Wall Street Journal. He would have brought it home, sat down at the kitchen table, and opened it to the pages that reported the previous day’s transactions. Here he would have found hundreds, perhaps thousands, of pieces of numerical data, in columns labeled with strange, undefined terms.

The statistician trained in astronomy wouldn’t have known what the labels meant, or how to interpret the data, but that was fine. Numerical data didn’t scare him. After all, he’d seen page after page of data recording the nightly motions of the heavens. The difficulty was figuring out how the numbers related to each other, determining which numbers gave information about which other numbers, and seeing if he could make any predictions. He would, in effect, be building a model from a set of experimental data, which he’d done dozens of other times. So Osborne would have adjusted his glasses, rolled up his sleeves, and dived right in. Lo and behold, he discovered some familiar patterns: the numbers corresponding to price behaved just like a collection of particles, moving randomly in a fluid. As far as Osborne could tell, these numbers could have come from dust exhibiting Brownian motion.

In many ways, Osborne’s first, and most lasting, contribution to the theory of stock market behavior recapitulated Bachelier’s thesis. But there was a big difference. Bachelier argued that from moment to moment stock prices were as likely to go up by a certain small amount as to go down by that same amount. From this he determined that stock prices would have a normal distribution. But Osborne dismissed this idea immediately. (Samuelson did, too — in fact, he called this aspect of Bachelier’s work absurd.) A simple way to test the hypothesis that the probabilities governing future stock prices are determined by a normal distribution would be to select a random collection of stocks and plot their prices. If Bachelier’s hypothesis were correct, one would expect the stock prices to form an approximate bell curve. But when Osborne tried this, he discovered that prices don’t follow a normal distribution at all! In other words, if you looked at the data, Bachelier’s findings were ruled out right away. (To his credit, Bachelier did examine empirical data, but a certain unusual feature of the market for rentes — specifically, that their prices changed very slowly, and never by very much — made his model seem more effective than it actually was.)

So what did Osborne’s price distribution look like? It looked like a hump with a long tail on one side, but virtually no tail on the other side. This shape doesn’t look much like a bell, but it was familiar enough to Osborne. It’s what you get, not if prices themselves are normally distributed, but if the rate of return is normally distributed. The rate of return on a stock can be thought of as the average percentage by which the price changes each instant. Suppose you took $200, deposited $100 in a savings account, and used the other $100 to buy some stock. A year from now, you probably wouldn’t have exactly $200 (you might have more or less), because of interest accrued in the savings account, and because of changes in the price of the stock that you purchased. The rate of return on the stock can be thought of as the interest rate that your bank would have had to pay (or charge) to keep the balances in your two accounts equal. It is a way of capturing the change in the price of a stock relative to its initial price.
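This equivalent-interest-rate idea reduces to a one-line formula: the continuously compounded rate of return is the logarithm of the ratio of final price to initial price. A minimal sketch, using invented prices rather than anything from Osborne's paper:

```python
import math

# Hypothetical numbers for illustration (not from Osborne's paper):
# a stock bought for $100 is worth $120 one year later.
p_initial = 100.0
p_final = 120.0

# The continuously compounded rate of return is the interest rate a
# bank would have to pay so that $100 in a savings account ends up
# matching the stock: p_initial * exp(r) = p_final,
# hence r = ln(p_final / p_initial).
r = math.log(p_final / p_initial)

# Check: compounding $100 at rate r for one year recovers $120.
assert abs(p_initial * math.exp(r) - p_final) < 1e-9
```

Here the rate works out to about 18.2 percent, a bit less than the naive 20 percent, because continuous compounding lets a smaller rate do the same work over the year.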

The rate of return on a stock is related to the change in price by a mathematical operation known as a logarithm. For this reason, if rates of return are normally distributed, the probability distribution of stock prices should be given by something known as a log-normal distribution. (See Figure 2 for what this looks like.) The log-normal distribution was the funny-looking hump with a tail that Osborne found when he plotted actual stock prices. The upshot of this analysis was that it’s the rate of return that undergoes a random walk, and not the price. This observation corrects an immediate, damning problem with Bachelier’s model. If stock prices are normally distributed, with the width of the distribution determined by time, then Bachelier’s model predicts that after a sufficiently long period of time, there would always be a chance that any given stock’s price would become negative. But this is impossible: a stockholder cannot lose more than he or she initially invested. Osborne’s model doesn’t have this problem. No matter how negative the rate of return on a stock becomes, the price itself never becomes negative — it just gets closer and closer to zero.

Figure 2: Osborne argued that rates of return, not prices, are normally distributed. Since price and rate of return are related by a logarithm, Osborne’s model implies that prices should be log-normally distributed. These plots show what these two distributions look like at some time in the future, for a stock whose price is $10 now. Plot (a) is an example of a normal distribution over rates of return, and plot (b) is the associated log-normal distribution for the prices, given those probabilities for rates of return. Note that on this model, rates of return can be negative, but prices never are.

Osborne had another reason for believing that the rate of return, not the price itself, should undergo a random walk. He argued that investors don’t really care about the absolute movement of stocks. Instead, they care about the percentage change. Imagine that you have a stock that is worth $10, and it goes up by $1. You’ve just made 10%. Now imagine the stock is worth $100. If it goes up by $1, you’re happy — but not as happy, since you’ve made only 1%, even though you’ve made a dollar in both cases. If the stock starts at $100, it has to go all the way up to $110 for an investor to be as pleased as if the $10 stock went up to $11. And logarithms respect this relativized valuation: they have the nice property that the difference between log(10) and log(11) is equal to the difference between log(100) and log(110). In other words, the rate of return is the same for a stock that begins at $10 and goes up to $11 as for a stock that begins at $100 and goes up to $110. Statisticians would say that the logarithm of price has an “equal interval” property: the difference between the logarithms of two prices corresponds to the difference in psychological sensation of gain or loss corresponding to the two prices.
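The equal-interval property is easy to verify directly. This sketch checks it for the two moves in the example above, the $10 stock rising to $11 and the $100 stock rising to $110:

```python
import math

# Equal percentage gains correspond to equal differences in log-price.
gain_small = math.log(11) - math.log(10)    # $10 -> $11: a 10% gain
gain_large = math.log(110) - math.log(100)  # $100 -> $110: also a 10% gain

# Both differences equal log(1.1), so the logarithm treats the two
# moves as the same size, just as Osborne's investors do.
assert abs(gain_small - gain_large) < 1e-12
assert abs(gain_small - math.log(1.1)) < 1e-12
```

This is just the algebraic identity log(b) − log(a) = log(b/a): the difference in logarithms depends only on the ratio of the prices, never on their absolute level.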

You might notice that the argument in the last paragraph, which is just the argument Osborne gave in “Brownian Motion in the Stock Market,” has a slightly surprising feature: it says that we should be interested in the logarithms of prices because logarithms of prices better reflect how investors feel about their gains and losses. In other words, it’s not the objective value of the change in a stock price that matters, it’s how an investor reacts to the price change. In fact, Osborne’s motivation for choosing logarithms of price as his primary variable was a psychological principle known as the Weber-Fechner law. The Weber-Fechner law was developed by nineteenth-century psychologists Ernst Weber and Gustav Fechner to explain how subjects react to different physical stimuli. In a series of experiments, Weber asked blindfolded men to hold weights. He would gradually add more weight, and the men were supposed to say when they felt an increase. It turned out that if a subject started out holding a small weight — just a few grams — he could tell when a few more grams were added. But if the subject started out with a larger weight, a few more grams wouldn’t be noticed. The smallest noticeable change was proportional to the starting weight. In other words, the psychological effect of a change in stimulus isn’t determined by the absolute magnitude of the change, but rather by its change relative to the starting point.
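Weber's finding can be written as a toy rule: a change registers only if it exceeds a fixed fraction of the stimulus you start with. The 5 percent "Weber fraction" below is an invented illustrative value, not one of Weber's measurements:

```python
# Toy Weber-Fechner rule: the just-noticeable change in a stimulus is
# proportional to the starting stimulus. The 5% Weber fraction is an
# invented number, used here only for illustration.
WEBER_FRACTION = 0.05

def just_noticeable_change(baseline_grams):
    return WEBER_FRACTION * baseline_grams

# Adding 1 gram to a 10-gram weight clears the threshold (0.5 g)...
assert 1.0 >= just_noticeable_change(10.0)
# ...but the same gram vanishes against 1,000 grams (50 g threshold).
assert 1.0 < just_noticeable_change(1000.0)
```

Substitute dollars for grams and this is exactly Osborne's point: a $1 move registers on a $10 stock and disappears on a $100 one.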

So, as Osborne saw it, the fact that investors seem to care about percentage change rather than absolute change reflected a general psychological fact. More recently, people have criticized mathematical modeling of financial markets using methods from physics on the grounds that the stock market is composed of people, not quarks or pulleys. Physics is fine for billiard balls and inclined planes, even for space travel and nuclear reactors, but as Newton said, it cannot predict the madness of men. This kind of criticism draws heavily on ideas from a field known as behavioral economics, which attempts to understand economics by drawing on psychology and sociology. From this point of view, markets are all about the foibles of human beings — they cannot be reduced to the formulas of physics and mathematics. For this reason alone, Osborne’s argument is historically interesting, and I think telling. It shows that mathematical modeling of financial markets is not only consistent with thinking about markets in terms of the psychology of investors, but that the best mathematical models will be ones that, like Osborne’s and unlike Bachelier’s, take psychology into account. Of course, Osborne’s psychology was primitive, even by the standards of 1959. (The Weber-Fechner law was already a century old when Osborne applied it, and much subsequent research had been conducted on how human subjects register change.) Modern economics can draw on far more sophisticated theories of psychology than the Weber-Fechner law, and later in the book we will see some examples where it has. But bringing in new insights from psychology and related fields only strengthens our ability to use mathematics to reliably model financial markets, by guiding us to make more realistic assumptions and by helping us identify situations where the current crop of models might be expected to fail.

Osborne was accustomed to working with the very finest physicists of his day, and he could not be cowed by authority. If he worked out the solution to a problem, or if he believed he understood something, he argued his case forcefully. In early 1946, for instance, Osborne became interested in relativity theory. To learn as much about the theory as he could, he picked up a book by Einstein, The Meaning of Relativity, in which Einstein offered an argument about how much dark matter could exist in the universe. Dark matter — literally, stuff in the universe that doesn’t seem to emit or reflect light, which means that we can’t see it directly — was first discovered in the 1930s, by its effects on the rotation of galaxies. Devotees of popular physics know that today, dark matter is one of the most puzzling mysteries in all of cosmology. Observations of other galaxies suggest that the vast majority of the matter in the universe is unobservable, something that is not explained by any of our best physical theories.

Einstein proposed a simple way of estimating a lower bound on the total amount of dark matter in the universe. He argued that the density of dark matter in the universe as a whole was at least as much as the density within a galaxy (or rather, a group of galaxies known as a cluster). Osborne decided he didn’t buy the argument. For one, Einstein seemed to be making a series of bad assumptions. Worse still, the best evidence that anyone had in 1946 showed that most dark matter was restricted to certain parts of a galaxy, with basically no dark matter in empty space (this still seems to be true). So if anything, you should expect the density of dark matter to be higher in a galaxy than in space as a whole.

By 1946, most people, if they disagreed with an argument of Einstein’s pertaining to relativity and astrophysics, would assume they had misunderstood something. Einstein was already a cultural icon. But Osborne took no heed of such things. When he understood something, he understood it, and no amount of reputation or authority could intimidate him. So Osborne wrote Einstein a letter in which he very politely suggested that Einstein’s argument didn’t make any sense. Einstein replied by restating his argument from the book. So Osborne wrote again. Einstein conceded that his argument was problematic but thought the conclusion remained sound, and so he offered another argument. Once again, Osborne refuted it. At the end of a half-dozen-letter correspondence, it was clear that Einstein was unconvinced by Osborne. But it was equally clear to Osborne that Einstein’s argument in the book failed, and that he didn’t have any other good arguments up his sleeve.* [*I think most physicists today, if they read the letters, would say that Osborne got the better of the exchange.]

Osborne approached his work in economics in the same spirit. Unconcerned about his lack of background in economics or finance, Osborne presented his research with an engineer’s confidence. He published “Brownian Motion in the Stock Market” in a journal called Operations Research. It was not an economics journal, but enough economists and economically minded mathematicians read it that Osborne’s research quickly garnered attention. Some of this was positive, but it was not unambiguously so. Indeed, when Osborne published his first paper on finance, he was unaware of Bachelier or Samuelson, or any of a handful of economists who had, in one way or another, anticipated the idea that stock prices are random. Many economists pointed out his lack of originality — so many that Osborne was forced to publish a second paper just a few months after the first, in which he presented a brief history of the idea that markets are random, giving full credit to Bachelier for coming up with the idea first, but also defending his own formulation.

Osborne stood his ground, and rightfully so. Despite connections with earlier work, his papers on randomness in the stock market were sufficiently original that Samuelson later gave him credit for developing the modern version of the random walk hypothesis at the same time that Samuelson and his students were working on it. More importantly still, Osborne approached his model as a true empirical scientist, trained to handle data. He developed and applied a series of statistical tests designed to corroborate his version of the Brownian motion model. Other researchers, such as the statistician Maurice Kendall, who in 1953 showed that stock prices were as likely to go up as to go down, had done empirical work on the randomness of stock prices. But Osborne was the first to demonstrate the importance of the log-normal distribution to markets. He was also the first to clearly articulate a model for how stock market randomness worked and how it could be used to derive probabilities for future prices (and rates of return), all while providing convincing data that this particular model of the markets captured how markets really behave. And despite the early reservations about Osborne’s originality, economists soon recognized that he brought theory and evidence together in a way that simply hadn’t been done before. When Paul Cootner at MIT collected the most important papers on the random walk hypothesis for his 1964 volume — the volume that contained the first English translation of Bachelier’s thesis — he included two papers by Osborne. One was the 1959 paper on Brownian motion; the other was a paper that expanded on and generalized the earlier work.

By the time Osborne began thinking about markets, he had published fifteen papers in physics and related topics. He had held a permanent position at the NRL for a decade and a half and had rubbed shoulders with some of the best physicists of the mid-twentieth century, as both colleague and correspondent. And yet, Osborne still didn’t have a PhD, in physics or in anything else. He had left grad school in 1941 to join the NRL without finishing his degree. On one level, a doctorate didn’t mean much for a person like Osborne; he had a fulfilling career in physics even without a doctorate, and no one seemed to doubt his credentials as a researcher. His work spoke for itself. He decided, however, during the mid-fifties, that he wanted to finish his degree, at least in part because it would guarantee him a promotion at the NRL. And so Osborne followed many of his colleagues at the NRL to the physics department at the University of Maryland. There he could finish his graduate work without giving up his position at the lab.

Osborne’s first attempt at a dissertation was on a topic in astronomy. (Usually graduate students write a dissertation proposal. Osborne ignored this step. He wrote entire dissertations.) He brought the dissertation to the physics department head, who promptly rejected it because too many people were interested in the topic and Osborne’s research wasn’t original enough. So Osborne wrote a second dissertation, based on his research on the stock market. The department head rejected this, too, on the grounds that it wasn’t physics. As Osborne would later put it, “You are supposed to do original research, but if you get too original, they don’t know what’s going on.” Stock market research may have been acceptable work for a physicist within the government research community, where applied work of any stripe was highly valued. But it still wasn’t “physics” from the perspective of a traditional academic department. And so, though Osborne was received more favorably by the scientific community than Bachelier, he was still something of a maverick for working on financial modeling.

Even after having two dissertations rejected, Osborne wasn’t ready to give up. He sent “Brownian Motion in the Stock Market” off to Operations Research and set to writing a third dissertation. For this project, he returned to a problem he had been working on just before he began to think about the stock market. The third idea concerned the migratory efficiency of salmon. Salmon spend most of their lives in the ocean. But when it comes time to breed, they return to their birthplaces, often up to a thousand miles upstream, to spawn and die. And after leaving the ocean, they no longer eat. Osborne realized that this meant one could figure out how efficiently a salmon swims by looking at the distance it traveled and the fat it had lost by the time it arrived. The idea was to think of a salmon as a boat traveling a certain distance without refueling.

When he finished this third dissertation and submitted it, he again received a lukewarm reaction. It was not clear that this third dissertation was any more “physics” than the second one had been. Ultimately, however, the dissertation was accepted. The university was in the process of applying for a large grant in biophysics (the study of the physics of biological systems), and the administration wanted to have evidence of expertise in that field. And so, in 1959, almost twenty years after he had first moved to the NRL and the same year that “Brownian Motion in the Stock Market” appeared in print, Osborne finally received a doctorate (and a much-deserved promotion at the NRL).

The work on migratory salmon bears a surprising connection to Osborne’s work on financial markets. His model of how salmon swim upriver included analysis at several different time scales. There were effects corresponding to how well the salmon were able to swim over short distances, which depended on things like the strength of the current in the river at a given moment. There were also effects that you couldn’t see clearly just by looking at a salmon swimming for a few feet or yards but became apparent when you looked at a salmon traveling over, say, a thousand miles. The first kind of effect might be called “fast” fluctuations in the salmon’s efficiency; the second might be called “slow” fluctuations. The trouble was that the data were much better on the slow fluctuations. It’s easy to record how many salmon, roughly, have reached a given point at a given time; it is much harder to record just how well any given salmon is making headway as a river’s current changes.

Osborne had worked out a theoretical model that tried to explain both the slow and fast fluctuations, and to show how they related to each other. And he wanted to figure out a way to test the model. Getting better data on individual salmon would have been one way to do this — but it would have been difficult, and Osborne didn’t have any idea where to start. A second possibility was to find another system that might show both the fast and slow fluctuations that Osborne wanted to study, to see if the same model described that system as well. This second option seemed much more appealing, but Osborne needed an appropriate system. When he sat down to figure out how to understand the stock quotes in the Wall Street Journal, he soon realized that markets, too, have different scales of fluctuations. Some market forces, like the details of how an exchange works or the interactions of traders, can affect how prices change over the course of a day. These are like the fast fluctuations that salmon experience from one river bend to the next. But there are other forces affecting markets, things like business cycles and government interest rates, that become apparent only when you step back and look at a longer time period. These are slow fluctuations. It turned out the financial world was the perfect place to look for data that could be used to test Osborne’s ideas about how these different kinds of fluctuations affect one another.

The process worked in the other direction, too. After developing the migratory salmon model in the context of stock market prices, and after tweaking the model to better fit the data he had used to test it, he applied it to a problem in physics. Osborne proposed a new model for deep ocean currents. Specifically, he was able to explain how the random motion of water molecules (fast fluctuations in the language of the salmon paper) could give rise to variations in apparently systematic large-scale phenomena, like currents (slow fluctuations). For Osborne, work in physics and finance were intrinsically linked.

It is tempting to overstate both the reception of Osborne’s work and his direct influence, because, as we shall see, his ideas would ultimately revolutionize financial markets. Still, his work did not make the splash on Wall Street that more developed versions of his ideas would, in the hands of other researchers just a short time later. Osborne was a transitional figure. He was read widely by academics and some theoretically minded practitioners, but Wall Street was not yet ready to move firmly in the direction that Osborne’s work suggested. In part the difficulty was that Osborne believed that his model of market randomness implied that it was impossible to predict how individual stock prices would change with time; unlike Bachelier, Osborne didn’t connect his work to options, where understanding the statistical properties of markets can help you identify when options are correctly priced. Indeed, reading “Brownian Motion in the Stock Market” and Osborne’s later work, one gets the sense that there is no way to profit from the stock market. Prices are unpredictable. The speculator’s average gain is zero. Investing is a losing proposition.

Later, people would look at Osborne’s work and see something more optimistic. If you know that stock prices are essentially random, then, as Bachelier pointed out, you can figure out the value of options or other derivatives based on those stocks. Osborne didn’t take his work in this direction — at least, not until the late 1970s, when others had already made similar moves. Instead, he spent much of the rest of his career trying to figure out the ways in which stock prices aren’t random. In other words, after tying himself to the enormously controversial claim that stock prices represent “unrelieved bedlam” (his words, in many of his articles), Osborne systematically and exhaustively searched for order and predictability.

He had some limited success. He showed that the volume of trading — the number of trades that take place in any given stretch of time — isn’t constant, as one would naively assume in a Brownian motion model. Instead, there are peaks in volume at the beginning and end of a trading day, over the course of an average trading week, and over the course of a month. (All of these variations, incidentally, represent just the kind of “slow fluctuations” Osborne had explored with his migratory salmon — applied not to prices, but to numbers of trades.) These variations arise from what Osborne took to be another principle of market psychology, that investors have limited attention spans. They get interested in a stock, they make a lot of trades and send the volume of trades way up, and then they gradually stop paying attention and volume decreases. If you allow for variations in volume, you have to change the underlying assumptions of the random walk model, and you get a new, more accurate model of how stock prices evolve, which Osborne called the “extended Brownian motion” model.

In the mid-sixties, Osborne and a collaborator showed that at any instant, the chances that a stock will go up are not necessarily the same as the chances that the stock will go down. This assumption, you’ll recall, was an essential part of the Brownian motion model, where a step in one direction is assumed to be just as likely as a step in the other. Osborne showed that if a stock went up a little bit, its next motion was much more likely to be a move back down than another move up. Likewise, if a stock went down, it was much more likely to go up in value in its next change. That is, from moment to moment the market is much more likely to reverse itself than to continue on a trend. But there was another side to this coin. If a stock moved in the same direction twice, it was much more likely to continue in that direction than if it had moved in a given direction only once. Osborne argued that the infrastructure of the trading floor was responsible for this kind of non-randomness, and Osborne went on to suggest a model for how prices change that took this kind of behavior into account.
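The reversal tendency described above can be made concrete with a toy simulation. The sketch below is purely illustrative (the 75 percent reversal rate is an invented parameter, not Osborne’s measurement): it generates a sequence of up and down ticks in which each move tends to reverse the last one, then tallies reversals against continuations.

```python
import random

random.seed(1)  # make the toy simulation reproducible

def transition_counts(moves):
    """Count reversals and continuations in a sequence of +1/-1 ticks."""
    reversals = continuations = 0
    for prev, curr in zip(moves, moves[1:]):
        if curr == -prev:
            reversals += 1
        else:
            continuations += 1
    return reversals, continuations

# Generate ticks that reverse direction 75% of the time, a stand-in
# for the moment-to-moment anti-trending behavior Osborne reported.
moves = [1]
for _ in range(10_000):
    moves.append(-moves[-1] if random.random() < 0.75 else moves[-1])

rev, cont = transition_counts(moves)
# With this mechanism, reversals outnumber continuations roughly 3 to 1.
```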

This was a hallmark of Osborne’s work, and it was one of the reasons he’s such an important figure in the story of physics and finance. The idea that prices are equally likely to move up or down was part of Osborne’s version of the efficient market hypothesis, a central assumption of his original model. When he realized this assumption didn’t hold, he began to look for ways to tweak the model to account for a more realistic assumption, based on what he had learned about real markets. Osborne was explicit from the beginning that this was his methodology, in keeping with the kinds of theoretical work he was familiar with in astronomy and fluid dynamics. In those fields, most problems are much too hard to solve all at once. Instead, you begin by studying the data and then make simplifying assumptions to derive simple models. But this is only the first step. Next, you check carefully to find places where your simplifying assumptions break down and try to figure out, again by focusing on the data, how these failures of your assumptions produce problems for the model’s predictions.

When Osborne described his original Brownian motion model, he specifically indicated what assumptions he was making. He pointed out that if the assumptions were no good, there was no guarantee that the model would be, either. What Osborne and other physicists understood was that a model isn’t “flawed” when the assumptions underlying it fail. But it does mean you have more work to do. Once you’ve proposed a model, the next step is to figure out when the assumptions fail and how badly. And if you discover that the assumptions fail regularly, or under specific circumstances, you try to understand the ways in which they fail and the reasons for the failures. (For instance, Osborne showed that price changes aren’t independent. This is especially true during market crashes, when a series of downward ticks makes it very likely that prices will continue to fall. When this kind of herding effect is present, even Osborne’s extended Brownian motion model is going to be an unreliable guide.) The model-building process involves constantly updating your best models and theories in light of new evidence, pulling yourself up by the bootstraps as you progressively understand whatever you’re studying — be it cells, hurricanes, or stock prices.

Not everyone who has worked with mathematical models in finance has been as sensitive to the importance of this methodology as Osborne was, which is one of the principal reasons why mathematical models have sometimes been associated with financial ruin. If you continue to trade based on a model whose assumptions have ceased to be met by the market, and you lose money, it is hardly a failure of the model. It’s like attaching a car engine to a plane and being disappointed when it doesn’t fly.

Despite the patterns in stock prices that Osborne was able to discover, he remained convinced that in general, there was no reliable way to make profitable forecasts about future market behavior. There was, however, one exception. Ironically, it had nothing to do with the sophisticated models that he developed during the 1960s. Instead, his optimism was based on a way of reading the mind of the markets, by studying the behavior of traders.

Osborne noticed that a great preponderance of ordinary investors placed their orders at whole-number prices — $10 or $11, say. But stocks were valued in units of 1/8 of a dollar. This meant that a trader could look at his book and see that there were a lot of people who wanted to buy a stock at, say, $10. He could then buy it at $10 1/8, knowing that at the end of the day the stock wouldn’t drop below $10 because there were so many people willing to buy at that threshold. So at worst, the trader would lose $1/8; at best, the stock would go up, and he could make a lot. Conversely, he could see that a lot of people wanted to sell at, say, $11, and so he could sell at $10 7/8 with confidence that the most he could lose would be $1/8 if the stock went up instead of down. This meant that if you went through a day’s trades and looked for trades at $1/8 above or below whole-dollar amounts, you could gather which stocks the experts thought were “hot” because so many other people were interested.
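The arithmetic of this asymmetric bet is simple enough to spell out. The sketch below is a hypothetical illustration with invented prices, using exact eighths of a dollar as stocks were then quoted.

```python
from fractions import Fraction

EIGHTH = Fraction(1, 8)

# Many standing buy orders sit at the whole-dollar price of $10,
# forming a floor the stock is unlikely to fall through.
support = Fraction(10)
buy_price = support + EIGHTH   # the trader buys at $10 1/8

# Worst case: the stock sinks to the floor and the trader loses 1/8.
max_loss = buy_price - support

# If instead the stock rises to, say, $11, the gain dwarfs the risk.
gain = Fraction(11) - buy_price

print(f"worst-case loss per share: ${float(max_loss)}")  # $0.125
print(f"gain if it rises to $11:  ${float(gain)}")       # $0.875
```

The appeal of the position is the lopsided payoff: risking an eighth of a dollar against a potential gain many times larger.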

It turned out that what the experts thought was hot was a great indicator of how stocks would do — a much better indicator than anything else Osborne had studied. Based on these observations, Osborne proposed the first trading program of a sort that could be plugged into a computer to run on its own. But in 1966, when he came up with the idea, no one was using computers to make decisions. It would take decades for Osborne’s idea and others like it to be tested in the real world.