8

A New Manhattan Project

ANOTHER DEBATE. Pia Malaney put her arms on the table and leaned in to listen to her fiancé, Eric Weinstein. Weinstein was a postdoctoral researcher at MIT who had recently finished a PhD in mathematics at Harvard. They were sitting in a bar in Cambridge, Massachusetts, where Weinstein was holding forth on how the ideas used in his dissertation could be applied to hers. The trouble was that his work had been on an application of abstract geometry to mathematical physics. Her work, meanwhile, was in economics. The two projects seemed as different as could be. She sighed as she recalled, with a sense of irony, how much easier these discussions had been before she had won him over to her side.

Malaney had met Weinstein in 1988, while he was a graduate student and she was an undergraduate economics major at Wellesley, the women’s college located just outside of Boston. Back then, Weinstein had a dim view of economics — a view shared by many of his mathematician colleagues. He thought it consisted of mathematically simple theories that couldn’t hope to capture the full complexity of human behavior. Weinstein would get a rise out of friends in the economics department by calling their field “cocktail party conversation”: unsubstantial, trivial. He would happily have admitted that he didn’t know much about economics, because, after all, there wasn’t much to know.

Malaney was not fond of the view frequently espoused by her fiancé. For years, she steadfastly defended her colleagues’ work against Weinstein’s attacks.

And then one day, she found she had convinced him. All of a sudden, he went from trying to tell her that economics was worthless to declaring that they should collaborate. All Weinstein could talk about was how, with his training in mathematics and physics and her training in economics, they could tackle all sorts of problems that had stumped economists in the past. Her goal all along had been to get her boyfriend to read enough economics to see that there was substance behind it. Now, though, it was Malaney who found herself wading into the world of mathematical physics. It was not what she had bargained for.

Still, she couldn’t deny that their collaboration was already proving fruitful. They had begun to focus on something called the index number problem. The problem concerns how to take complex information about the world, such as information about the cost and quality of various goods, and turn it into a single number that can be used to compare, say, a country’s economic health at one time with its health at another. Some familiar examples are market indices like the Dow Jones Industrial Average or the S&P 500. These are numbers that are supposed to encode all of the complicated information about the state of the U.S. stock market. Another index that one often hears about is the Consumer Price Index (CPI), which is supposed to be a number that captures information about the cost of the ordinary things that a person living in a U.S. city buys, such as food and housing. Index numbers are crucially important for economic policy because they provide a standard to compare economic indicators over time, and from place to place. (The Economist magazine has proposed a particularly straightforward index, called the Big Mac Index. The idea is that the value of a Big Mac hamburger from McDonald’s is a reliable constant that can be used to compare the value of money in different countries and at different times.)
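For readers who want to see the arithmetic, here is a toy version of the Big Mac comparison in Python. The burger prices and the exchange rate below are invented for illustration; the real index is compiled from surveyed prices, but the logic is the same.

```python
# A toy version of the Big Mac Index (all numbers invented for illustration).
# If a Big Mac "should" cost the same everywhere, then comparing its local
# price with the market exchange rate gives a rough read on a currency's value.
bigmac_usd = 4.20        # hypothetical U.S. price, in dollars
bigmac_gbp = 2.70        # hypothetical U.K. price, in pounds
market_rate = 0.62       # hypothetical exchange rate, pounds per dollar

uk_burger_in_dollars = bigmac_gbp / market_rate
premium = 100.0 * (uk_burger_in_dollars / bigmac_usd - 1)
print(f"A U.K. Big Mac costs ${uk_burger_in_dollars:.2f} at market rates, "
      f"{premium:+.1f}% versus the U.S. price.")
```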

Together, Malaney and Weinstein developed an entirely novel way of solving the index number problem by adapting a tool from mathematical physics known as gauge theory. (The early mathematical development of modern gauge theory — the topic on which Weinstein wrote his dissertation — was largely the work of Jim Simons, the mathematical physicist turned hedge fund manager who founded Renaissance Technologies in the 1980s.) Gauge theories use geometry to compare apparently incomparable physical quantities. This, Malaney and Weinstein argued, was precisely what was at issue in the index number problem — although there, instead of incomparable physical quantities, one was trying to compare different economic variables.

It was an unusual, highly technical way of thinking about economics. This made Malaney a little nervous, since she didn’t know how economists unaccustomed to such high-level mathematical analysis would react. But she decided to pursue the project for her dissertation after she showed it to her advisor, a superstar in the Harvard economics department named Eric Maskin. (He would go on to win the 2007 Nobel Prize in economics, for work he had already done before meeting Malaney.) Maskin told her the idea was great. He believed she’d made real progress on an important topic, one with long-term political and economic implications. She finished the dissertation during the summer of 1996 and began to think about applying for tenure-track jobs at top research universities. With such a groundbreaking thesis topic and the support of her advisor, she had every reason to think she’d be a competitive candidate for these highly desirable positions. She was living the academic dream.

How much is money worth? This might seem like an odd question. For most people, money doesn’t have intrinsic value. The value of money comes from what you can do with it. Perhaps money can’t buy you love, but it sure can buy you orange juice, or a pair of pants, or a new car. And over time, the amount of money it takes to buy that same orange juice, pair of pants, or new car changes. Usually, goods become more expensive over time (at least if you look at the price tags alone); grandparents the world over will tell you how little a chocolate bar used to cost, or a movie ticket. A nickel, we’re told, went a lot farther in 1950 than it does now. This decrease in the value of money over time is what we usually call inflation.

But how do you measure inflation? It’s not as though all prices go up evenly across the board. Even as some goods have become more expensive with time, others have become cheaper. Consider that the price tag for an Apple II, one of the first mass-produced personal computers, with a breakneck processor speed of 1 MHz and a whopping 48 KB of memory, was $2,638 when it first went on sale in 1977. Nowadays, almost thirty-five years later, you can get a desktop computer with a processor over three thousand times as fast, and with a hundred thousand times more memory, for a fraction of that — just a few hundred dollars. So what if chocolate is more expensive: computing power is now dirt cheap by 1970s standards.

One way in which economists deal with this problem is by looking at how prices change across a broad range of products. They do this by tracking the price of what is called a standard market basket: an imaginary shopping cart filled with groceries and household commodities like gasoline and heating oil, as well as services like education, medical care, and housing. This is what’s used to calculate the CPI, which is effectively the average price of the various goods and services in the cart. By looking at price changes for many different items in this way, you can get a rough estimate of how far a dollar (or a euro, or a yen) goes today, as compared to sometime in the past. Gasoline prices might spike over the course of a few months, while computer prices might drop gradually over a few years, but the change in the standard market basket is supposed to be a relatively stable indication of how much spending power changes with time.
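If you want to see the arithmetic behind a fixed-basket index, here is a minimal sketch in Python. The goods, quantities, and prices are all invented, and the method is far cruder than what the Bureau of Labor Statistics actually does, but it shows how many separate price changes collapse into one number.

```python
# A toy fixed-basket price index (all goods, quantities, and prices invented).
basket = {                        # what a hypothetical household buys per month
    "gasoline (gallons)": 60,
    "groceries (carts)": 4,
    "rent (months)": 1,
    "computer (amortized share)": 0.02,
}
prices_base = {"gasoline (gallons)": 3.00, "groceries (carts)": 100.0,
               "rent (months)": 1200.0, "computer (amortized share)": 900.0}
prices_now = {"gasoline (gallons)": 3.90, "groceries (carts)": 105.0,
              "rent (months)": 1300.0, "computer (amortized share)": 500.0}

def basket_cost(prices):
    """Total cost of the fixed basket at the given prices."""
    return sum(quantity * prices[good] for good, quantity in basket.items())

# Price the *same* basket in both periods and take the ratio (times 100).
index = 100.0 * basket_cost(prices_now) / basket_cost(prices_base)
print(f"Price index: {index:.1f} "
      f"(roughly {index - 100:.1f}% inflation since the base period)")
```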

Given the role that the CPI plays in calculating things like inflation, it’s important to get it right. Unfortunately, this is a difficult thing to do. For one, what should go into the market basket? People with different lifestyles often spend their money very differently: a family with children living in upstate New York buys very different things (for instance, winter coats and heating oil) from a single man living in Southern California (surfboards?); farmers in Iowa have different needs and preferences from coal workers in West Virginia. It is hard to see how a single market basket could reflect the full variation of these different lifestyles. For this reason, the U.S. Bureau of Labor Statistics, which calculates the CPI in America, actually produces many different indices, corresponding to people working in different industries, living in different areas, and so forth.

But this kind of variability hints at a deeper problem. If the things that a person or family buys can vary from family to family, or from place to place, these kinds of preferences can presumably vary with time, too. This can happen on large scales and small scales. Imagine the standard market basket from 1950, long before cell phones or personal computers, when relatively few people went to college or took an airplane on a family vacation. If you looked at the present prices of that standard market basket, you would not have a very good indication of today’s cost of living. But so too if you looked at the kinds of things on which someone spent money over a relatively short period of time: say, a standard market basket for someone immediately out of college, and the basket for someone a few years later, after settling down and getting married, or a few years later still, after having kids. Changes in culture, demographics, and technology can all compound to make assigning a number to inflation, or to changes in the cost of living, seem impossible. This is what makes the index number problem so difficult: you need a way to compare values at different times, and for people living very different lifestyles.
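A toy example in the same spirit as the sketch above makes the difficulty vivid. Every number here is invented, but the point survives: the very same pair of price snapshots implies wildly different inflation depending on whose basket you price out.

```python
# Two invented price snapshots, and two invented baskets. The same price data
# implies very different inflation depending on which basket you price out.
prices_then = {"chocolate bar": 0.05, "movie ticket": 0.50,
               "phone service": 3.00, "computing": 2638.00}
prices_now = {"chocolate bar": 1.50, "movie ticket": 12.00,
              "phone service": 60.00, "computing": 600.00}

basket_1950 = {"chocolate bar": 20, "movie ticket": 4, "phone service": 1, "computing": 0.0}
basket_today = {"chocolate bar": 5, "movie ticket": 1, "phone service": 1, "computing": 0.01}

def implied_inflation(basket):
    """Percentage change in the cost of a fixed basket between the two snapshots."""
    cost_then = sum(q * prices_then[g] for g, q in basket.items())
    cost_now = sum(q * prices_now[g] for g, q in basket.items())
    return 100.0 * (cost_now / cost_then - 1)

print(f"Inflation measured with the 1950-style basket: {implied_inflation(basket_1950):.0f}%")
print(f"Inflation measured with the modern basket:     {implied_inflation(basket_today):.0f}%")
```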

The CPI is a blunt tool. Virtually everyone in economics agrees that we need to find some way to hone it. Still, it is incredibly important for policymaking because of its central role in determining inflation, which in turn affects virtually every aspect of the budget. In the United States, for instance, the thresholds for tax brackets are tied to the stated rate of inflation. So are wage increases for government employees. Social Security outlays are also determined by inflation. Every year, these quantities are recalculated based on the inflation rate of the previous year, to adjust for changes in the cost of living. In June 1995 the U.S. Senate appointed the Advisory Commission to Study the Consumer Price Index, usually called the Boskin Commission after Michael Boskin, the Stanford economics professor who chaired it. The brainchild of soon-to-be-disgraced Senator Bob Packwood, then chairman of the Senate Finance Committee, the Boskin Commission was charged with coming up with a better way to compute the CPI, and by extension inflation.

For Malaney and Weinstein, the Boskin Commission seemed like a godsend. Here was a Senate-appointed committee tasked with solving just the problem they had chosen to tackle, which made their work immediately relevant. It was the perfect opportunity for them to make a contribution — not just to economic theory, but potentially to public policy, since Packwood planned to implement the Boskin Commission’s findings immediately. Even better, one of the economists appointed to the commission, Dale Jorgenson, was a member of the Harvard economics department.

Hermann Weyl was offered the position of chair of the mathematics department at ETH Zürich (the school where Didier Sornette currently teaches) in 1913, when he was just twenty-seven years old. He arrived in Zürich from Göttingen, a German university that in the early twentieth century represented the very pinnacle of international mathematics. His advisor there, David Hilbert, was widely recognized as the most influential mathematician of his day. As Hilbert’s student at Göttingen, Weyl was at the center of the mathematical world.

Things were different in Zürich. ETH Zürich had a fine reputation, but it was quite new: it was only in 1911 that ETH was restructured to become a real university, with graduate students, shedding its past as an engineering-oriented teaching school. The other university in the city, the University of Zürich, was the largest in Switzerland. But it was no Göttingen.

Weyl wasn’t ETH’s only recent hire, however. As part of the restructuring, the school had made a number of appointments to the physics department. One of these was a prominent young physicist, an undergraduate alumnus of ETH named Albert Einstein. Einstein had gone on to do a PhD in physics at the University of Zürich, graduating in 1905 — the same year that he published a mathematical treatment of Brownian motion (anticipated, of course, by Bachelier), came up with a theory of the photoelectric effect (for which he would win the Nobel Prize in 1921), and discovered the special theory of relativity, including his famous equation E = mc². And yet, none of this brought Einstein much immediate success. He had written those papers not from a university post but from Bern, about 150 km away, where the only job he had been able to find after his studies was as a patent clerk. Occasionally, he was permitted to teach at the local university.

Gradually, however, as more physicists came to understand the importance of the 1905 papers, Einstein’s reputation grew. In 1911, he was offered a professorship at the German university in Prague; the next year, his alma mater offered him a job. By the time Einstein returned to Zürich, he was already a shining star of the physics community. His reputation had exploded in just a few years. He didn’t stay at Zürich for long — in 1914, he was appointed director of the Kaiser Wilhelm Institute in Berlin — but the year that Einstein and Weyl spent together was enough to change the course of Weyl’s research. Though initially a mathematician in the purest sense, Weyl found Einstein’s relativity theory captivating, particularly because when they met, Einstein was just beginning to realize the importance of high-powered modern geometry to the theory.

The basic idea underlying general relativity is that matter — ordinary stuff like cars and people and stars — affects the geometrical properties of space and time. This geometry, meanwhile, determines how bodies move. It is this movement of massive objects through deformed space and time that we ordinarily think of as gravitation, the physical phenomenon that keeps us firmly planted on the surface of the Earth, and that keeps the Earth in its elliptical orbit around the sun. The general relativistic picture is as different as can be from the older, Newtonian theory of gravity. In Newtonian gravitation, space and time are static. Their properties are unrelated to the matter that’s distributed through space. Bodies gravitate toward one another via an unexplained force that acts instantaneously at a distance.

Matter affects space and time in Einstein’s theory by inducing curvature. When physicists and mathematicians say something is “curved,” they mean just what we would ordinarily mean. A tabletop or an unfolded piece of paper is flat; a basketball or a globe is curved. But from a mathematical point of view, the thing that distinguishes a tabletop from a basketball isn’t that a basketball rolls and a table doesn’t, or that it’s easier to stand on a table than on a basketball. Instead, the feature that characterizes curvature for a mathematician is how hard it is to keep an arrow pointing in the same direction as you move it around the surface. If an object is flat, it turns out to be very easy. Not so if the object is curved.

I admit that this is a weird thing to say. But it isn’t hard to see how it works in practice. First, imagine you’re standing on a city sidewalk, somewhere in midtown Manhattan, say, where the streets are laid out like a grid. Try to picture what would happen if you did a clockwise lap around the block, all the while trying to keep yourself pointed in one direction — north, say, toward the Bronx. (The direction you’re facing, here, is taking the place of an arrow.) You might begin by walking forward for a while as you head uptown. When you get to the next corner, you would head right, east on the crosstown street. But you aren’t allowed to turn your body at the corner, since you’re trying to stay pointed in the same direction all the time. This means you have to walk sideways down the cross street. And when you get to the next corner, where you need to start heading south, you have to walk backward. If you follow these instructions, never once turning your body as you do the lap, you should find yourself back at the original corner looking in just the same direction as before.

This might not come as a surprise. After all, you never turned your body — why in the world wouldn’t you be facing in the same direction? But now let’s imagine a longer journey. Instead of doing a lap around the block, imagine trying to keep yourself pointed in the same direction — it might as well be north — as you circumnavigate the globe. For the first leg of your trip, you’re going to start in New York and just head east, toward Europe. When you arrive in France, you’re going to start crab-walking your way toward Asia, all the while keeping your face firmly pointed toward the North Pole. After a very long (and probably uncomfortable) walk, you will finally reach the Pacific Ocean, and then you’ll head for California. When you finally arrive in New York, if you never turned your body, you should still be facing north.

Here’s a different itinerary that begins and ends in the same place. You start by heading east, just as before. When you get to Kazakhstan, though, you take a detour. Instead of continuing on toward China, you strike north into Russia. (Now, at least, you get to walk forward.) You head all the way north, past the Arctic Circle, without turning your body. When you reach the North Pole, you see that New York is directly in front of you, far to the south. You keep moving forward into northern Canada, and then work your way down the Hudson until you return to New York. But this time, when you return to the place where you began, you’re facing a different direction: due south! What’s gone wrong? You didn’t turn your body at any point of the journey, and yet at the end you’re facing the opposite way from the direction you started facing — and from the direction you were facing at the end of your first journey.

The reason you end up facing in a different direction after your second round-the-world trip is that the globe is a curved surface (see Figure 5). A city block, meanwhile, is flat. (At least to a first approximation — real city blocks lie on the surface of the Earth, which of course is curved. But you don’t see the effects of this curvature over short distances.) If you imagine an ant trying to perform the same experiment on a kitchen table, you would find that, no matter what route the ant took, it would always end up facing in the same direction. This is what a mathematician means when he says that a surface, or a shape, is flat: it exhibits “path independence of parallel transport” (parallel transport because the goal is to try to keep your body parallel to its last direction at all times). For curved surfaces, meanwhile, the direction an arrow points at the end of a journey is “path dependent.” On a curved surface, different routes can lead to different results.

Figure 5: If you move an arrow along a path on a curved surface, being careful to keep the arrow pointing in the same direction at all times, the direction that the arrow points at the end of the path will depend on the path taken. Mathematicians call this property of curved surfaces “path dependence of parallel transport.” In this figure, there are two paths around a sphere. The first path takes the arrow from point A around the equator and back to point A. At the end of the trip, the arrow faces the same direction as when it began. The second trip again starts at point A and travels around the equator, but only halfway. On the other side of the sphere, the path moves up over the North Pole and returns to point A that way. At the end of this trip, the arrow is pointing in the opposite direction from when the trip began. Weyl observed that it was possible to construct physical theories in which not only was direction path dependent, but so was the length of an arrow. The physical world doesn’t actually work that way, but in the years since Weyl first came up with his theory — which he called a gauge theory — many physicists and mathematicians have adapted the mathematics he invented to other problems, with much more success.
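For the mathematically curious, the two trips in Figure 5 can be checked numerically. The short Python sketch below is my own illustration, not anything from Weyl or from the theory itself: it carries an eastward-pointing arrow around each path on a unit sphere, nudging it at each small step so that it stays tangent to the surface without ever being deliberately turned, which is all parallel transport amounts to.

```python
# Parallel transport on a unit sphere, mimicking the two trips in Figure 5.
# Any rotation left over at the end of a closed loop is due to curvature alone.
import numpy as np

def point(lon, lat):
    """Cartesian coordinates of the point at a given longitude/latitude (radians)."""
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def transport(path, v):
    """Parallel-transport tangent vector v along a densely sampled path."""
    v = np.array(v, dtype=float)
    for x, x_next in zip(path[:-1], path[1:]):
        v = v - np.dot(v, x_next - x) * x      # first-order parallel-transport step
        v = v - np.dot(v, x_next) * x_next     # re-project onto the new tangent plane
    return v

N = 20000
A = point(0.0, 0.0)                      # starting point on the equator
east = np.array([0.0, 1.0, 0.0])         # the arrow we carry: due east at A

# Trip 1: all the way around the equator and back to A.
trip1 = [point(lon, 0.0) for lon in np.linspace(0.0, 2 * np.pi, N)]

# Trip 2: halfway around the equator, then up over the North Pole and back to A.
trip2 = ([point(lon, 0.0) for lon in np.linspace(0.0, np.pi, N // 2)] +
         [point(np.pi, lat) for lat in np.linspace(0.0, np.pi / 2, N // 4)] +
         [point(0.0, lat) for lat in np.linspace(np.pi / 2, 0.0, N // 4)])

for name, trip in [("around the equator", trip1), ("out and back over the pole", trip2)]:
    v = transport(trip, east)
    angle = np.degrees(np.arctan2(np.dot(np.cross(east, v), A), np.dot(east, v)))
    print(f"{name}: the arrow comes back rotated by about {abs(angle):.0f} degrees")
# Expect roughly 0 degrees for the first trip and 180 degrees for the second.
```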

The connection between path dependence and curvature may be unfamiliar to non-mathematicians. But the basic idea of path dependence isn’t. It is easy to find examples in day-to-day life of things that are path dependent, and things that are path independent. If you go to the store and buy groceries, the amount of milk you have when you get back home is path independent. The amount of milk isn’t going to change if you take a different route home from the store. The amount of gasoline in your tank, however, is path dependent. If you take the direct route home, you will usually have more gasoline left when you arrive than if you had taken the scenic route. Path dependence of parallel transport is just a special case of the more general fact that sometimes, things depend not just on where you start and where you end up, but also on the road you take to get there.

Einstein’s theory of general relativity makes essential use of the fact that space and time are curved in the sense that parallel transport is path dependent. But Weyl thought that Einstein hadn’t gone far enough. In general relativity, if you begin with an arrow at one place and then move it around a path that brings it back to the starting point, it might face a different direction. But it will always have the same length. Weyl thought this was an arbitrary distinction that couldn’t have physical meaning, and so he came up with an alternative theory in which length, too, was path dependent, so that if you moved a ruler around two different closed paths, it would have different lengths when it returned to the starting point, depending on the path it took.

Weyl called his new theory a gauge theory. It was the first time the term had been used, and it was based on the idea that there was no universal, once-and-for-all way to “gauge,” or measure, the length of a ruler. Suppose you and your neighbor are both about to leave your driveway in the morning on the way to work. Imagine you drive identical cars, and you both work at the same location. What would you say if someone stopped you and asked which car would have more gasoline in the tank when you both got to work, yours or your neighbor’s? You might glance at your gas gauge and see that you have a full tank, and then ask your neighbor how much gas he has. But this isn’t enough information to answer the question. The answer will depend on the paths you and your neighbor take to work: you might take a direct route, while the neighbor takes the scenic route. Your neighbor might take a highway, while you stick to city streets. Whatever the case may be, how much gasoline each of you has left at the end of your journeys will depend on the paths you take to work. Path-dependent quantities like these simply cannot be compared without saying something about the paths involved.

This was the sense in which, in Weyl’s theory, there was no universal way to measure a ruler, since there was no path-independent way to compare two rulers in different locations. But Weyl realized that this wasn’t necessarily a problem: if you wanted to compare the length of a ruler in Chicago to the length of a ruler in Copenhagen, or on Mars, all you needed to do was figure out a way to bring the rulers to the same place so you could hold them up next to each other. This wouldn’t be path independent, but that was OK, as long as you could figure out how the change in length would depend on the path you took. In other words, Weyl realized that what really mattered to his theory was identifying a mathematical standard by which comparisons of length could be made — a way of “connecting” different points in a principled way, so that you could compare rulers, even though length was path dependent. Mathematically, what Weyl accomplished was to show how to compare two otherwise incomparable quantities, by moving them to a common location where their properties (in this case, their lengths) could be compared directly.
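In modern notation (a schematic reconstruction, not Weyl’s own presentation), the bookkeeping can be written compactly: carrying a ruler of length ℓ₀ along a path γ rescales it by an exponential factor built from a “length connection” φ, and the comparison of distant rulers is path independent exactly when the associated curvature F vanishes.

```latex
\ell_{\gamma} \;=\; \ell_{0}\,\exp\!\Big(\int_{\gamma} \varphi_{\mu}\, dx^{\mu}\Big),
\qquad
F_{\mu\nu} \;=\; \partial_{\mu}\varphi_{\nu} \;-\; \partial_{\nu}\varphi_{\mu}.
```

If the curvature is zero, the exponent comes out the same for every route between two points and lengths can be compared once and for all; if it is not, the comparison depends on the route taken, which is precisely the situation Weyl’s connection was designed to keep track of.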

Weyl’s theory wasn’t a success. Einstein quickly pointed out that it was inconsistent with some well-known experimental results, and it was soon relegated to the dustbins of scientific history. But Weyl’s basic idea about gauge — that to determine if two quantities are equal in a physical theory, you need a standard of comparison that accounts for possible path dependence — was destined to be far more important than the theory that led to it. Gauge theory was resurrected in the 1950s by a pair of young researchers at Brookhaven National Laboratory named C. N. Yang and Robert Mills. Yang and Mills took Weyl’s theory one step further: If it was possible to construct a theory in which length was path dependent, was it possible to construct theories in which still other quantities were path dependent? The answer, they realized, was yes. They went on to develop a general framework for much more complicated gauge theories than the one Weyl had imagined.

These theories, now known as Yang-Mills theories, spawned what is sometimes called the gauge revolution. Beginning in 1961, fundamental physics was rewritten in terms of gauge theory — a process that only accelerated when Yang, in collaboration with Jim Simons of Renaissance, realized a deep connection between Yang-Mills gauge theories and modern geometry in the 1970s. Gauge theories proved particularly important in physics because they turned out to be a natural setting to look for “unified” theories, where what was being unified was the standard by which different quantities were compared in the theories. By 1973, it appeared that the three fundamental forces of particle physics — electromagnetism, the weak force, and the strong force — had been unified into a single gauge-theoretic framework. This framework was called the Standard Model of particle physics. Today, it is the single best-confirmed theory ever discovered, in any field. It is the very heart of modern physics.

Jobs in academia, especially the most desirable jobs as tenure-track professors, work on a fixed schedule. Toward the end of each summer, students who are close to finishing their dissertations decide whether they are going to apply that year. If the student and his advisor decide that the dissertation is far enough along, the student begins to put together a dossier, including letters of support from faculty members, examples of the work that will go into the student’s dissertation, and a statement describing the student’s research interests. Then, come fall, departments that are looking to hire new faculty members advertise their open positions, with applications due at the end of November. If you’re lucky, you will be interviewed by a hiring department, and if that goes well, you will be flown out to visit the schools that are interested in you, to give a talk on your dissertation research. In many disciplines, including economics, the process is called “going on the market,” an apt phrase for what is essentially an academic cattle call. It’s an extraordinarily stressful process. More than anything else, an academic’s success on the market is what determines the trajectory of his career.

A graduate student’s research history and the quality of his dissertation are crucial in determining whether he will get an academic job. But more important than either of these factors is the strength of the letters written in support of the student by the faculty. If famous, well-respected professors say your research is good or important, that can make all the difference. Each year, the Harvard economics department holds a faculty-wide meeting to determine which of that year’s batch of students are going to get the full-throated support of the university’s eminent economics faculty. The department goes through each of the candidates, and the student’s advisor brings the rest of the department up to speed on the student’s research and prospects. It’s a closed-door affair, and so only faculty members in the department know exactly what happens. But at the end of the meeting, some students emerge with the wind at their backs. When hiring departments start calling, these students get special endorsements. Others aren’t so lucky.

Given the importance of her work, and the strong support of her advisor, Pia Malaney had every reason to expect that she would fare well in this process. Everything was in place. But then came the October jobs meeting. Afterward, she and Maskin met to discuss her job prospects, in light of the department’s determination. Things no longer looked so good.

Going into the meeting, Maskin was convinced that her thesis was terrific. But not everyone in the department agreed. One person in particular had reservations: Dale Jorgenson, one of Harvard’s two representatives on the Boskin Commission and an expert on the index number problem. Malaney’s project covered exactly the same ground that the Boskin Commission was supposed to investigate. She had developed an elegant mathematical framework for addressing precisely the problem they were tasked with. And so, when she had first learned of Jorgenson’s appointment to the commission, Malaney arranged a meeting with him. Excited, she described her work to him, showing how gauge theory could be applied to this important problem. Jorgenson replied by throwing her out of his office. “You have nothing,” he told her.

At the time, Malaney was discouraged, but she didn’t give up. So what if she couldn’t convince Jorgenson on her first try? Maskin liked the ideas and would advise the thesis. In the long run, the work would speak for itself. But then, as Malaney prepared to apply for jobs, this vision of the future began to dissolve. During the jobs meeting, it became clear that Jorgenson’s resistance to Malaney’s project ran deep. Several months later, when the Boskin Commission released its findings, the reasons for his resistance would become clear.

It took years for Malaney to convince Weinstein to take economics seriously. She tried everything: pointing to famous economists, explaining their most influential theories, describing important experimental results. But Weinstein was resistant. The mathematics, he was convinced, was too simple; the subject matter, too complex. Economics was a worthless pursuit, a pseudoscience. Finally, on the verge of giving up, Malaney tried one last tack. She gave Weinstein a challenge, a problem whose solution was equivalent to a classic result in economics known as Coase’s theorem.

Ronald Coase was a British economist who spent most of his career in the United States, at the University of Chicago. He was interested in something he called “social cost.” Imagine you are the local sheriff in an agricultural community. Two of your constituents come to you, asking you to help them settle an ongoing dispute. One of them is a rancher, raising cattle. The other, the rancher’s neighbor, farms soybeans. The dispute concerns the rancher’s cattle, which have a habit of wandering over to the farmer’s land and destroying his crop. Matters have recently become especially difficult because the farmer has learned that the rancher wants to add more cattle to his herd, and the farmer is concerned that the problem will get worse. What should you do?

When Coase tried to formalize an answer to social cost problems like this one, he came to a striking conclusion. It doesn’t matter what the sheriff does, at least from a long-term perspective, as long as three conditions are met: the damages involved must be adequately quantified, some well-defined notion of property must be instituted, and bargaining must be free. To see why this would be, consider what would happen if the sheriff told the rancher he could have as many cattle as he liked, but that he had to pay for all of the damage his herd inflicted. In essence, the rancher has incurred an additional cost to raising cows. Depending on how much damage gets done, and how much soybeans are worth, it may well make sense for the rancher to keep adding head to his herd even while paying the farmer for the soybeans that keep getting destroyed. If the rancher really is paying for the value of the soybeans, the farmer shouldn’t care whether the revenue comes from selling the soybeans himself or from the rancher’s compensation — in fact, he might as well think of the rancher as a customer buying whatever soybeans the cattle destroy. Ultimately, the rancher and farmer will reach an agreement about how many cattle the rancher will own based on what is maximally profitable for both parties. But what if the sheriff makes some other choice? If the farmer has to pay the rancher to keep his cattle from destroying the farmer’s crops, one would expect the exact same bargaining to occur. Coase’s theorem says that the endpoint will always be the same: both parties will agree on an arrangement that is maximally profitable for everyone.
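A stylized calculation makes the logic concrete. In the Python sketch below, every number (the rancher’s revenue curve, the per-head crop damage) is invented; the point is only that the herd size the neighbors bargain their way to is the same whichever liability rule the sheriff picks, just as Coase’s theorem says.

```python
# A stylized illustration of Coase's theorem (all numbers invented).
def rancher_profit(n):
    """Ranching profit from n head of cattle, with diminishing returns."""
    return 120 * n - 4 * n ** 2

def crop_damage(n):
    """Value of the farmer's soybeans destroyed by a herd of n cattle."""
    return 30 * n

def joint_surplus(n):
    """What the two neighbors earn together, before any side payments."""
    return rancher_profit(n) - crop_damage(n)

herd_sizes = range(0, 31)

# Rule 1: the rancher is liable and must compensate the farmer for every lost
# soybean. He picks the herd size that maximizes profit net of compensation,
# which is exactly the joint surplus.
herd_if_rancher_liable = max(herd_sizes, key=lambda n: rancher_profit(n) - crop_damage(n))

# Rule 2: the rancher may graze freely, but the farmer can pay him to keep the
# herd small. The farmer will offer up to the damage avoided, so free bargaining
# again settles on the herd size that maximizes the joint surplus.
herd_if_farmer_pays = max(herd_sizes, key=joint_surplus)

print(herd_if_rancher_liable, herd_if_farmer_pays)   # the same herd size under either rule
```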

When Malaney gave Weinstein this problem, Weinstein took it seriously. Making some simple mathematical assumptions, similar to the ones Coase made, Weinstein soon saw his way to a solution — just the solution, in fact, that Coase had arrived at. But this, Weinstein thought, was a surprise. At least in this case, it seemed as though the mathematics was working in the right sort of way, and indeed, it led to what seemed to be a deeply counterintuitive result that nonetheless held up. The process felt surprisingly similar to using mathematics in physics: one makes some simplifying assumptions and then uses mathematics to gain insights into a problem that would have otherwise remained intractable. Most importantly, if someone had told Weinstein about Coase’s theorem before he had worked on it himself, he would likely have thought that the solution was politically driven, a thinly veiled case for less government intervention, shrouded in mathematics to give the appearance of rigor. But now he saw that matters were not so simple.

His interest piqued, Weinstein began looking for other cases where mathematics was used to reach counterintuitive results in economics. He uncovered several examples. The Black-Scholes equation was one, since it makes use of fairly sophisticated mathematics to get at the heart of what it means to produce and trade an option. Another was Arrow’s theorem, a famous result in social choice theory that essentially proves that if you have a group of people trying to choose between three or more options, there is no voting system that can turn the ranked preferences of all of the individuals in the community into a community-wide ranking while satisfying a handful of basic fairness conditions.
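Arrow’s theorem itself is an impossibility proof, but the much older Condorcet paradox gives its flavor and is easy to check by hand or by machine. The sketch below uses a made-up three-voter election (it is not Arrow’s own argument): majority rule prefers A to B, B to C, and yet C to A, so the “community ranking” chases its own tail.

```python
# The Condorcet paradox: three voters with perfectly sensible individual
# rankings, whose majority preferences nonetheless run in a circle.
voters = [
    ["A", "B", "C"],   # voter 1: A over B over C
    ["B", "C", "A"],   # voter 2: B over C over A
    ["C", "A", "B"],   # voter 3: C over A over B
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    score = sum(1 if ballot.index(x) < ballot.index(y) else -1 for ballot in voters)
    return score > 0

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"Majority prefers {x} over {y}: {majority_prefers(x, y)}")
# All three comparisons come out True, so no consistent community-wide ranking exists.
```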

Weinstein realized that his criticisms of economics had been misplaced. Mathematics, he now believed, could be used productively to understand economic problems. It was an exhilarating realization, because it meant that someone with some mathematical acumen and a background in physics stood a chance at making progress on problems in economics. Soon, instead of looking for cases where mathematics had been put to productive use in economics, Weinstein and Malaney started looking for cases where it hadn’t been put to use — at least, not yet. Together, they happened on the index number problem. The mathematics underlying the CPI is astoundingly simple, given the profound difficulties associated with assigning a number to something so complicated as the value of money to a consumer. It was a perfect place to start.

Weyl’s essential innovation, conceptually speaking, was to find a mathematical theory for comparing otherwise incomparable quantities. In his theory, the incomparable quantities were the lengths of rulers at different locations. His solution was to find a way to bring the rulers to the same location, and then just hold them up next to one another to determine their relationship.

But now think of the index number problem, which, at its core, involves comparing different, apparently incomparable quantities. How can you make sense of the value of money to two different people, especially if they have radically different lifestyles? And how do you compare what might seem like a reasonable market basket in 1950 to what would seem like a reasonable market basket in 1970, or in 2010? These problems seemed insurmountable at first to Weinstein and Malaney. But in the context of the mathematical framework that Weyl and his successors had developed, at least one possible solution emerged. All they needed to do was figure out a way to take any two people — say, a lumberjack in 1950 and a computer programmer in 1995 — and put them in the same circumstances so that they could directly compare their preferences and values. It was a strange thing to propose — after all, the conversation between the lumberjack and the programmer might be a little awkward — but from the point of view of Weyl’s mathematics, it was the most natural thing in the world. To solve the index number problem, Weinstein and Malaney argued, you need a gauge theory of economics.

One day, late in 2005, Lee Smolin received an unusual e-mail. It seemed to be about economics, which was unexpected, because Smolin didn’t know the first thing about economics. Smolin was a physicist. His work was, and continues to be, in a cutting-edge field known as quantum gravity, which consists of people trying to understand how to put the two revolutionary, immensely successful innovations of early-twentieth-century physics — quantum mechanics, which describes very small objects like electrons, and Einstein’s theory of gravitation, which describes really big objects, like stars and galaxies — together into a coherent framework. This endeavor had nothing at all to do with economics. Or so Smolin thought.

A few months earlier, Smolin had published an article in the magazine Physics Today, a semi-popular publication whose goal was to explain new developments in physics to physicists who weren’t necessarily experts in the given area. Smolin’s article was an attempt to explain why quantum gravity had not produced a researcher like Albert Einstein, who successfully revolutionized physics by thinking far out of the box. The article was a preview of a book Smolin was just finishing, called The Trouble with Physics. In both the article and the book, Smolin argued that physics, or rather, quantum gravity research, faced a sociological problem. A group of physicists working on something called string theory, one approach to solving the basic problem of how to combine gravitational physics with quantum physics, had come to dominate the field. When it came time to hire new faculty members into their physics departments, or to dole out research funding, these string theorists tended to give the resources to other string theorists rather than to people working on alternative approaches to quantum gravity.

It was this Physics Today article that had prompted the unexpected e-mail. The man who had written the message was Eric Weinstein, now a hedge fund manager and financial consultant in Manhattan. Weinstein agreed with Smolin’s assessment of the physics community, based on his years working as a mathematical physicist at Harvard and then at MIT. But he had a bigger point to make, about how sociology could distort progress in academic research more broadly. As far as Weinstein was concerned, the sociology problem in physics was nothing. Economics was ten times worse.

Smolin wanted to hear more. He invited Weinstein to visit the Perimeter Institute, the research institute in Waterloo, Ontario, where Smolin was based. Perimeter was founded in 1999 by Mike Lazaridis, the entrepreneur and founder of Research in Motion, the company responsible for BlackBerry devices. Perimeter was designed as a place to foster research in fundamental physics. It has a strong reputation for open dialogue and discussion among proponents of different approaches to basic questions, in large part because of Smolin’s influence on the institute from its earliest days. In some ways, Perimeter is an attempt to answer the sociological problem identified in Smolin’s book and articles. It was an ideal place for someone with Weinstein’s background and interests to present a new approach to understanding economic theory.

Weinstein visited Perimeter in May 2006. He gave a talk on the way in which gauge-theoretic ideas could be important in a new economic theory, presenting the work he and Malaney had done some years before. And then he left. Smolin and others at the institute found Weinstein’s ideas compelling. But they were inclined to be sympathetic. These were not the people who needed to be convinced.

Weinstein and Smolin remained in contact, however. Smolin read up on economics to better understand how gauge theory might help. Around the same time, he began to work with other researchers interested in bringing ideas from physics to bear on economics, including Mike Brown, a former CFO of Microsoft and a former chair of the NASDAQ board, Zoe-Vonna Palmrose, an influential accounting professor working at the SEC, and Stuart Kauffman, who had worked on complex systems theory at the Santa Fe Institute, alongside Doyne Farmer and Norman Packard before they started the Prediction Company.

In September 2008, Weinstein visited Perimeter a second time, for a conference on science in the twenty-first century. The talks focused on ways in which scientific research was changing with new funding sources, with new means of disseminating ideas, such as blogs and online conferences, and with new ideas about where research should and could happen, with places like Perimeter and the Santa Fe Institute becoming centers of study outside of the traditional university.

But the future of science was not at the forefront of Weinstein’s mind that September. Just a week after Weinstein’s talk at Perimeter, the fourth-largest investment bank in the United States, Lehman Brothers, closed its doors after a century and a half of business. At virtually the same time, AIG, one of the twenty largest publicly traded companies in the world, had its debt downgraded, leading to a liquidity crisis that would have toppled the company had the U.S. government not intervened. In early September, the world economy was already on the ropes. As a hedge fund manager and consultant, Weinstein was tuned in to the surprise and panic in the financial industry, and in economics more generally. As far as Weinstein knew, no one had seen this coming. (Sornette had, but he didn’t publicize this prediction widely.)

For Weinstein, the unexpectedly dramatic failure of the U.S. banking system was only further evidence that it was time to take the next step in the development of modern economics. It was time to reflect on what had gone wrong with the now-toxic securities and recognize that economics needed a new set of tools. As physicists had done a generation before, economists needed to broaden their theoretical framework to account for a wider variety of phenomena. Economics needed a new generation of theories and models, suited for the complexity of the modern world. Weinstein thought that the crisis should be an opportunity to set aside past differences between the various approaches to finance and economics. He called for a new large-scale collaboration between economists and researchers from physics and other fields. It would be, he said, an economic Manhattan Project.

Social Security, technically the U.S. federal Old-Age, Survivors, and Disability Insurance program, was first signed into law in 1935 as part of the New Deal, Franklin Roosevelt’s program to end the Great Depression through stimulus spending and a broad expansion of the U.S. welfare system. It was a way for the federal government to provide support to the elderly, to children whose parents had died before they were of employable age, and to people who became disabled and unable to work. It was designed to pay for itself, as a real insurance program would. Workers would contribute to the program through a mandatory tax, and the funds collected would be used to pay for the program’s costs.

The program was highly controversial. Early on, it was challenged several times in the Supreme Court (unsuccessfully). But over time, as successive generations contributed during the course of their working lives, most Americans came to count on the program as a retirement and disability benefit. By the 1960s, it had become a part of American life, something that workers nearing retirement took as an entitlement. This made matters politically difficult when, during the period of high rates of inflation and low economic growth in the 1970s, it became clear that Social Security was in trouble. Projecting forward, politicians and economists realized that over the coming decades, ever-larger numbers of aging Baby Boomers (then just coming into their own) would retire, and the costs of providing them with benefits would rapidly outstrip the program’s ability to fund itself.

And yet, there was little to be done. For a politician to draw attention to Social Security’s woes was suicidal. The two obvious solutions to the funding problem — reducing benefits and raising taxes — were equally unappealing. Social Security presented a kind of political catch-22 — that is, until Daniel Patrick Moynihan and Bob Packwood, the two leading members of the Senate Finance Committee in the mid-nineties, shared a moment of inspiration. If you wanted to come up with $1 trillion without anyone noticing the difference, all you needed to do was change the value of money.

Here’s how the plan worked. Projections for the future costs of Social Security were based on the expected rate of inflation, which in turn was based on the CPI. Moynihan and Packwood realized that if the official rate of inflation could be lowered, tax revenues would rise and the costs of administering the program would fall. The effect would be to raise taxes and reduce entitlements, relative to the real buying power of money, without acknowledging that you were doing so. The challenge was to find an argument for why inflation calculations should be modified. This is where the Boskin Commission came in. It was a masterful sleight of hand. Working backward from the figure of $1 trillion, which Moynihan believed would be necessary to keep Social Security solvent, he and Packwood determined that measured inflation would need to be reduced by 1.1 percentage points.

According to notes written by Robert Gordon, an economist at Northwestern University and one of the five members of the commission, Dale Jorgenson — the Harvard economist who had thrown Malaney out of his office — reported to the commission early on that they were aiming for $1 trillion in Social Security savings over ten years, and that this meant they needed to come up with the requisite reduction in inflation. Then the committee broke up into two teams to work on different ways in which the problems of changing preferences and changing quality could affect the CPI. Gordon and the other person on his team, working together, arrived at one number. The other team, which included Jorgenson and Boskin, arrived at another. And then, “somehow” (Gordon’s word), when the two teams combined their results, the commission’s final recommendation “corrected” inflation by precisely 1.1 percentage points.

The Boskin Commission’s findings were criticized from all corners. As Gordon later reported, the project was rushed and careless. He and his collaborator finished their contribution days before the commission was due to present to the Senate. The calculations were what physicists and economists both call “back of the envelope,” little more than informal estimates. The commission’s report was never peer-reviewed before it was presented to the Senate. None of the other members of the commission ever asked how his team had come up with their number, or how the others had arrived at theirs. The answer to such questions would have been inconvenient. (Ultimately, many of the Boskin Commission’s recommendations were squashed by effective lobbying on the part of the AARP and others; about five years later, the National Academy of Sciences and the U.S. Bureau of Labor Statistics returned to the problem of how to calculate the CPI, with a more intellectually rigorous approach, and with more nuanced findings.)

Malaney approached Jorgenson with her and Weinstein’s ideas about index numbers soon after the Boskin Commission was formed. Jorgenson may have had deep criticisms of Malaney and Weinstein’s proposal. They may even have been good criticisms. But it is hard to avoid the suspicion that a new and mathematically rigorous method for calculating precisely what the commission was tasked to calculate would have caused problems for it. It seems the easiest thing was to make Malaney and Weinstein go away.

Exporting gauge theories, or other ideas from physics, to economics remains a hard sell. Weinstein was right that late 2008 presented a unique opportunity for someone inclined to change the way economists thought about the world — and the world, economics. Many people in finance, in economics, and in ordinary homes around the world were scared. Things that many people thought they understood turned out to be changing and unreliable. Meanwhile, people working in other fields, such as physics and mathematics, saw an opportunity to contribute to a field that seemed besieged. The suggestion that it was time to reevaluate some of the principal theories and methods of modern economics struck a chord with many, including Smolin and a handful of other physicists working at Perimeter.

Smolin, who previously had been reading up on economics in his spare time, began to consider working on it more seriously. He collected notes he had written on various topics, including his take on Weinstein and Malaney’s proposal, and put them together in a paper that he then posted on an online archive where physicists post new research. The paper was a kind of translation dictionary, explaining basic economics to physicists and then showing how ideas that physicists were already comfortable with could be applied to this otherwise foreign science.

Meanwhile, Smolin and Weinstein began working with Smolin's other collaborators — Mike Brown, Zoe-Vonna Palmrose, and Stuart Kauffman — to organize a conference at Perimeter. It was scheduled for May 2009.

The plan was to invite representatives from across the spectrum of economics, to bring together a diverse and heterodox group of people to discuss how to move the field forward after the recent crisis. In addition to the organizers, Doyne Farmer and Emanuel Derman participated. So did some mainstream economists, such as Nouriel Roubini of New York University and Richard Freeman of Harvard, as well as Nassim Taleb. Richard Alexander, a well-known evolutionary biologist, was invited to describe how biology and human behavior could inform economics. The plan was simple. Get a large group of smart people in a room, get them all to see that economics had clear problems, and convince them to work together to come up with a new theory. The plan was to use this conference to kick off the new Manhattan Project.

The conference itself was a success: this wide-ranging group of physicists, biologists, economists, and finance professionals found much to debate and discuss. But when the conference ended, the researchers went their separate ways. As Smolin later explained, there was too much bullheadedness even among these economics outsiders to produce fruitful collaboration. Everyone agreed that economic theory faced major problems, but it was impossible to build consensus on what the problems were, never mind how to fix them. Many of the participants in the conference — as well as other commentators from economics and finance — didn’t even agree that a concentrated effort to improve the sophistication of economic modeling was called for. In the background were questions about funding — if the project were funded, how would money be doled out to the participants? — that made the individuals involved cautious about supporting the larger project, for fear they wouldn’t receive their cut. And so with regard to the larger goal of creating a new community of interdisciplinary researchers devoted to tackling problems in economics from new directions, the conference failed. After a few months, Smolin gave up on economics and turned his attention back to physics. Now, when he finds himself with a few free minutes, he works on climate science. Economics, he has decided, is intractable — not because of the subject matter, but because the field does not seem open to new ways of thinking. Weinstein was right: economics is ten times worse than physics.

Today, Weinstein and Malaney continue to work on expanding the mathematical foundations of economic theory. Sornette continues to develop his predictive tools. Farmer is back at the Santa Fe Institute, developing new connections between complexity science and economic modeling. Despite this brainpower, the world economy is in pieces, still bloodied by the 2007–2008 collapse. Can anything be done?