6 The current crisis in macroeconomic theory

Bill Gibson1


Introduction

Since the 1970s, mainstream economics has attacked Keynesian macroeconomics for not being a science in the proper sense of the term. The latter lacks realistic microfoundations, according to the orthodoxy, and is generally inconsistent with the Walrasian system. This chapter argues that the problem lies deeper than the absence of a choice-theoretic framework in the Keynesian model. The main problem with macroeconomics of any theoretical flavor is aggregation: because macroeconomics aggregates ex ante, it arrives at the indefensible position of using aggregates as policy instruments. Aggregation is an intractable problem and is at the root of controversies that run from Marxian value theory to the capital controversy to the negative result of Sonnenschein, Mantel and Debreu (SMD), that no coherent microfoundations for aggregate economics exist.

There have been various responses to the inability to unify macro and micro theories. The rise of “clean identification” methods in econometrics is an effort to restore scientific credibility to economics. Many of the traditional problems central to the discipline, however, were abandoned as the literature focused on “cute and clever” microeconomics.

It is argued here that macroeconomics can be rescued by way of agent-based models. These models require no ex ante aggregation and provide a platform for policy intervention since the “representative agent” is no longer required. Outcomes can then be measured by aggregating the heterogeneous individuals ex post. Familiar macroeconomic characteristics arise from these complex systems as emergent properties. What is sacrificed as we turn to computer simulations is the elegant formal mathematical analysis that characterized the Walrasian system of the past.

The chapter is organized as follows: the next section reviews the problems of Keynesian economics and the reaction of heterodox economists and asks why the project of the unification of micro and macro has largely been abandoned. The subsequent section suggests that agent-based models may be a way to recover realistic microfoundations for macroeconomics. The final section concludes with some comments on the nature of big problems in economics and science generally.


The crisis of Keynesian economics

On a recent trip to the UK, a passport inspector noted that I had listed my profession as Professor of Economics. “Hum …” he said, as he regarded me with the mix of curiosity and suspicion required of his post. “Economics, that is … like … Keynesian economics, right?” I nodded affirmatively and after a pregnant pause he asked “What is Keynesian economics?”

The innocence of his question set me thinking: here is a public official, in the land of Keynes and where the Keynesian edifice was constructed, and yet he does not even know what it is. Is this just some form of rational ignorance? A second, darker hypothesis is that what we do as economists has little “street value,” nothing of worth in a social context. Science generally does have street value, both private and public, as is made clear every day in the press. Breakthroughs are regularly reported in publications such as the New England Journal of Medicine, Science and Nature, along with a host of television programs.

Certainly at one stage of the not-too-distant past economics, and especially macroeconomics, possessed a good deal of street value. The golden age of macroeconomics, in the 1960s, was based on the widely accepted notion that the economy was a complex machine that would occasionally get out of sorts with itself and require some adjustment. Government relied on macroeconomists for advice through the Council of Economic Advisors. Most large corporations, and virtually all banks, had large and expensive econometric forecasting teams. Microeconomics was a sideshow with its cost curves, discounting formulas and welfare triangles.

Keynesian theory enjoyed almost complete hegemony, even among the most conservative members of the profession. By the late 1980s, however, micro had staged a dramatic comeback and macroeconomics was almost entirely displaced from graduate curricula across the country. Part of the reason was an inconsistency in advanced general equilibrium theory noticed by Debreu and others.2

At the policy level, it was the stagflation of the 1970s that reduced to rubble the simple Keynesian program of “if there is inflation, run a surplus and when there is unemployment, run a deficit.” At the center of the controversy was the instability of the Phillips curve:

the inflationary bias on average of monetary and fiscal policy (in the 1970s) should … have produced the lowest average unemployment rates for any decade since the 1940s. In fact, as we know, they produced the highest unemployment since the 1930s. This was economic failure on a grand scale.

(Lucas and Sargent (1978:277))

As the Phillips curve dissolved into a shapeless scatter diagram, the street value of macroeconomics and its associated macroeconometric models diminished. Lucas identified a fundamental problem in the macroeconometric literature based on the Keynesian structural model: agents would alter their behavior in reaction to changes in policy (Lucas (1976)). The structural parameters could change in response to policy initiatives and if this were not part of the analysis, it would become impossible to predict the effects of policy. Only self-interest remained invariant to policy change.

Moreover, models that assumed no theory whatsoever, the vector-autoregression models, seemed to do as well as those that traveled with heavy theoretical baggage. As Diebold notes, “the flawed econometrics that Lucas criticized was taken in some circles as an indictment of all econometrics.” New York Times economic columnist Peter Passell, in an article titled “The model was too rough: why economic forecasting became a sideshow,” wrote that “Americans held unrealistic expectations for forecasting in the 1960’s—as they did for so many other things in that optimistic age, from space exploration to big government” (New York Times, February 1, 1996). Rather than predict interest rates with a staff of econometricians, firms hired MBAs to hedge against their movements. Public policy, as Keynes himself predicted, is still a few decades behind the curve and references to aggregate demand and other Keynesian motifs can still be heard, whether at the Federal Reserve, Wall Street or the Congressional Budget Office. Theoretical economics, however, has by and large moved on, with the exception of heterodox economists.


The heterodox reaction

Here is a proposition: it could be that none of this talk of science, microfoundations and the like is relevant. Indeed economics, and especially macroeconomics and macroeconomic policy, is just a tool of the rich used to bludgeon the poor into accepting low wages. Economics is not a science and never was, but is rather auxiliary to the broader project of class domination by the rich and powerful. The poor and powerless are the victims of policy designed to shift resources and political power to capital. Economists are implicated in this grand scheme of domination, a band of self-referential (and self-refereeing) pseudo-scientists, who as a subsumed class take a cut of the surplus for themselves. Their main task is thus ideological, rather than scientific, jawboning. The political creed of the orthodoxy in economics is anti-progressive, essentially libertarian on domestic issues and neoliberal internationally. The scientific method is no more central to this project than it is to, say, religion or a backyard barbecue.

Fine, but isn’t this proposition contradicted by the evolution of heterodox economics? In the 1960s and 1970s Marxian economics was a professionally viable alternative to orthodox economics. As a result, some of the more broadminded neoclassical economists, Samuelson and Morishima for example, took up Marxian themes. At the same time, radical economics began to insert itself into graduate programs around the United States and Europe, graduate programs that were training young economists in the standard tools of scientific inquiry. It was in some ways natural that cross-pollination would come about and in the late 1970s a number of non-neoclassical analysts produced work that bore the imprint of their training. Sweezy, Baran, Emmanuel and Samir Amin gave way to Marglin, Bowles, Gintis, Gordon, Roemer and others from both sides of the Atlantic. Their work was defined by four essential features:


  1. they addressed the decidedly “nonscientific” questions of income distribution, power, racism, sexism, imperialism and inequality as opposed to the traditional efficiency of resource allocation issues;
  2. they were unafraid of “bourgeois” tools of analysis, especially mathematics, data analysis and econometrics;
  3. they were sensitive to the criticism of dogmatism in their analysis and sought to remove elements that could not be substantiated by logic or fact;
  4. they rejected the relativism of emerging post-modernism in favor of methodological individualism (at least to a considerable degree).

These writers were above all eclectic, accepting or rejecting hypotheses on their own terms as opposed to tradition. Although they were unified by themes that had traditionally been of interest to Marxists, the project as a whole was a definitive break from the traditional Marxism of the preceding 150 years.

These analytical Marxists continued to define themselves in opposition to the orthodoxy, often unclear about their positive contributions, but very clear about what they were not: neoclassicals. Ironically, much of neoclassical theory found its way into their work, but piecemeal, one component at a time. Some used the Walrasian system, others growth theory, monetary theory or computable general equilibrium models and, especially, game theory. There was no part of neoclassical theory that was completely off limits and it is probably fair to say that all of it was used one way or another at some point.

Anti-neoclassicism then flowered into many theoretic directions, surveyed by Colander (2003) and Gibson (2003). The term “Marxist,” for example, began to fall out of fashion, but more for substantive than stylistic reasons. The backdrop was an explicit recognition of the possibility that the scientific method could illuminate the incoherencies and irrationalities of the capitalist system. Many heterodox writers accepted the view that the nature of the analytical tools employed is not constitutive of the conclusions derived. Roemer and his associates expressed the proposition most clearly: if exploitation was a fundamental fact of capitalist society, it should be able to survive the transition to the Walrasian environment. That is, given tastes, technology and the distribution of the endowment, exploitation was logically entailed. This specialized project drew the attention of a specialized audience, certainly, but it was at the same time widely respected.

Walrasian Marxism was subject to the same SMD criticism as the orthodoxy; in short, no more macroeconomics was to come from Walrasian Marxism than from the standard approach. Certainly this feature was of little concern for Roemer, who was primarily interested in the more basic issues of traditional Marxian economic theory, such as exploitation and its relationship to social class. But from the general perspective of microfoundations of macro, the work led to a dead end.


Declare victory and withdraw

Branching out from the work of the early anti-neoclassicals was a wide range of heterodox approaches to problems of growth, distribution, and trade and finance, powered by standard analytical methods of comparative statics, dynamics, econometrics and game theory. Most of the macromodels in the tradition of Dutt, Skott, Semmler, Taylor, Setterfield and many others had no specific microfoundations, but relied instead on a demonstrated correspondence to the object of study, a particular economy at a particular time. Most heterodox economists were simply unconcerned with microfoundations (Dutt (2003)). Macromodels were structural in nature and gave content to welfare propositions that hinged essentially on the level of output, employment and inflation.

The traditional problems of preference revelation and preference aggregation were not ceded any space; there was no need to aggregate the utilities of the employed and unemployed since they were incommensurate. It follows that the welfare of the system as a whole was not to be determined by an aggregate of the welfare of individuals.

Macropolicies that improved outcomes for the rich and the rich only, even if there were no change in the well-being of the poor, were not necessarily superior as they would be in standard analysis. Thus social welfare could not be mediated exclusively by private welfare no matter how it was aggregated. The aggregation problem, which dogged the traditional approach since its inception, was solved by critical acclamation. In the process, the unification project was sacrificed.

Naturally the balance of anti-neoclassical micro-oriented economists took an entirely opposite approach. For Bowles, Gintis, Skillman, Roemer and others attracted to game theory it was literally impossible to forego maximizing models with some conception of individual welfare at the core. Imagine, for example, a prisoners’ dilemma in which the detainees were indifferent to their own freedom. When it came down to microfoundations versus macroeconomics, they followed the orthodoxy in dismissing the latter. Bowles and Heinz (1996), for example, used industry-level data to show that raising wages in South Africa would cause a contraction in employment despite the fact that progressive macroeconomists had compiled data showing that the economy was “stagnationist” in Bhaduri and Marglin’s infelicitous terminology (Bhaduri and Marglin (1990); Nattrass (2000)).

Keynesian theory seemed to be abandoned by the orthodoxy not necessarily because it conflicted with empirical observation, but because it was incomplete and at variance with their libertarian, individualist biases. The heterodox à la carte approach never produced a coherent alternative because it could not coalesce around a common theoretical framework. The debate seemed not to be about science and method, but about competing philosophical positions.

Granted, heterodox economists might object that their work is scientific, solid, empirically grounded, objective and replicable. Heterodox articles are frequently peer-reviewed and this forces objectivity as it does elsewhere in the scientific community. There is certainly something to this argument. It might be possible to feign objectivity individually, but it is very difficult for a crowd of skeptically minded individuals to do so, unless, as Surowiecki (2004) notes, the “wisdom” of the crowd is highly correlated. But the view that heterodox economics is not truly heterodox according to the ordinary definition of the term, but rather the name of a broad coalition of anti-capitalist researchers, might still be vindicated. Indeed, heterodoxy is far more homogeneous than the economics profession it opposes. There are no personally right-wing economists who are attracted to the field of heterodox economics for purely methodological, technical or other professional reasons. The closely correlated attitudes of the heterodox clan undermine their objectivity in Surowiecki’s scheme.

Perhaps, then, macroeconomics is just a logical impossibility, like a failed state, vulnerable to take-over by anti-scientific types. If so, then the options appear to be limited. One can press on with the Keynesian model with its obvious deficiencies. Or one can decamp to something with more scientific content, as much of the profession seems to be doing.


Smart rats and clean identification

One of the greatest problems of aggregation is that one often cannot hold the composition of the aggregated variables constant. Were there a tight lattice structure preventing slippage within society, we could be more confident about policy recommendations. Competition plays this role in economics but it cannot be relied upon to hold structure entirely constant. But since structure tends to self-organize in response to policy, it becomes ever more difficult to distinguish correlation from causality. This is, of course, an age-old problem and it pervades every branch of science. Feynman (1999) in the classic “Cargo-cult science” describes the attempts of a psychologist, identified only as Young, to hold variables constant in an experiment with rats looking for food:

The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell … He finally found that they could tell by the way the floor sounded when they ran over it. And he fixed that by putting his corridors in sand. So he covered one after another of all possible clues and finally was able to fool the rats so they would go in the third door. If he relaxed any of his conditions, the rats could tell.

(Feynman (1999:215))

Feynman goes on to claim that this is “A-number-1 science” because it reveals the efforts one must undertake to hold everything constant. Macroeconomics also seems to have had its smart rats.

In retrospect, it seems clear now that the macroeconomics of the 1950s and 1960s was held together by spurious correlation of macro variables driven by time. When time was removed by way of co-integration techniques, much of the supposed causality in macroeconomic theory evaporated. One reason macroeconomics enjoys so little street value now is that it has since proved so difficult to rebuild on a solid foundation. Micro has to some degree risen to the challenge.
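To see what is at stake, consider a minimal sketch (mine, not the chapter’s) of the spurious-regression problem just described: two independent random walks share a stochastic trend, so in levels they appear strongly correlated, yet the correlation vanishes once the common time trend is removed by differencing, the simplest cousin of the co-integration techniques mentioned above.

```python
# Two independent random walks look correlated in levels because both
# trend with time; differencing removes the trend and the "causality"
# evaporates, as the text describes for 1950s-60s macro correlations.
import numpy as np

rng = np.random.default_rng(0)
T = 500
x = np.cumsum(rng.normal(size=T))   # independent random walk 1
y = np.cumsum(rng.normal(size=T))   # independent random walk 2

corr_levels = np.corrcoef(x, y)[0, 1]                    # often sizable
corr_diffs = np.corrcoef(np.diff(x), np.diff(y))[0, 1]   # near zero

print(f"levels: {corr_levels:.2f}, differences: {corr_diffs:.2f}")
```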

Levitt and Dubner (2005) is perhaps the most visible evidence of the restoration of the scientific method in economics, but this has been accompanied by an invasion of social science by methods from chemistry, biology, geology and a wealth of other disciplines. Diamond is another well-known architect of the interdisciplinary approach (Diamond (1997, 2005)), but there is also Hoxby’s study of competition in schools as demarcated by streams and rivers in urban environments: more streams implied more schools and better learning outcomes (Hoxby (2000)). Another well-known example is Donohue and Levitt’s (2001) claim that the legalization of abortion after Roe v. Wade resulted in the lower crime rates of the 1990s. Their work challenged Lott’s (2000) assertion that “shall carry” laws were responsible.

These last papers are all based on natural or quasi-experiments, differential applications of policies in an arguably random way.3 Natural experiments are second only to the gold standard of controlled experiments, such as the well-known STAR study of the effect of student–teacher ratios on test scores (Mosteller (1995)). But controlled experiments, like professional football, are often expensive and sometimes dangerous and almost always imperfectly controlled.4 Still, clean identification is an attempt to more closely adhere to Feynman’s definition of A-number-1 science in the effort to distinguish correlation from causality.

The quasi-experimental approach, what Heckman has called the “cute and the clever,” is an effort to restore credibility to correlations attacked by skeptics (Scheiber (2007)). These studies range from Levitt’s “Why drug dealers don’t live with their mamas,” to point shaving in basketball and sumo wrestling.5 The methods and datasets used in some of these studies have already found their way into econometrics textbooks and serve to educate future econometricians.

To the heterodox mind, this may just be additional evidence of the complete sell-out of the orthodox establishment, hiding behind methodological refinements to avoid confronting more serious problems.6

Traditional economic issues such as poverty, inequality, business cycles, global warming and environmental racism all go unanalyzed for lack of proper instruments or experiments by which confounding factors may be eliminated or controlled for. From this optic, there are too few, not too many, interesting problems amenable to statistical analysis based on the naturally randomizing treatment effects afforded by quirks of nature or the whim of policymakers.

Big issues, however, almost always arise as outcomes, or ex post aggregates. They result from something more fundamental at the ground level, behavior that was not adequately captured in the aggregate models. Thus, even if they were correct, the models would not lead to any clear policy implications because they do not model the diversity of underlying agents. On the other hand, Levitt’s studies of criminal behavior do at least show that criminals are partially rational and will therefore likely respond to incentives to obey the law. Most macroeconomic indicators, by contrast, are signals that some underlying behavior in the system might be out of adjustment. What it is and how we are to get it back into sync remains unspecified by the model. This is why studies with clean identification are reviewed in scientific publications such as Nature, Science and Scientific American while macro studies undertaken by heterodox economists never are. The former have street value, clearly understandable methodologies and direct policy implications.

There is one set of macro studies undertaken by heterodox economists that does seem to draw a reaction from outside the heterodox community: agent-based or multi-agent systems models.


Agent-based models

Choi and Bowles (2007), for a recent example, published a study in Science using an agent-based framework to study the coevolution of altruism and war. Agent-based models grew out of game theoretic simulations and papers by Fehr, Basu, Axelrod and others have been regularly reviewed in the scientific press.7 They hold out the promise of separating correlation and causality because they allow experiments to be undertaken in silico, with literally everything else held constant.8 They are also relatively inexpensive and safe, except for the occasional laptop fire. The catch is convincing the scholarly community that the simulation is realistic and appropriate to the problem.

Might it be possible, then, to have a macro-theoretical framework that relies on individual self-interest, however imperfectly expressed, and at the same time addresses bigger questions than the cute and clever micro literature? Let us first specify what we mean: agent-based models blend structure and agency in a way that emphasizes the individual. This is not to say that “structure does not matter”: decisions made by agents in the past confront current agents as ossified structure, which is therefore ultimately endogenous. The macro features of the model are not imposed, but rather arise out of the micro specification as emergent properties (Gatti et al. (2008)). Following Jensen and Lesser (2002) for a general definition, a multi-agent system S is composed of n agents A = {a1, a2, … , an} and an environment E. Each agent is an object with methods that cannot generally be invoked by other agents. Agents operate on state variables and transform them according to the methods each agent employs. State variables are passed to agents and serve to define the spatial distribution of resources, information about other agents and any additions or updates of the methods by which this information is processed. The concept of an agent includes the standard notion of consumer or producer as special cases, but is broader and more general.
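A minimal sketch may help fix ideas. The class and method names below (Environment, step and so on) are illustrative assumptions of mine, not drawn from Jensen and Lesser (2002); the point is only that agents are objects whose methods, invokable by no other agent, transform state variables passed to them by the environment.

```python
# Skeleton of a multi-agent system S = (A, E): n agents and an
# environment of state variables, per the definition in the text.
from dataclasses import dataclass, field

@dataclass
class Environment:
    # State variables: spatial distribution of resources, information
    # about other agents, updates to the methods that process it.
    state: dict = field(default_factory=dict)

class Agent:
    def __init__(self, aid: int):
        self.aid = aid

    # An agent's methods transform the state variables passed to it;
    # other agents cannot invoke them directly.
    def step(self, env: Environment) -> None:
        ...  # read env.state, update own beliefs, act

@dataclass
class System:
    agents: list          # A = {a1, ..., an}
    env: Environment      # E

    def run(self, rounds: int) -> None:
        for _ in range(rounds):
            for a in self.agents:
                a.step(self.env)
```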

Agents in multi-agent systems are best thought of as heterogeneous computational entities that make decisions in an informationally constrained environment and with limited computational means (Wooldridge (2002); Sandholm and Lesser (1997)). Agents may lack the resources to make the calculations leading to an optimal allocation and/or may not have the time to complete calculations already begun if the environment E changes. “Social pathologies” studied in the game theoretic literature, various prisoners’ dilemmas, suboptimal spending on public goods and ultimatum irrationalities, can easily arise in multi-agent systems (Jensen and Lesser (2002)).

Schelling’s (1971) neighborhood model is an early example of a multi-agent system. White liberals following simple behavioral rules generate entirely segregated neighborhoods, despite preferences for more integrated ones. Each agent is unaware of the true nature of its neighbors and only processes information about skin color. Where white neighborhoods will be in the next round is more difficult for the agents to compute, however, and in the simplest version of the model an agent moves randomly away from its current, undesirable location, without thinking about where it will land. An agent might improve the chances that it would not have to move a second time by way of a method that predicts the moves of other white liberals.
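The model is simple enough to state in a few lines of code. The sketch below is a stripped-down rendering, not Schelling’s original specification: the grid size, vacancy rate and the 30 percent tolerance threshold are assumptions for illustration; Schelling explored a range of such parameters.

```python
# Schelling-style segregation on a torus: agents see only neighbors'
# color and, when unhappy, move blindly to a random empty cell,
# exactly the "move without thinking where it will land" rule above.
import random

N, EMPTY, THRESH = 20, 0.1, 0.3
grid = [[random.choice([0, 1]) if random.random() > EMPTY else None
         for _ in range(N)] for _ in range(N)]

def unhappy(i, j):
    me = grid[i][j]
    nbrs = [grid[(i + di) % N][(j + dj) % N]
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    same = sum(1 for n in nbrs if n == me)
    occupied = sum(1 for n in nbrs if n is not None)
    return occupied > 0 and same / occupied < THRESH

for _ in range(100_000):                  # random asynchronous updates
    i, j = random.randrange(N), random.randrange(N)
    if grid[i][j] is not None and unhappy(i, j):
        ei, ej = random.randrange(N), random.randrange(N)
        if grid[ei][ej] is None:          # move blindly to an empty cell
            grid[ei][ej], grid[i][j] = grid[i][j], None
```

Even with agents this myopic, runs of the model end in near-complete segregation, the emergent property the text emphasizes.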

Agents are then computational entities or objects and any personality that might or might not evolve is itself an emergent property of these computations. In Gibson (2007), agents make only one decision, whether to stay in their current job or leave it, and yet interesting macroeconomic properties arise. Initially technologies, or blueprints, are randomly scattered around a grid. Both a unit of labor (an agent) and a variable amount of finance are required in order to activate the technology of a given cell. Finance is available from wealth accumulated by agents in the past and is distributed back to cells according to profitability with a random error term. Profit is the difference between output and wages and is redistributed to agents in proportion to their wealth plus a random error term.

The key to the dynamics of the model is the wage bargain between agents and the cells on which the agents reside. Cells can compute the marginal product of labor, but agents lack sufficient information. Agents can compute their own reservation wage, based on life-cycle variables as agents age, reproduce and die.

The decision variable is whether the agent is satisfied with her current job. Job satisfaction depends mostly upon whether wealth is increasing or decreasing, but there are also variables that derive from reinforcement learning models (Sutton and Barto (1998)). Agents must learn what the grid as a whole has to offer in terms of consumption possibilities. Unsuccessful agents become “stuck” in relatively low wage jobs either because they do not have the accumulated wealth to finance a move or they lack the education and skills required to take advantage of nearby opportunities.

If agents move, they must then Nash-bargain over the wage payment with the new cell. In the Nash bargain, the surplus is defined as the difference between the marginal product of labor and the agent’s reservation wage. The outcome of the bargaining process depends on the relative impatience of the agent and the cell. Cells are equally impatient in that they know that unless they are profitable, they will be unable to attract capital and will fall into disuse. Agents realize that if they reject the offered wage they must move again, with all the associated costs and uncertainty. If the agent’s reservation wage exceeds the marginal product, cells raise their prices, provoking inflation. As a result, they are less able to compete for finance for their operations.
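A hedged sketch of such a bargain follows. The split-by-impatience parameterization (theta) is my own reading of the description above, not necessarily how Gibson (2007) implements it.

```python
# Nash wage bargain: the surplus is the gap between the cell's marginal
# product and the agent's reservation wage; the more impatient side
# concedes more of it. Requires Python 3.10+ for the "| None" annotation.
def nash_wage(marginal_product: float,
              reservation_wage: float,
              agent_impatience: float,
              cell_impatience: float) -> float | None:
    surplus = marginal_product - reservation_wage
    if surplus <= 0:
        return None   # no viable bargain: the cell would be unprofitable
    theta = cell_impatience / (agent_impatience + cell_impatience)
    return reservation_wage + theta * surplus

# e.g. an impatient agent facing a patient cell captures little surplus
w = nash_wage(marginal_product=10.0, reservation_wage=6.0,
              agent_impatience=0.8, cell_impatience=0.2)
```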

Figure 6.1 Income classes.

In this simple model the economy grows with less than full employment on a track that underutilizes the available technology. There is very little that is optimal about this model in the traditional sense, but neither is it excessively prone to mass unemployment nor spiraling inflation. A skewed distribution of income is an emergent property of this simple system. Figure 6.1 is drawn from Gibson (2007).

Even if the economy begins with an egalitarian wealth distribution, the distribution will deteriorate over time, eventually following a power law. Educated agents who secure good jobs early and keep them for a long time end up wealthy. Those who move tend to run down their wealth, but they may also succeed in finding a better opportunity.
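One way such an emergent property can be checked ex post is a rank-size plot: under a power law, log rank is linear in log wealth and the slope estimates the tail exponent. The Pareto draw below is a synthetic stand-in of mine for the simulation’s ex post wealth vector, not data from Gibson (2007).

```python
# Rank-size check for power-law behavior in an ex post wealth vector.
import numpy as np

rng = np.random.default_rng(1)
wealth = np.sort(rng.pareto(2.0, 10_000))[::-1]   # stand-in wealth data
rank = np.arange(1, wealth.size + 1)

# Fit log(rank) = a + b*log(wealth) over the upper tail only; the slope
# b is minus the tail exponent.
tail = slice(0, 1_000)
b, a = np.polyfit(np.log(wealth[tail]), np.log(rank[tail]), 1)
print(f"estimated tail exponent: {-b:.2f}")       # ~2.0 for this draw
```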


Appeal to heterodoxy

Heterodox economists have to some extent embraced the agent-based methodology discussed here, as the citations above suggest. Foley (2003) makes the strongest case that the method of complex adaptive systems is broadly consistent with the underlying program of classical political economy. Smith, Ricardo, Malthus and Marx all analyzed the capitalist system as an ordered social structure, arising out of the chaos of individual decisions. The system is “self organized” in that no one, not even a Walrasian auctioneer, is directing the outcome. Foley notes that complex systems are “dialectical” in that the components can exhibit features that appear to contradict those of the system as a whole.

Above all, Foley sees multi-agent systems in fundamental opposition to the pseudo-dynamic neoclassical system. The latter is designed for an essentially static world in which a stable equilibrium is the principal theoretical objective. Time paths converge to a well-defined equilibrium, the characteristics of which have nothing to do with the processes by which they were obtained. He notes that complex adaptive structures are also determinate in that it is at least in principle possible to write down their equations of motion. In practice, however, it is not usually possible to obtain a closed-form solution that serves as a practical guide to the dynamic properties of the system.

Complex systems are globally stable in the limited sense that they have broad error tolerance and are relatively invulnerable to localized failure. They are not brittle systems subject to catastrophic collapse as some crisis theorists have foretold. On the other hand, they tend to exhibit a wide range of sub-optimalities that can be studied experimentally via computer simulations. Heterodox economists interested in policy formulation, whether in regard to racial segregation, class formation, poverty and income distribution or barriers to growth and technological change, might well find agent-based models analytically suitable. It is one thing, for example, to build into a model unintended consequences of some policy or program, consequences foreseen by the analyst. It is quite another when the model itself generates unintended consequences on its own, surprising analyst and reader alike.


Conclusion

As the model of the previous section illustrates, the only way to avoid the problems of aggregation in macroeconomics is to start building the paradigm from the bottom up. Macroeconomics must become an “emergent property” of the micro, not a simple aggregation, but something surprising that was not obvious from inspection of the individual microeconomic elements. The microfoundations afforded by the agent-based approach provide a link between the previously disembodied macroeconomic framework of analysis and the underlying heterogeneous and boundedly rational agents that populate the system. This framework can be calibrated empirically to specific historical economies to ask questions about how policies might affect individual and thus aggregated outcomes.

The return to the scientific foundations of research methodology seems to be less about high theory than about better observations and in this regard, orthodox economics is a very different opponent from what it was in the 1960s and 1970s. A common criticism of the orthodoxy in the past was over-application of over-simplified theory: “Have model, will travel” was taken to imply that, like Paladin, the theorist had no concern for the broader implications. Now the reverse seems to be true: with cleaner observations and more diverse, dare one say heterodox, theory, the scientific method is showing its true worth, at least when applied to small, well-defined problems. The challenge is to return to the bigger questions and this chapter has provided some suggestions for how that could be done.

The conflict between big and little questions is hardly confined to the arena of economics. There are many scientists who opposed the Superconducting Super Collider which was projected to cost more than 12 billion dollars before it was canceled by Congress in 1993 (Mervis and Seife (2003)). Just as many object to the space program and in particular to the International Space Station (ISS) as a colossal waste of money. The amount of real science that is accomplished on the ISS is minimal and there has been heavy criticism of it since it crowds out smaller projects.

Consider this from Steven Weinberg, a particle physicist at the University of Texas at Austin and a co-recipient of the 1979 Nobel Prize in physics:

No important science has come out of it. I could almost say no science has come out of it. And I would go beyond that and say that the whole manned spaceflight program, which is so enormously expensive, has produced nothing of scientific value.

This is not just one opinion: in 1991, the American Physical Society issued a policy statement that “potential contribution of a manned space station to the physical sciences have been greatly overstated” (Klerkx (2004:228)).9

Macroeconomics may have in the past escaped the surly bounds of science in order to pretend to answer the big questions. The rest of science, however, remains very dismal. To take a well-known example, string theory, an attempt in physics to reconcile the macro of relativity and the micro of quantum theory, has been less than fully successful. An editorial in Scientific American recently referred to string theory as “recreational mathematical theology” while Woit (2006) argues that it is Not Even Wrong. Now that is failure on a grand scale.


Notes

1 January 2008; John Converse Professor of Economics, University of Vermont, Burlington, VT 05405 and Professor of Public Policy and Economics, University of Massachusetts, Amherst, MA 01003 USA 413–548–9448. wgibson@econs.umass.edu; http://people.umass.edu/wgibson. Thanks to Diane Flaherty and Jon Goldstein for comments on an earlier version of this paper.

2 See Debreu (1974). For an interpretation of SMD theory, see Rizvi (1994, 1997). Except under restrictive conditions, the aggregate excess demand function need not be downward sloping; it could take on any shape whatsoever. The long sought after link between micro and macroeconomics seemed to be permanently out of reach. Some writers barely acknowledged the rift and continued to assert the primacy of macro over micro for a variety of reasons. But for the bulk of the profession, the unification of macro and microeconomics had been mortally wounded by SMD.

3 For the counter-argument see Helland and Tabarrok (2004).

4 And, often unnecessary. See Smith and Pell (2003) for a satirical account of control group methodology.

5 See Levitt and Dubner (2005) and http://ideas.repec.org/e/ple59.html for a more complete list of topics.

6 Clean identification is not based on an assumed superiority of the rational model. Just as often, its studies reveal that fixed costs matter or that there is asymmetry of up- and down-side risk or that individuals contribute to public goods, vote or care about fairness when neoclassical theory suggests that they should not. The rational model is interrogated on many levels, theoretical with multiple-self configurations, experimentally and in numerical simulations of neurotransmission mechanisms and in neuroscience experiments with live subjects.

7 See for example Brock and Durlauf (2005), Durlauf and Young (2001) and Gatti et al. (2008).

8 It is appropriate that in silico here only simulates Latin and is not the real thing.

9 Certainly there can be no bigger issues than the sequencing of the human genome, our place in the universe and our ability to colonize other worlds for good or evil, but far more scientific activity is devoted to much smaller questions. Why? Because the small questions are answerable and the big ones may or may not be; and if they are, it will only be the result of the expenditure of vast sums of money, time and possibly careers. Moreover, big questions may in themselves undermine the scientific method in that results that necessarily involve massive expenditure are ipso facto difficult to replicate.


References

Bhaduri, A. and Marglin, S. (1990) “Unemployment and the real wage: the economic basis for contesting political ideologies,” Cambridge Journal of Economics, 14(4): 375–93.

Bowles, S. and Heinz, J. (1996) “Wages and jobs in the South African economy: an econometric investigation.” Department of Economics, University of Massachusetts.

Brock, W. A. and Durlauf, S. (2005) “Social interactions and macroeconomics.” Available at www.ssc.wisc.edu/econ/archive/wp2005-05.pdf.

Choi, J.-K. and Bowles, S. (2007) “The co-evolution of parochial altruism and war,” Science, 318(5850): 636–40.

Colander, D. (2003) “Post Walrasian macro policy and the economics of muddling through,” International Journal of Political Economy, 33(2): 17–35.

Debreu, G. (1974) “Excess demand functions,” Journal of Mathematical Economics, 1: 15–21.

Diamond, J. (1997) Guns, Germs, and Steel: The Fates of Human Societies. New York: W. W. Norton and Company.

——(2005) Collapse: How Societies Choose To Fail Or Succeed. New York: Viking.

Donohue, J. J. and Levitt, S. D. (2001) “The impact of legalized abortion on crime,” The Quarterly Journal of Economics, 116(2): 379–420.

Durlauf, S. N. and Young, H. P. (2001) Social Dynamics, Volume 4 of Economic Learning and Social Evolution. Cambridge, MA: MIT Press for the Brookings Institution.

Dutt, A. (2003) “On post Walrasian economics, macroeconomic policy and heterodox economics,” International Journal of Political Economy, 33(2): 47–67.

Feynman, R. (1999) The Pleasure of Finding Things Out. Cambridge, MA: Perseus Books.

Foley, D. (2003) Unholy Trinity: Labor, Capital and Land in the New Economy. London and New York: Routledge.

Gatti, D. D., Gaffeo E., Gallegati, M., Giulioni, G. and Palestrini, A. (2008) Emergent Macroeconomics. Frankfurt: Springer.

Gibson, B. (2003) “Thinking outside the Walrasian box,” International Journal of Political Economy, 33(2): 36–46.

——(2007) “A multi-agent systems approach to microeconomic foundations of macro.” University of Massachusetts, Department of Economics Working Paper series.

Helland, E. and Tabarrok, A. (2004) “Using placebo laws to test more guns, less crime,” Advances in Economic Analysis and Policy, 4(1): 1–9.

Hoxby, C. (2000) “The effect of class size on student achievement: new evidence from population variation,” Quarterly Journal of Economics, 115(4): 1239–85.

Jensen, D. and Lesser, V. (2002) “Social pathologies of adaptive agents,” in M. Barley and H. Guesgen (eds.) Safe Learning Agents: Papers from the 2002 AAAI Spring Symposium. Menlo Park: AAAI Press.

Klerkx, G. (2004) Lost in Space: The Fall of NASA and the Dream of a New Space Age. New York: Pantheon.

Levitt, S. D. and Dubner, S. J. (2005) Freakonomics: A Rogue Economist Explores the Hidden Side of Everything. New York: HarperCollins.

Lott, J. (2000) More Guns Less Crime. Chicago: University of Chicago Press.

Lucas, R. E. (1976) “Econometric policy evaluation: a critique,” Carnegie-Rochester Conference Series on Public Policy, 1(1): 19–46.

Lucas, R. E. and Sargent, T. J. (1978) “After Keynesian macroeconomics,” in After the Phillips Curve: Persistence of High Inflation and High Unemployment, 49–72, Boston: Federal Reserve Bank of Boston.

Mervis, J. and Seife, C. (2003) “10 years after the SSC: lots of reasons, but few lessons,” Science, 302 (3 October): 38–40.

Mosteller, F. (1995) “The Tennessee study of class size in the early school grades,” The Future of Children: Critical Issues for Children and Youths, 5(2): 113–27.

Nattrass, N. (2000) Macroeconomics: Theory and Policy in South Africa (2nd edn). Cape Town: David Philip.

Rizvi, S. A. T. (1994) “The microfoundations project in general equilibrium theory,” Cambridge Journal of Economics, 18: 357–77.

——(1997) “Responses to arbitrariness in contemporary economics,” History of Political Economy, 29(Supplement): 273–88.

Sandholm, T. W. and Lesser, V. R. (1997) “Coalitions among computationally bounded agents,” Artificial Intelligence, 94(1–2): 99–137.

Scheiber, N. (2007) “Freaks and geeks: how freakonomics is ruining the dismal science,” The New Republic: 27–31.

Schelling, T. (1971) “Dynamic models of segregation,” Journal of Mathematical Sociology, 1(July): 143–86.

Smith, G. C. S. and Pell, J. P. (2003) “Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomized controlled trials,” BMJ, 327(7429): 1459–61.

Surowiecki, J. (2004) Wisdom of Crowds. New York: Doubleday.

Sutton, R. S. and Barto, A. G. (1998) Reinforcement Learning. Cambridge, MA and London: MIT Press.

Woit, P. (2006) Not even Wrong: The Failure of String Theory and the Search for Unity in Physical Law. New York: Basic Books.

Wooldridge, M. (2002) MultiAgent Systems. West Sussex: John Wiley and Sons.