The Nature and Significance of Economic Science

Neoclassical economics extended Adam Smith's original formulation of how a private enterprise economy functions. It started with the behavioral assumption that individuals seek to maximize their benefits. One line of deductive reasoning concluded that if all individuals do that, the benefits to society as a whole will be maximized (Menger). A second line of reasoning analyzed demand, supply, and price in individual markets, showing how production responds to consumer demand. In the process, producers are shown to produce goods at the lowest possible cost consistent with a continuing supply at levels desired by consumers (Jevons and Marshall). A third line showed how all portions of the market system were tied together in a seamless web, creating a benefit-maximizing and cost-minimizing general equilibrium (Walras). Marginal productivity theory answered socialist critics by arguing that there was no exploitation of labor, that each person received a reward equal to the value of his or her contribution to total output. And economic stability could be assured by proper management of the monetary system.

The theory was also extended to imperfect competition and monopoly in the writings of Joan Robinson (English, 1903-1983), Edward H. Chamberlin (American, 1899-1967), Heinrich von Stackelberg (German, 1905-1946), and others, who dropped the assumption of perfect competition. This work showed that when the number of firms in an industry is not large enough for perfect competition to exist, the resulting market price is higher and output is lower than in perfectly competitive markets, even when firms make only normal profits. In industries dominated by a few large firms (oligopoly, meaning few sellers), practices of price fixing and market sharing tended to replace competition or rivalry, resulting not only in high prices and limited output, but also in high profits and a tendency to hamper innovation and the development of new technologies. These findings used the same techniques and concepts developed by the earlier neoclassical theorists of perfect competition and brought a healthy dose of reality into the discipline.
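To see why fewness of sellers raises price and restricts output, consider a small numerical sketch. The Python fragment below (with invented linear demand and cost parameters, not drawn from any of these authors) computes the symmetric Cournot equilibrium for n identical firms and compares it with the perfectly competitive outcome:

```python
# Cournot oligopoly vs. perfect competition: a minimal numerical sketch.
# Assumes linear inverse demand P = a - b*Q and constant marginal cost c.
# All parameters are hypothetical, chosen only for illustration.

a, b, c = 100.0, 1.0, 20.0  # demand intercept, demand slope, marginal cost

def cournot(n):
    """Symmetric Cournot equilibrium with n identical firms:
    each firm produces (a - c) / (b * (n + 1))."""
    total_q = n * (a - c) / (b * (n + 1))
    price = a - b * total_q
    return total_q, price

competitive_q = (a - c) / b  # competition drives price down to marginal cost
for n in (1, 2, 5, 100):
    q, p = cournot(n)
    print(f"{n:>3} firms: output {q:6.1f}, price {p:5.1f}")
print(f"competitive: output {competitive_q:6.1f}, price {c:5.1f}")
```

With one or two sellers, price sits well above marginal cost and output well below the competitive level; as the number of firms grows, both converge toward the competitive outcome.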

The neoclassical economists began using mathematics and mathematical logic as important tools of theoretic analysis. Rigorous formal argument that could be stated in algebraic equations or illustrated by geometric diagrams led to a new, more accurate mode of professional discourse.

Mathematics had not been a significant analytical tool for the classical economists, with two exceptions. Antoine Augustin Cournot (1801-1877), a Frenchman trained in mathematics and employed as a university administrator, wrote Researches into the Mathematical Principles of the Theory of Wealth (1838), in which economic relationships were presented as algebraic equations. For example, the proposition that more of a commodity will be sold at a lower price than at a higher price becomes D = F(p), that is, demand is a function of price, in the language of mathematics. F can then be analyzed as an algebraic equation that specifies the factors that determine the shape and slope of the demand curve. An English engineer who dabbled in economics, H. C. Fleeming Jenkin (1833-1885), then developed Cournot's concept of the demand function, and the related concept of the supply function (S = F(p), with a different F), into a geometric diagram of the determination of price in competitive markets that is still in use today. His paper, "The Graphic Representation of the Laws of Supply and Demand" (1870), written during a controversy over the characteristics of prices in competitive markets, showed that an equilibrium price was one at which supply and demand were equal. Sound familiar?
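Jenkin's equilibrium condition is easy to state in modern terms: the market clears at the price where quantity demanded equals quantity supplied. A minimal sketch in Python, using hypothetical linear forms for D(p) and S(p), makes the point:

```python
# Supply-and-demand equilibrium in the spirit of Cournot and Jenkin.
# Hypothetical linear forms: D(p) = a - b*p and S(p) = c + d*p.

a, b = 120.0, 2.0   # demand: quantity falls as price rises
c, d = 20.0, 3.0    # supply: quantity rises as price rises

def demand(p):
    return a - b * p

def supply(p):
    return c + d * p

# Setting D(p) = S(p) gives the equilibrium price p* = (a - c) / (b + d).
p_star = (a - c) / (b + d)
print(f"equilibrium price {p_star:.1f}, quantity {demand(p_star):.1f}")
# At any higher price supply exceeds demand; at any lower price demand
# exceeds supply. Only at p* are supply and demand equal.
```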

Not much more was done with mathematics as an analytic tool until the rise of neoclassical economics. Although William Stanley Jevons made only limited use of mathematics in his Theory of Political Economy (1871), he was convinced of its importance. He wrote in a letter, "I wish especially to become a good mathematician, without which nothing ... can be thoroughly done." Jevons's contemporary, Léon Walras, took a more effective mathematical approach in his Elements of Pure Economics (1874), which is full of complex geometric diagrams and systems of algebraic equations. Although Carl Menger's Principles of Economics (1871) had only a few arithmetic tables to illustrate diminishing marginal utility, Alfred Marshall's Principles of Economics (1890) provided rigorous proofs of economic propositions in footnotes using geometric diagrams and in an algebraic mathematical appendix. Two Austrian economists, Rudolf Auspitz and Richard Lieben, published Researches on the Theory of Price (1889), which analyzed the determination of market prices in geometric diagrams, followed by mathematical appendices that repeated the argument using the differential calculus. Vilfredo Pareto, who followed Walras in the economics chair at the University of Lausanne in 1893, developed his predecessor's theory of general equilibrium in a more rigorous mathematical model in his Manual of Political Economy (Italian edition, 1906; expanded French edition, 1909). Several economists after the Second World War developed a more sophisticated general equilibrium model based on Pareto but using mathematical concepts and techniques not available in his day.

Meanwhile, in England, Francis Y. Edgeworth's Mathematical Psychics (1881) argued that the use of mathematics can eliminate "loose indefinite relations" in theory, while developing an "economical calculus" of exact meanings. Edgeworth went on to develop algebraic formulations and geometric diagrams to show how demand curves were generated from individual preferences, and how freely negotiated contracts benefited both parties to the agreement.

In the United States, Irving Fisher's Mathematical Investigations in the Theory of Value and Prices (1892) was a masterly exposition of general equilibrium theory, stated in mathematical form. It emphasized the interrelatedness of economic phenomena: a change in one variable results in changes in all other variables until a new equilibrium is established. Thus, a change in the price of one commodity ripples through the entire system of prices as consumers readjust the amounts they buy, and all other prices in the system change, even if only slightly.
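Fisher's point about interrelatedness can be illustrated with a toy general-equilibrium system. The sketch below (a hypothetical two-good linear model, far simpler than Fisher's own) shocks the demand for one good and shows that both equilibrium prices change:

```python
import numpy as np

# A two-good linear system in the spirit of Fisher's interrelatedness:
# equilibrium prices p solve A @ p = s, where A holds own- and cross-price
# effects and s holds demand shift terms. All numbers are hypothetical.

A = np.array([[2.0, -0.5],
              [-0.5, 3.0]])
s = np.array([10.0, 12.0])
p_old = np.linalg.solve(A, s)

# Shock only the first market (tastes shift toward good 1) ...
p_new = np.linalg.solve(A, s + np.array([3.0, 0.0]))

print("old prices:", p_old.round(3))
print("new prices:", p_new.round(3))
# ... and *both* prices change: the disturbance ripples through the whole
# system until a new equilibrium is established.
```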

These beginnings, and others like them around the turn of the century, were given greater impetus by the publication of Principia Mathematica (3 volumes, 1910-1913) by two English philosopher-mathematicians, Alfred North Whitehead (1861-1947) and Bertrand Russell (1872-1970). The title of the book invited comparison with Isaac Newton's great 1687 treatise, Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy). Whitehead and Russell knew they had something big. Both men were working on problems in logic when they decided to work together on the logic of arithmetic. They began by defining a few basic concepts in meaningful and unambiguous terms, then moved by logical steps to more complex concepts. They used this purely deductive method to derive and explain all the basic propositions of arithmetic—in three volumes of intricate logic and mathematical equations. It was not a book that ordinary people could fully grasp, but we can understand its larger implications.

Whitehead and Russell derived propositions and conclusions step by step from initial assumptions through purely logical analysis—from initial cause to final effect by pure logic. This deductive method had been widely used before, but there had always been problems created by the different meanings implied by words. Whitehead and Russell solved this problem by using mathematical symbols (+, −, ÷, ×, =) as verbs and other mathematical symbols (1, 2, 3, or a, b, c) as nouns. Sentences looked like 1 + 1 = 2 or a² + b² = c². Highly complex statements using mathematical logic could derive rigorously correct conclusions from initial assumptions in this type of formal system. It seemed as if the ancient Aristotelian ideal of perfect deduction from initial premises was at last possible. By implication, the deductive method could be applied to any phenomenon, not just arithmetic. This was the road to truth. It was as if God the Creator was a great mathematician who had created a world in which everything had an inner formal logic, and Whitehead and Russell had discovered the Tree of Knowledge.

But alas, it was not to be. In 1931 Kurt Gödel, an Austrian mathematician who later joined the Institute for Advanced Study at Princeton, detected a serious flaw in the mathematical logic of formal systems, known as Gödel's Undecidability Theorem. Briefly, this theorem states that a long chain of deductive logic will contain one or more propositions that cannot be proven to be true or to be false. Pure logic was not enough. Even if the undecidable proposition were known to be true, it could not be proven within the system of logic itself. Acceptance of such propositions must come from outside the system, either through casual empiricism (real-world examples), faith, or assumption. If God the Creator was a mathematician, Gödel showed that God had left room for both doubt and faith.

There is a further limitation on the use of formal systems of logic in economics. When one argues, for instance, "If A, then B" about real-world relationships, the result must be expressed as a probability. Economic data are only samples of all the possible instances. We never have all cases of A or B. Since we have only a sample, we know that the probability of "If A, then B" is something less than 100 percent. So when we have a long series of logical statements, such as "If A, then B; if B, then C; ... if Y, then Z," the probability of "If A, then Z" becomes very small. Consider this: if there is a high probability that each of the "if ... then" statements is correct (say, 90 percent), by the time one reaches the twelfth statement, the probability that "If A, then M" is correct is less than 30 percent. If we go all the way to "If A, then Z," that statement will be correct only about 7 percent of the time, given the 90 percent probability of each statement. The formal logic may be correct, but when it is applied to the real world, the results may be quite unreliable.
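The compounding is simple to verify. A few lines of Python reproduce the figures in the paragraph above (assuming, for simplicity, that the links are independent):

```python
# Each "if ... then" link holds with probability 0.9; the probability that
# a whole chain of independent links holds falls off geometrically.
link_prob = 0.9
for links, label in [(1, "If A, then B"), (12, "If A, then M"),
                     (25, "If A, then Z")]:
    print(f"{label}: {link_prob ** links:.3f}")
# If A, then B: 0.900
# If A, then M: 0.282  (less than 30 percent)
# If A, then Z: 0.072  (only about 7 percent)
```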

Despite Gödel's undecidability theorem and the difficulties of applying systems of formal mathematical logic to real-world economic problems, the use of formal systems of logic in economic theory spread rapidly during the 1920s and 1930s. This development in high theory was strongly influenced by a 1932 volume, An Essay on the Nature and Significance of Economic Science, by Lionel Robbins (1898-1984), an English economist who spent most of his academic career at the London School of Economics. Robbins argued that there was a body of economic knowledge free from value judgments. This scientific economics was derived by logical deduction from obviously correct basic assumptions. The fundamental problem of economics was the obvious "fact" of scarcity: people had unlimited wants, but the means of satisfying those wants were limited in supply. This led to Robbins's definition of economics as the "science which studies human behaviour as a relationship between ends and scarce means which have alternative uses," a statement that even today is found in one form or another in the first chapter of almost every introductory economics textbook in the English-speaking world. Robbins argued that general propositions could be developed only by logical deduction from correct premises. Empirical studies were of very limited value, because the conclusions were valid only for the time and place from which the data were drawn. On the other hand, general propositions derived by logical analysis from correct assumptions could be applied to any situation, in any time and place.

Robbins sought to purge economics of value judgments. For example, arguments that a more egalitarian distribution of income was somehow "better" than the existing one were unscientific. How could one prove that taking a dollar from a rich man and giving it to a poor widow would reduce the rich man's satisfactions by less than it would add to the widow's? Robbins wanted careful logic, not value judgments or ethical and moral standards.

The method of logical deduction from correct assumptions championed by Robbins derived much of its rigor from its simple theoretical structure. The boundaries of economic activity were clearly defined in the institutional structure of a system of self-adjusting markets. There were no complications derived from complex social institutions such as family, religion, or state, which were rarely mentioned by the neoclassical economists. The driving force was also simple: the acquisitive nature of human beings, which was assumed to be a universal constant. This gave the results of theoretical analysis an aura of universal validity and applicability. Like Newtonian physics, it was a science of finite space in which inexorable natural forces worked out a stable equilibrium.

Critics, however, were quick to point out faults. The new methodology was criticized as being essentially static, like Newtonian physics, and not well adapted to analysis of an economy in constant flux and disequilibrium. It assumed the existence of a universal human nature—acquisitive and economic—that was attacked as a distortion. There was no room for changes in the institutional structure of the economy in a method that assumed ceteris paribus, that is, "everything else remains the same." And there was no way to determine how much of a change would occur from one position of equilibrium to another. In brief, critics argued that the analytical concepts were limited, unrealistic, and not quantified.

This criticism led to the final element in the methodology of neoclassical economics: empirical studies to verify or disprove the results of theoretic analysis. Theory would provide a hypothesis, which would then be tested by empirical studies. For example, the conclusion that a higher price for automobiles would result in reduced purchases of gasoline could be verified or disproved by statistical studies relating automobile prices to the demand for gasoline. This required that theoretical concepts be at least potentially testable.
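What such a test looks like in practice can be sketched with synthetic data. The Python fragment below fabricates observations in which gasoline purchases respond negatively to automobile prices, then estimates the relationship by ordinary least squares; it shows the form of the verification step, not a real study:

```python
import numpy as np

# Hypothesis: higher automobile prices reduce gasoline purchases.
# All data here are synthetic, generated only to illustrate the test.
rng = np.random.default_rng(0)

auto_price = rng.uniform(20, 40, size=200)   # hypothetical price index
gasoline = 50.0 - 0.8 * auto_price + rng.normal(0, 2, size=200)

# Ordinary least squares: gasoline = b0 + b1 * auto_price + error
X = np.column_stack([np.ones_like(auto_price), auto_price])
b0, b1 = np.linalg.lstsq(X, gasoline, rcond=None)[0]

print(f"estimated slope: {b1:.2f}")  # close to the true value of -0.8
# A clearly negative slope supports the theoretical hypothesis; a slope
# near zero or positive would count against it.
```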

One of the fundamental problems of empirical research is how to make meaningful statements about the world when information is imperfect and is available from only part of the reality under investigation. Data may also be contaminated by observational error. Any phenomenon may have multiple causes that need to be disentangled. Yet the scholar is expected to extract from imperfect data and complex situations reasonable explanations that apply not only to the specific case, but also to the general class of phenomena under study. Advances in statistical methods enabled economists to attack these problems, and a new subfield of economics was born: econometrics, in which statistical methods are applied to the analysis of economic data.
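The problem of multiple causes, in particular, is what multiple regression is designed to handle. A hedged sketch, again with synthetic data: demand below depends on both its own price and on income, plus observational noise, and regressing on both causes at once recovers each partial effect:

```python
import numpy as np

# Disentangling multiple causes by multiple regression.
# Synthetic data only; the coefficients are invented for illustration.
rng = np.random.default_rng(1)
n = 500

price = rng.uniform(1, 5, n)
income = rng.uniform(10, 50, n)
demand = 100.0 - 6.0 * price + 0.5 * income + rng.normal(0, 3, n)

# Regress demand on both causes at once; a one-variable comparison
# would confound the price effect with the income effect.
X = np.column_stack([np.ones(n), price, income])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
print("intercept, price effect, income effect:", coef.round(2))
# Expected roughly [100, -6.0, 0.5], despite noise and a finite sample.
```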

The method was complete. Theoretical analysis, refined by mathematical logic, would provide testable propositions. Statistical studies would then verify or correct the hypotheses, leading to more highly refined propositions that were a closer approximation to reality. In this way economic science could move toward greater understanding of the world, just as the physical sciences do. This method continues today to dominate the economics profession.