A Comparative View of the Political Economy of the Great Depression and Great Recession
Open the newspaper today and it looks as if the entire nation has turned Austrian. No, we are not witnessing mass migration towards the small nation in the European Alps, but a powerful drift towards a body of economic thought widely known as the Austrian school. In a spirit similar to the classical economic liberalism of the nineteenth century, the contemporary followers of Ludwig von Mises and Friedrich Hayek hold steadfastly to the notion that economies grow and recover their natural balance of supply and demand on their own, if only prices are allowed to fall freely—especially the price of labor. Any government effort that impedes the balancing mechanism of price fluctuations—such as the business community’s pledge to Herbert Hoover in 1930 not to lower wages, the Federal Reserve’s purchase of mortgage-backed securities since 2008, or the Obama administration’s financial support for the struggling American automobile industry in 2009—is nothing less than an interruption of the “natural” restoration of a market equilibrium.1 Republican presidential candidate and multimillionaire Mitt Romney took this stand when he recommended in a 2011 interview in Las Vegas, Nevada—ground zero of the imploding housing market—not to “try to stop the foreclosure process” but to “Let it run its course and hit the bottom.”2 Herbert Hoover’s Secretary of the Treasury Andrew Mellon summarized this economic view deftly over eighty years ago when he urged his president to “liquidate labor, liquidate stocks, liquidate the farmers, and liquidate real estate.” Never too concerned about the fate of ordinary Americans, the multimillionaire Treasury secretary was convinced that this would “purge the rottenness out of the system,” by which he meant weak banks and businesses. Such bloodletting, which would come at a high price for most Americans, was part of Mellon’s cure. “High costs of living and high living will come down. People will live a more moral life.
Values will be re-adjusted, and enterprising people will pick up the wrecks from less competent people.”3 This is economic theory as a morality play—or an exercise in social Darwinism.
In the Austrian school’s view, economic recession is a cure rather than an ill. This “wreckage of false expectations” has recently gained a large following among libertarians, Tea Party enthusiasts, and many whose wealth is secure enough to ride through any slump.4 The historian Amity Shlaes has given new credence to this return to nineteenth-century economic thinking with her history of the New Deal, The Forgotten Man. Shlaes’s account blames government intervention for the scope of the Depression and claims that the New Deal did not create new employment, a charge that ignores both the employment numbers for the New Deal years and gross domestic product (GDP) growth rates that averaged 7.7 percent annually between 1933 and 1941.5 As the nation remains deeply divided over its response to the lingering effects of the Great Recession, and as policymakers determine what lessons are to be learned from the Great Depression, it is hardly surprising that such historical revisionism gains the attention of some pundits and politicians. Shlaes’s book is a favorite among Republicans, and Newt Gingrich and the author have a strong mutual affection.6
Even Americans who have never heard of Friedrich Hayek, the Austrian school, or Amity Shlaes frequently remember the Great Depression as a “readjustment of values,” to paraphrase Andrew Mellon. Local journalists who have turned to survivors for an authentic comparison of the 1930s with the Great Recession have gathered countless anecdotes that cast past suffering as a lesson in virtue and the fallout of the recent Great Recession as the proper comeuppance for a spoiled generation.
Widespread are the stories of family solidarity and neighborly support, of the stoic endurance of deprivation with humble gratitude and ingenuity. In almost every story, survivors took away the lessons of hard work and frugality. In turn, welfare recipients and the poor tend to get low marks from Depression survivors who are confused by the different face of poverty today. As stories of perseverance and determination, these accounts of the Great Depression can certainly inspire a younger generation. But as much as Americans lived through the Great Depression as individuals, as families, and as neighbors, they also existed in an economy much larger than their own worlds. It would be too much to ask of Americans to recall, say, the impact of the Federal Deposit Insurance Corporation, an institution that goes unnoticed precisely because it did away with the most dramatic spectacle of an economic downturn, the bank panic of 1933. Nor should we expect the present generation to attribute the standardized thirty-year, self-amortizing home mortgage, and thus the expansion of American homeownership after World War II, to the New Deal’s Federal Housing Administration. We should not even begrudge the fact that these memories understate federal relief programs, the lifeline many received through the Civilian Conservation Corps or the Works Progress Administration. And we should not be surprised that they give little mention to lasting institutions such as Social Security or the postwar prosperity that was built on New Deal legacies and was marked less by hard work and frugality than by a new age of consumption. Personal memories are exactly that—personal. They owe the historical context nothing. It is for that reason that any comparison of the Depression era with the early twenty-first century needs to stress the features of the larger political economies.
As seductive as the idea of the nation’s economy as a morality tale may be, the story is much bigger than that of our grandparents or great-grandparents and different from what Austrian school economists would have us believe.
This chapter compares the political economy of the periods of the Great Depression and Great Recession, highlighting telling similarities and crucial differences in the causes of the two economic downturns. What do we mean when we characterize the economy of the interwar years as a “national industrialized consumer economy,” and how does this compare to our globalized service and information economy of today? What infrastructures, technological systems, and scientific standards drove productivity and growth prior to the Great Depression, and what has been behind economic expansion in the last twelve years? What place did the United States assume in the global flow of labor, capital, and goods in the interwar years, and how does this compare to the United States in the world economy today? Finally, how did social, monetary, and fiscal policies shape growth and the distribution of income in the 1920s and today? I begin with a brief summary of the key similarities and differences between the economic collapse of 1929 to 1933 and that of 2007 to 2009, and conclude with some observations about the different responses of government in the two eras.
For young urban Americans in the 1920s, the world looked brand new. There was the jazz that gave the age its name, marathon dancing, body building, and crossword puzzles. They visited speakeasies that illegally sold alcohol in defiance of Prohibition, ate bagels, attended “petting parties,” and flirted on “lovers’ lanes” in the privacy of their cars. Young women wore low-cut gowns and lipstick, bobbed their hair, and spent their own money. African-Americans discovered a new sense of community and culture in Northern urban neighborhoods such as Harlem, Chicago’s South Side, or Los Angeles’s Central Avenue. To many rural folks, small town residents, and new urban dwellers, on the other hand, the “Roaring ’20s” suggested a corruption of family values, prostitution, racial mixing, bootlegging, and crime. For them, modern life seemed to push traditions and standards off their foundations, and they turned bitterly against what they considered the causes of these undesirable changes—immigrant cultures, labor unions, women’s autonomy, science, and the teaching of evolution.7
The unbridled enthusiasm for a new age and economy in the 1920s reminds us of the more recent hype about a new online marketplace and community, one in which old traditions and customs no longer count and in which new patterns of consumption and leisure shape a new generation. The conservative reaction—from the surge of religious fundamentalism and anti-immigrant sentiment to the rejection of science—is equally familiar. But to be sure, the changes of the interwar years had been in the making for a generation, just as those of the post-9/11 era did not arise suddenly. The transformation of American life into that of the 1920s consumer society had begun in the late nineteenth century and accelerated remarkably in the 1920s. This transformation extended deep into Americans’ culture of work, family life, international relations, and the nation’s political culture. At its center, however, stood a surge in mass production, mass consumption, and a new infrastructure geared towards middle class consumerism. Consider the fact that housing began to sprawl into suburbs outside metropolitan centers during that time and that the increasing availability of electricity there as well as in city apartments made possible the use of consumer durables like vacuum cleaners, washing machines, and other household appliances. Only 20 percent of Americans had indoor flush toilets in 1920, but 51 percent enjoyed this amenity ten years later. Central heating was a rarity at the beginning of the decade (one percent of households), but existed in 42 percent in 1930. Radios did not exist in 1920, but four out of ten families owned one by the end of the decade. Most importantly, Americans in the 1920s bought automobiles whenever they could. By 1929, one in every five Americans owned a car, compared to only one in 135 Germans. Large cities like New York, Chicago, and Los Angeles were already familiar with big traffic jams. 
For the first time in history it seemed that a standard of living once available only to a small elite, if that, was within reach for a large share of the middle class.8 A growing number of Americans hoped to ascend to a middle-class lifestyle through education that would qualify them for skilled white-collar work. High school became an American institution, and high school graduation rates rose steadily over the decade.9
Part of the change in the national culture of the 1920s was a new embrace of consumer credit, promoted by carmakers and other durable goods producers. A small down payment allowed consumers to “buy now, pay later.”10 The motivation behind this financial innovation was simple: mass consumption did not keep pace with mass production, and only by lowering the thresholds for large household purchases and enticing customers to spend ahead of their earnings could durable goods producers sustain their growth rates. Magazines and the radio broadened Americans’ access to information, and advertising campaigns for make-up, ready-made food, and gadgets like personal cameras offered consumption as a lifestyle choice.
Move forward to the early 2000s, and we can observe a very different transformation of consumption and popular culture driving a very similar increase in productivity and output. And both were shaped significantly by new credit-financed consumer experiences. Not cars, radios, telephones, and toasters, but wireless computing and communications, social media, and new online multimedia formats spread rapidly through American households. General Motors and General Electric shaped consumers’ lives and tastes in the 1920s; in the early 2000s, Apple and Google did. The growing demand for consumer durables of the 1920s was part of the political economy of an industrial consumer society in which cars and home appliances furnished the American dream of middle-class family life—much of it dependent on a public infrastructure of transportation, energy, and education. Apple products, Google services, and other agents of the Web revolution have improved the digital access to information, increased worker productivity, and fostered online communities and e-commerce at the expense of traditional communal ties and brick and mortar retail, transcending or bypassing existing public infrastructures rather than reinforcing them.
In the 2000s as in the 1920s, Americans and their economists believed that economic growth derived from increases in worker productivity. The expansion of the Fordist regime of mass production in electrically powered factories certainly increased the output per worker. And there is a good case to be made that the technological change in information and communications in the 1990s reaped significant rewards in the 2000s. Add to that the incorporation of China, India, and the former Soviet bloc into the global post–Cold War economy, and it seems only sensible that economic growth was accelerating. In both cases, however, the evolution of the financial sector altered the scenario considerably, and in both cases consumers relied on credit at an accelerating rate to partake of this growing economy, giving producers and sellers the impression that all was well indeed.11
Striking also is the contrast in the way the flagship industries fit into national and global economies. Electricity mobilized industrial power sources in the 1920s and allowed for an increase in continuous flow process methods and the assembly line. The result of such increased industrial productivity was a place like Detroit—the quintessential American city of industry. Since the 1970s, however, revolutions in communications, air travel, and cheap cargo shipment via container vessels have made urban-industrial concentrations like the “motor city” a relic of the past. Global flows of capital, goods, finance, and to some extent even labor have blown a hole in the economic clusters of mid–twentieth century industrial cities. Employment, as a result, takes place everywhere. Apple, the largest U.S. company ever, as measured by its stock market valuation, has 47,000 employees in the United States, but probably employs up to 700,000 through a network of suppliers that make iPhones, iPads, and other products overseas. By comparison, General Motors employed 77,000 people in the United States in 2011 with a market capitalization less than 10 percent of that of Apple.12
Between 1921 and 1929, manufacturing industries accounted for a large share of the 9 million new jobs created. Economic historians have estimated that the average unemployment rate was just 3.3 percent between 1923 and 1929, an average that admittedly obscures the high degree of employment uncertainty and frequent short-term periods of unemployment among factory workers. Workers’ productivity grew significantly as a result of technological innovation during this period.13 Back then, wages and working hours also improved, although the average increase here, too, obscures unequal progress for skilled and unskilled workers. Real earnings had increased 20 percent between 1900 and 1910, about 12 percent in the following decade, but a full 23 percent between 1920 and 1930.14 Not all sectors benefited equally, however. Urban industrial workers tended to fare better than their counterparts in rural areas. Women had been part of the American industrial labor force since the early 1800s, but in the 1920s the majority still worked in domestic service or in “pink collar” jobs—the gender-segregated bottom rung of white collar work that comprised secretaries, switchboard operators, and the like.15 Membership in labor unions declined significantly in the 1920s, from more than 12 percent of the civilian labor force to less than 8 percent on the eve of the Great Depression.16 Courts were commonly on the side of employers and granted frequent injunctions that temporarily forbade boycotts or picket lines. As a rule, the government did not interfere in these uneven labor relations.
Kindled by the Russian revolution, widespread fear of Communism and labor radicalism after World War I undermined public support of unions, while welfare programs at new companies such as Eastman Kodak in Rochester, New York, reduced workers’ incentives for organizing their own unions.17 Finally, the most powerful union, the American Federation of Labor, showed little interest in organizing the unskilled workers of the growing mass-production industries. Employers also exploited religious, ethnic, and racial divisions within the working classes to prevent large-scale unionization.18
Similarly, in the first decade of the twenty-first century, the environment for unions was not a friendly one. Large employers like WalMart have worked aggressively to prevent unionization in their stores.19 A heavy reliance on undocumented immigrant labor in agricultural and some food-processing industries has weakened the ability of unions to fight for workers’ rights. At 37.5 percent in 2000, government workers had the highest rate of union membership in the American labor force. In contrast, only 9 percent of workers in the private sector were unionized. Whereas 24 percent of transportation and public utility workers were represented by unions, 18.3 percent of construction workers and 14.8 percent of manufacturing workers were organized. A mere 1.6 percent of employees in finance, insurance, and real estate were union members.20 That said, basic New Deal labor protections persisted, preventing employers from engaging in the openly brutal suppression of labor activism Americans had witnessed in the 1920s and the first half of the 1930s.
These differences in labor protections and unionization levels are significant, but there is a more profound difference between the labor force of the Depression era and today: the type of goods and services produced. In today’s globalized economy, manufacturing increasingly takes place abroad, most famously—or notoriously—in China. Cheap-labor competition from developing nations has confined economic growth in the United States to what Vanek has called “the non-transported goods industries,” such as construction, the restaurant and hospitality industries, or government goods and services, including the military. These were precisely the sectors that “‘flourished’ in recent years or decades.”21
Not that the economy of the 1920s was without its weak spots. New automotive and electrical industries grew profitably, but other sectors stagnated. The “golden age of agriculture” had passed with the recovery of international commodity markets after World War I and a related drop in crop prices. Stranded with heavy debt and low rates of return, farms were foreclosed in 1929 at five times the rate of 1923. While the average earnings for all employees in the United States rose, farm income fell from an average of $1,196 to $945 by the end of the decade (comparable to $12,726 in 2013).
Farmers were not the only ones left behind by the new era. In fact, the growth in consumer durables went hand in hand with the stagnation or shrinkage of industries in what Joseph Schumpeter has described as a process of “creative destruction.” The telephone replaced the telegraph; the internal combustion engine changed transportation patterns and spelled the ruin of many urban trolley lines.22 Passenger miles on railroads—the nation’s economic engine during industrial development—declined from 47 million in 1922 to 34 million in 1927, and profits remained small. With the exception of oil tankers and some special-purpose vessels transporting fruit from Central America, most ocean shipping depended on government subsidies to remain viable. The expansion of the oil and chemical industries reduced reliance on coal, the fuel on which previous economic fortunes had been built. The coal industry’s share of national income shrank from 1.7 percent in 1922 to 0.7 percent in 1929.23
The nation’s economy of the 1920s did not exist in isolation, of course—although many Americans wished that it did. In the wake of World War I, Americans grew tired of Progressive idealism. Doubtful that foreign diplomacy could “make the world safe for democracy,” many subscribed to the notion of isolationism—minimal political involvement with foreign powers. At the same time, America’s role in the world and global markets had changed dramatically as a result of the war. The United States had always been a debtor nation, owing some $3.7 billion to foreign investors in 1914. By 1920, the United States had become a creditor nation with $12.6 billion in investments abroad on its balance sheet. Much of this was financial aid the U.S. had provided to its European allies (particularly England and France) during their fight against Germany and the Central Powers. The Allies decided to recover their debt from defeated Germany through reparations, which ultimately led to that nation’s monetary collapse in 1923. Under the Dawes Plan, the United States negotiated an international payment system whereby Wall Street and the Federal Reserve provided Germany with loans to be used to pay reparation demands to Allies. This allowed the Allies in turn to meet their obligations to the United States. At the same time, Americans sold more goods abroad than they bought. Congressional tariffs in 1921 and 1922 made it more difficult for Europeans to sell goods to Americans and earn dollars. Without that currency, Europeans had no choice but to pay with the international means of exchange—gold.
The war-related debt and credit triangle between the United States, the Allied Powers, and Germany was one challenge in international finance during that time. Closely tied to this was the burden the gold standard imposed on national economies. Wartime inflation had strengthened a broad desire in much of the world’s economies to restore international economic and financial stability through resumption of the gold standard. The volume of a gold standard currency was fixed to its gold reserves and shrank or grew with the amount of gold in the nation’s coffers. Some European countries chose exchange rates well below those of the prewar days to give themselves an advantage in trade, while the United Kingdom restored 1913 exchange rates in order to maintain London’s position as the center of global finance. This was only accomplished through a major deflationary squeeze, and it pushed Britain’s already struggling export industries further into contraction. To add to the struggling British Empire’s troubles, international trade was favoring the United States after World War I, increasing the flow of bullion across the Atlantic. In an act of economic nationalism and petty rivalry with the neighbor across the Channel, France began to actively buy gold to amass reserves. By 1929, France and the United States had amassed 60 percent of the world’s gold reserves. Britain imposed high interest rates to attract foreign investors at the expense of domestic investors in search of credit. And on top of it all, the Bank of England relied on a $500 million commitment by the New York Federal Reserve, whose head, Benjamin Strong, prophetically warned his English counterpart, Montagu Norman, that “domestic considerations would likely outweigh foreign sympathies” in times of “speculative tendencies in the economy.”24
Under normal conditions, an inflow of payments in gold to the United States would have raised prices, making American goods less competitive, European rivals more successful, and restoring the trade balance. Economists know this as the price specie-flow mechanism. It works at the expense of price and market instability in domestic economies—if it is allowed to work. The Federal Reserve wanted gold as well as price stability and stable markets at home and chose instead to manipulate the gold–currency relationship. This prevented inflation in the United States, but it also made banking systems in gold-starved currencies vulnerable.25
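The adjustment described above, and the effect of blocking it, can be sketched in a deliberately stylized toy model. All parameter values below (starting reserves of 40 and 60 units, a baseline stock of 50, the flow coefficient of 2.0) are illustrative assumptions for exposition, not historical estimates: prices track gold reserves, gold flows toward the country whose goods are cheaper, and "sterilizing" the inflow means holding the home price level fixed despite the gold coming in.

```python
def specie_flow(steps=50, sterilize=False):
    """Stylized two-country sketch of the price specie-flow mechanism.

    Gold backs the money supply, prices are proportional to the gold
    stock, and gold flows toward the country whose goods are cheaper.
    With sterilize=True, country A holds its price level fixed despite
    gold inflows, blocking the adjustment. All numbers are illustrative
    assumptions, not historical data.
    """
    gold_a, gold_b = 40.0, 60.0        # A starts cheap and runs a trade surplus
    price_a = gold_a / 50.0            # price level proportional to gold stock
    for _ in range(steps):
        if not sterilize:
            price_a = gold_a / 50.0    # let inflows raise A's prices
        price_b = gold_b / 50.0
        flow = 2.0 * (price_b - price_a)   # gold moves toward the cheaper country
        gold_a += flow
        gold_b -= flow
    return round(gold_a, 1), round(gold_b, 1)

print(specie_flow())                 # prices adjust: reserves converge near (50, 50)
print(specie_flow(sterilize=True))   # sterilized: A keeps accumulating, B is drained
```

When prices are free to move, the model's reserves converge; when country A sterilizes, it ends the run holding well over half the gold while its partner's reserves shrink, which is the vulnerability of "gold-starved currencies" the paragraph above describes.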
Isolationism as an ideological position has experienced something of a revival recently, although the resentment against foreign involvement has changed significantly in the late twentieth century and the early twenty-first century. Americans started to home in on the dangers of exporting manufacturing jobs to Mexico and then China during the post–Cold War recession of 1990 to 1992, simultaneously blaming China and American venture capitalists for shipping jobs overseas and depending increasingly on cheaper consumer goods from the growing Chinese manufacturing sector. Progressive critics of globalization were less interested in vilifying Chinese workers or their autocratic government than in speaking out fiercely against the new regime of free trade—represented by such international institutions as the International Monetary Fund (IMF) and the World Trade Organization (WTO). The mission of today’s critics of globalization has not been one of economic nationalism, but of restraints on global capital in favor of international collaboration on pressing social and environmental issues. In other words, the critics of globalization wanted to reduce the power of the IMF, to make the WTO less a tool of international capital, and to achieve binding international resolutions to curb greenhouse gas emissions. Beginning with the war on terror under George W. Bush, a new critique of American foreign entanglement focused on the overly ambitious and self-serving efforts of “nation-building” in Iraq and Afghanistan. Since the onset of the Great Recession, however, the inability of European nations to resolve their currency and debt crises and the increasing significance of China, not only as the world’s workshop but as a geopolitical force, international creditor to the U.S. bond market, and emerging consumer society, have fostered a popular economic nationalism in fierce denial of U.S. economic interdependence.
Fair-minded economists have also warned about the dependence of the U.S. consumer economy on Chinese capital: the United States has become the world’s biggest borrower while China has risen from loser of the Cold War to the world’s biggest lender. Even so, a fierce neo-isolationism is most pronounced among followers of the Tea Party movement and libertarians. It has even included calls for an end to the Federal Reserve System and a return to the gold standard—something Nobel laureate economist Paul Krugman confessed, in an interview with National Public Radio’s Terry Gross, that he would never have thought possible in his wildest dreams.26
The gold standard was the monetary regime that governed the U.S. and the international economy of the 1920s. The fiscal regime designed by the elite of the Republican Party was the other. Presidents Warren G. Harding and Calvin Coolidge most often led through inaction, but their Secretary of Commerce, Herbert Hoover, put his stamp on Republican economic governance in the 1920s. Hoover effectively invented the role of the modern Secretary of Commerce during his time in the Harding and Coolidge administrations. He revolutionized relations between business and government, playing a central role in the effective regulation of radio broadcasting, aviation, and street traffic. Few would have doubted his capacity to master the nation’s most difficult economic crisis, and his solid record as a humanitarian might have led many to expect that Hoover would be the first president to put relief of poverty over the principle that relief was not the province of the federal government.
Tightly connected to their commitment to make the federal government aid the development of American business was the Republican belief that tax cuts at the top could increase federal revenues. While Secretary of the Treasury Andrew Mellon cut taxes for Americans of all income groups in this first installment of trickle-down fiscal policy, his cuts had the biggest impact on those earning $1 million and more, and those who inherited wealth. Tax revenues rose proportionately with GNP, but not more. The share of disposable income for the top 1 percent increased from 14.2 percent to 19.1 percent, which is comparable to the share earners received in 1990 after a decade of President Reagan’s policies.27
Both the Republican policy makers of the 1920s and advocates of trickle-down Reaganomics since the beginning of Reagan’s presidency in 1981 have insisted that less progressive tax rates increased rewards for the owners of capital and thereby stimulated investment and industrial development. They have also claimed that the economic growth that would result from such tax cuts would increase fiscal revenue overall and reduce tax fraud. After World War I, when the United States was the largest manufacturing nation in the world, the latter claim was not unreasonable. Higher earnings at the top could very well have been spent on new manufacturing establishments in the United States—its fastest growing economic sector.28 How much eventually trickled down to unskilled workers in those new industries is another matter—wage gains for the least skilled workers in industry were minimal during that time. And while the expected revenue increases did not materialize in the 1920s, the spending restraints under Calvin Coolidge meant that the federal government was able to retire some of its debt. Overall, the nation’s debt shrank in the 1920s, from $24 billion to $16 billion, or by one-third.
Trickle-down since the Reagan administration has worked in a very different context. At the time, Ronald Reagan’s economic advisor, Arthur Laffer, hypothesized that job gains would result from invested income at the top of the economic ladder. But American businesses that were already investing in manufacturing capabilities overseas were simply parking their money offshore. Top earners would have had no reason to let notions of economic nationalism trump their motivation for better gains and business ventures overseas, so whatever “trickled” came “down” in many places, and not necessarily in the U.S. labor market. Equally problematic was the claim that tax cuts at the top would stimulate economic recovery and bring in higher levels of fiscal revenue. Even if that had been the case, soaring deficits resulting from increased military spending and war during the Reagan and George W. Bush years created fiscal crises for succeeding administrations.
As was the case with Mellon’s original tax cuts, those of the Reagan years were not simply one fiscal policy, but the expression of a set of economic beliefs Peter Temin has termed the “Washington Consensus,” a bipartisan economic policy of the post–Cold War years that embraced privatization and deregulation, stable exchange rates, and moderate fiscal policies.29 This Washington Consensus included the belief that the era of big government was over, that the global economy of the late twentieth century required free-market solutions, and that the firewall between commercial and investment banks in the form of the 1933 Glass-Steagall Act was an obstacle to modern financial markets. Its repeal during the Clinton administration in 1999 marked a turning point in U.S. financial history, the end of an era, and the beginning of new experiments with structured finance and collateralized debt obligations.
Stock prices began to move up in 1926 and 1927, and shot upward with increasing speed in 1928 and 1929. A decade earlier, few ordinary Americans would have chosen the stock market over conventional savings accounts, but the marketing of Liberty and Victory bonds during World War I introduced some 22 million Americans to the securities market. The successful bond drives encouraged more corporations to “go public” by offering their shares on Wall Street. A growing number of brokers and investment firms like Goldman Sachs Trading Corporation offered buyers professionally managed investment “portfolios” that contained a diverse range of company shares. Harper’s Magazine concluded the stock market was no longer an exclusive marketplace for “hard-boiled knights” but a place “for the butcher and the barber and the candlestick maker.” The number of shares traded provides a good insight into the increasing activities in the stock market: in 1919, which had been the biggest boom year of the century, a total of 317 million shares were traded. In 1927, the New York Stock Exchange traded 577 million shares; in 1928, a full 920 million. In 1929, Wall Street traders made 1.1 billion share transactions. By the beginning of 1929, new investment trusts emerged at a rate of one a day, doing nothing but selling paper shares in paper portfolios.30
American consumers had learned from car dealers and department stores how easy it was to “buy today, and pay later.” So, when stockbrokers offered similar deals on their products—paper shares in investment portfolios—it required no giant leap to understand the appeal of buying on margin.
With $100 down and a $900 loan from one’s broker, a buyer could purchase 100 shares of a company such as Commercial Solvents at $10 apiece. Assuming that the company’s share price rose to $20 in half a year—something that happened frequently in the booming market of 1928–1929—the investor could reap a profit of $1000 on his $100 investment, minus interest payments on the loan and commission fees. Spectacular gains in stock prices made it increasingly difficult for investors to resist margin buying. Buying on margin became so popular that commercial banks began to loan money to brokers, and corporations, too, pumped their own money into brokers’ loans. By October 1929, brokers owed $6.6 billion to lenders such as Bethlehem Steel, Standard Oil, and the Chrysler Corporation, as well as $1.8 billion to regular banks.31
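The arithmetic of margin buying can be made explicit with a short sketch. The figures are those from the example above; interest payments and commission fees, which the text notes would reduce the profit, are left out as a simplification:

```python
# A back-of-the-envelope sketch of the margin purchase described above.
# Interest payments and commission fees are ignored, as a simplification.
def margin_trade(shares, buy_price, sell_price, down_payment):
    """Return (gross profit, return on the buyer's own stake)."""
    cost = shares * buy_price        # total purchase cost
    loan = cost - down_payment       # the broker's loan
    proceeds = shares * sell_price   # value of the shares at sale
    profit = proceeds - loan - down_payment
    return profit, profit / down_payment

# The text's example: 100 shares of Commercial Solvents at $10, bought
# with $100 down and a $900 broker loan, sold after the price doubles.
print(margin_trade(100, 10, 20, 100))   # (1000, 10.0): $1,000 profit on $100

# The flip side of the leverage: a 10 percent price drop wipes out
# the buyer's entire stake.
print(margin_trade(100, 10, 9, 100))    # (-100, -1.0)
```

The same tenfold leverage that produced spectacular gains on the way up is what made the forced selling of margined stock so violent on the way down.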
Many economic indicators at this time suggested that stock prices were increasing for good reasons. Gross national product and per capita income were growing steadily, productivity was increasing, and corporations were reporting profits. But after 1927, the stock market surge was driven by fantasy and speculation rather than by economic facts. Floor traders on Wall Street designed pools—schemes to artificially inflate prices by selling shares back and forth amongst each other, thus creating the illusion of intense market activity around an attractive stock. When buyers outside the pool bought the stock and their demand drove the price higher still, members of the scheme sold quickly and made handsome profits while the manipulated stock fell into a slump. Investors who wanted to make informed decisions had only limited access to information, since Wall Street required very little disclosure from listed companies, and investment bankers produced brochures good for advertising rather than careful assessments of the value of securities. Amidst the general exuberance, voices of caution dissipated. American business, with free rein from the federal government, seemed to have provided the solution to the economic and social problems reformers and unions had struggled with for decades. Typical of the confidence of the time was Democrat John Jacob Raskob, who titled his article in the Ladies’ Home Journal “Everybody Ought to Be Rich.”32
Partly because the 1920s witnessed many true stories of economic success, partly because people believed that technological innovations were truly inaugurating a new era, and partly because advertisers sold the illusion of an overall growing prosperity, more Americans were willing to invest with higher levels of risk. During the 1920s, speculation emerged as a major preoccupation of Americans. In 1920, Charles Ponzi of Boston—a former vegetable peddler, forger, and smuggler—convinced thousands of credulous investors that he could deliver a 50 percent return on their investment in his Old Colony Foreign Exchange Company, paying out just enough dividends to allay suspicions of fraud. This “Ponzi scheme” cost its victims everything, and earned its inventor millions—as well as a long prison sentence when the fraud came to light a few months later.33 This was hardly the last trap for speculators eager for quick riches. The construction boom of the early 1920s produced not only urban sprawl, but also real estate booms in California and in Florida. At the height of Florida’s land speculation mania in the summer of 1925, the Miami Daily News printed a 504-page issue crammed with real estate advertisements—the largest newspaper issue in history. A hurricane in 1926 brought an end to this euphoria and left many investors stuck with acres of swamp.34 Speculators then began to look for new opportunities to turn quick profits and moved away from real estate and into the stock market.
The Federal Reserve had been concerned with the irrational exuberance on Wall Street for some time before 1929, believing that speculation drained capital from more productive investments. Fearful of taking more drastic measures such as limiting banks’ access to credit, which would have curbed both broker loans and legitimate business loans, “the Fed” tried instead the strategy of “moral suasion”—with little effect. In December 1928, it increased the “discount rate,” the interest rate at which banks could borrow from the Federal Reserve, from 4.5 to 5.5 percent. This increase did not make the broker loans unprofitable, but it signaled future restrictions in credit. Other central banks in Europe followed this example. But the stock market’s following remained loyal to the bubble. Powerful bankers like Charles E. Mitchell of National City Bank balked at the Federal Reserve policy and promised to pump additional money into the broker’s loan market. In the summer of 1929, the Index of Industrial Production headed downward, largely because homebuilding slumped further for the third year in a row. The Federal Reserve decided to cool the heated stock market by increasing its bond sales in the open market. This meant that money in circulation increasingly went into government securities rather than into stocks.35
The Standard and Poor’s Composite Stock Index peaked on September 7, 1929, and the first break in the stock price rally followed soon after. From September to October, trading volumes increased dramatically, and overall prices declined slowly. On October 24, 1929, panic selling hit the market: thirteen million shares changed hands that day, and the ticker technology was so overwhelmed that buyers and sellers did not know the prices of their afternoon trades until 7:00 PM that night. A group of bankers, including Charles E. Mitchell and J. P. Morgan & Co., tried to stem the tide of selloffs with a $20 million buying pool, and the Rockefellers similarly tried to keep up the price of their Standard Oil stock with a $50 million purchase. It was to no avail. National City Bank’s and Standard Oil’s stocks dropped precipitously.36 Trading and the panic resumed on October 28th (Black Monday) and October 29th (Black Tuesday).
The market’s decline continued until mid-November, by which time stock prices had fallen to half of their August value. Much of this had to do with the panic selling of stocks that brokers had purchased on margin. News from Wall Street raced around the world and triggered crashes at the London exchange, then in Berlin, in Paris, and finally in Tokyo. President Hoover, economist Irving Fisher, and other market experts assured the public that the American economy stood on solid footing. Such frequent incantations tried to separate the stock market from the American economy like froth on a drink, but they could not prevent the decline. News that industrial production had declined in the third quarter in the United States and that foreign economies were collapsing pushed more investors to cut their losses and bail out of the stock market.
Well into 1930, most stock prices remained above the levels reached in 1926. In the past, observers would have described such market behavior as a “technical adjustment.” But in the 1920s, Americans had come to believe that they had entered a “New Era,” and the stock market was one of its most illustrious symbols. Thus, falling stock prices hurt the optimistic view of the future and the power of capitalist enterprise. Pessimism spread rapidly, crippling consumer confidence and spending; without confidence, few were willing to buy goods on credit on the installment plan, the new American custom.37 And experts at the Federal Reserve still believed that banks failed first and foremost because of poor management, and that the bankruptcies were part of the healthy process of competitive selection in the financial marketplace. Federal Reserve officials thus failed in their most important role, and the collapse of banks continued unabated. Previous market crashes had also produced bankruptcies and unemployment, but their effect had always been most pronounced on the fringes. In 1929 and the ensuing years, the economic plight unseated those who had thought themselves most firmly in the saddle.
In the 1920s, the risks of stock market speculation were significant. In the years after World War II, by contrast, securities fraud became more difficult, thanks largely to the creation of the Securities and Exchange Commission during the New Deal. And yet, it was a stock market crash that ended the twentieth century and inaugurated the new millennium. The burst of the “dot-com bubble” in 2000 shared some significant characteristics with 1929—it depended on an unbridled enthusiasm for new technologies that promised a “new era” seemingly unfettered by the dynamics that brought about economic failures in the past. And it was fed by millions of small middle-class investors rushing into the market hoping to cash in on a trend that seemed to churn out millionaires and hoping to make the financial gains that had eluded them through much of the hollow boom of the 1990s.38
Those who had not lost faith in the stock market in 1999 and 2000 might well have lost their nerve in the wake of the Enron, WorldCom, and other accounting scandals that followed soon after. Beyond the courtrooms and criminal prosecutions, little reckoning followed these corporate corruption cases, and Americans in search of a wiser and safer investment increasingly looked for tangible and seemingly safe assets in real estate. Early in October 2006, the conservative National Review celebrated the “Bush Boom” and compared it favorably to the hollow boom of the Clinton years. This boom “[was] different,” explained Jerry Bowyer, since it was “driven by something tangible—profits.” “Those who bet on the Bush boom have done well,” Bowyer concluded, and “Those who bet against it, lost out.”39
The occasion for Bowyer’s gross mischaracterization of the Bush economy was a record high Dow Jones—an indication that the dot-com bubble had not completely spoiled Americans’ appetite for private securities. But even though Bernard Madoff—the Charles Ponzi of the twenty-first century—drew Americans’ attention back to the risks of the stock exchange, the biggest boom and bust of the Bush era would happen in the bond market. And American homebuyers were at the heart of it, without their knowledge.
What had made American homebuyers both the agents and the victims of the Great Recession was the proliferation of the sub-prime mortgage industry. Mortgage debt among American consumers had risen since Congress had deregulated the financial industry in 1980, lifted a ban on adjustable mortgage rates in 1982, and made home mortgage interest tax deductible in the Tax Reform Act of 1986.40 This incentivized a home-loan business model that developed into a predatory lending practice, misleading borrowers about the real cost of their loans with manipulated and hypothetical “teaser rates.” Overall, a long-term decline in the regulation of the mortgage industry made possible the explosion of the sub-prime mortgage market. In the 1990s, the biggest year for this segment of the home-financing sector had produced $30 billion in loans. In 2000, the total had grown to $130 billion, and by 2005, Americans had borrowed $625 billion in sub-prime mortgage bonds. Seventy-five percent of this loan volume came with floating rates after the first two years. Worse, more than $500 billion of this loan volume had been repackaged and sold on the bond market. The securitization of mortgages had begun in the 1970s to give the government-sponsored enterprise the Federal Home Loan Mortgage Corporation (“Freddie Mac”) access to more capital to finance home mortgages. By 1996, almost two-thirds of new home mortgages were traded in the bond market. But the secret to the sub-prime industry’s success was an “originate and sell” model that allowed those who signed the loan to sell the debt and the associated risk as a repackaged mortgage bond. It was the packaging into collateralized debt obligations (CDOs) that obscured the true risk of the mortgages and made selling risky assets easy.41
Shielded from the risks of default through the securitization of home loans, the financial industry lured toward homeownership both prudent and unqualified buyers, who interpreted the rising housing prices as evidence that the American dream of owning their own home might soon be out of reach and that unconventional loans were both signs of a new age and their lucky break. Those Americans who signed up for “no doc” or “low doc” loans that required no income verification have often been maligned as calculating con artists, but most of them were immigrants and people of color who saw a chance at overcoming their biggest hurdle to ownership—a down payment—and who could not imagine why anyone would loan them money if they were almost certain to default a few years down the line.42 As was the case in the 1920s, leading voices in popular culture encouraged Americans to accept a new type of financial risk as the trend of the times. A surge of get-rich-quick literature did the job in the 1920s; in the 2000s, reality TV and a rapid cable news cycle multiplied narratives and anecdotes of real estate wealth that were difficult to resist.
Of course, get-rich-quick financial schemes are nothing new and should be expected in a capitalist economy. But what allowed this industry to proliferate was not only the deregulation of the mortgage industry, but of the financial market as well. The Securities and Exchange Commission in particular had loosened the existing regulations for asset-backed securities (ABS) in 1992. In 2003, ABS became exempt from the fraud protections included in the Sarbanes-Oxley Act of 2002, and shortly thereafter ABS were relieved of registration requirements.43 The purpose was to stimulate the bond market, and that was exactly what happened. Investment banks expanded into asset-backed securities hoping to rebuild their profitability after they had lost significant business in stockbroking to online trading services. Structuring finance meant that asset-backed securities were packed into different tranches of “risk” to be traded as collateralized debt obligations—the now-notorious CDOs. The repeal of the Glass-Steagall Act under the Clinton Administration in 1999 also allowed commercial banks to buy these new papers in large quantities. This meant that loan originators could package their home loans in asset-backed securities and sell them in highly processed form as CDOs to investors at, say, the Bank of America, where there was little understanding of the actual default risks hidden in these assets. Many investors relied heavily on the recommendations of publicly traded ratings agencies whose measure of success was the number of deals they rated for investment banks and the fees associated with them. In order to keep the business of the mortgage industry, the ratings agencies had to accept the mortgage industry’s projections of risk.44
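The tranching described here can be sketched in a few lines. The two-tranche structure and the dollar figures below are hypothetical simplifications (real CDO waterfalls involved many tranches and elaborate rules), but they show why senior tranches looked safe: losses on the underlying mortgage pool hit the junior tranche first.

```python
# Hypothetical two-tranche loss waterfall: the junior tranche absorbs
# losses on the mortgage pool before the senior tranche takes any hit.
def tranche_losses(pool_loss, junior_size, senior_size):
    """Allocate a dollar loss on the pool to the junior, then the senior tranche."""
    junior_loss = min(pool_loss, junior_size)
    senior_loss = min(pool_loss - junior_loss, senior_size)
    return junior_loss, senior_loss

# A $100M pool split into a $20M junior and an $80M senior tranche
# (all figures in millions):
print(tranche_losses(10, 20, 80))   # (10, 0): senior investors lose nothing
print(tranche_losses(35, 20, 80))   # (20, 15): heavy defaults breach the senior tranche
```

The catch, as the paragraph notes, is that the apparent safety of the senior tranche depends entirely on how large pool losses can get, and that projection came from the mortgage industry itself.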
In 2005, Federal Reserve Chairman Alan Greenspan confidently concluded that the sound economic growth was “not altogether unexpected or irrational.” To the public, it seemed as if central banks had indeed mastered the art of harnessing the business cycle.45 But their conviction that the financial system was just a “transmission mechanism” for their monetary policy ignored the ways in which this mechanism had developed a life of its own—one that deregulators had had in mind all along.46 Consider the fact that between 2000 and 2006, median wages grew by just about 1.7 percent, whereas the sub-prime–driven demand for housing had raised real housing prices by 22 percent.47 It is difficult to escape the conclusion of Damon Silvers and Heather Slavkin that the deregulation of the mortgage and financial sectors was meant to bolster consumer spending that had stagnated because real income remained flat or was actually falling. With the expansion of credit card debt, mortgages, and home refinancing, American households had leveraged themselves heavily on the bet that growth was now permanent.48
Yet the discrepancy between median wages and median housing prices also meant that the bubble of this particular asset was unsustainable. The sub-prime mortgage industry could conceal the poor credit risk of its mortgage holders as long as housing prices maintained a steady growth rate and inventory sold quickly, since that allowed mortgage holders to move on to new property and new adjustable rate mortgages (ARMs). But in 2006, home prices ceased to increase, and in 2007, one of the nation’s largest sub-prime mortgage lenders, New Century Financial, had to file for Chapter 11 bankruptcy after the investment banks that had bought its securitized mortgages exercised their right to turn these loans back to the broker firm because borrowers had ceased to make their payments within twelve to eighteen months. New Century’s inability to buy back the mortgages ushered it into bankruptcy, but it also left its investors stranded. Bear Stearns was such an investor, and by August 2007, it teetered on the verge of bankruptcy. Around the same time, Countrywide Financial collapsed, and so did a similar outfit in Great Britain: Northern Rock. They were acquired by the Bank of America and the Bank of England, respectively. The downturn accelerated in September 2008, when the government-sponsored mortgage enterprises the Federal National Mortgage Association (“Fannie Mae”) and Freddie Mac were pulled under by the failure of the secondary mortgage market and became subjects of a federal takeover. The two had held the credit risk of more than 50 percent of U.S. home mortgages, and shareholders lost all their money. One week later, Lehman Brothers went bankrupt, triggering the largest credit crisis in a century, since a large number of firms drew short-term funding for long-term securities from this investment bank.
One day later, the Federal Reserve decided not to let the same thing happen to American International Group (AIG) and took an 80 percent stake in the company in exchange for an $85 billion loan.49
Fed chairman Ben Bernanke later explained this decision and the commitment to bailouts and the “too big to fail” principle as a lesson learned from the Great Depression. Bernanke credited Milton Friedman with the insight that central bankers bore considerable blame for the Depression, although his own work in economic history had underscored the severe consequences the nation suffered in the 1930s as a result of the disintegration of its financial infrastructure.50 But this lesson had come to the Federal Reserve chairman only halfway into the crisis. Worse, the Fed’s own unwillingness to prevent the bubble in the first place by using the powers Congress had provided or by asking Congress for the necessary powers had allowed the economy to boom and bust. To use Joseph Stiglitz’s metaphor, the Federal Reserve under Greenspan had grown confident it could easily fix the wreck and never thought about preventing the accident in the first place.51
As banks shut their doors and left their clients out in the cold in the wake of the great crash of 1929, small and large businesses, too, lost their assets and had to declare bankruptcy. As an economic historian, Ben Bernanke researched these bank failures and showed how they cut off the credit available to small businesses, many of which saw long-term relationships with their lenders end for good. Other banks approached these small businesses far more cautiously and could not evaluate the creditworthiness of new clients easily.52 Even businesses with access to credit became cautious. Gross investment in the United States declined by over a third between 1929 and 1930, and did so again the following year. By 1932, depreciation of capital goods exceeded new investment.53
The Crash of 1929 turned the lives of many Americans upside down, but few Americans probably saw their worldview shaken as much as did the President himself. Hoover’s biography reveals that he was convinced that individual self-reliance and voluntarism were the only correct approaches to overcoming the crisis. After the crash in October 1929, Hoover urged his Cabinet members to act as if the panic had not occurred. Well into 1930, he insisted that the downturn was temporary, that the foundations of the economy were solid, and that the source of economic instability had everything to do with the European financial system of reparations payments and nothing to do with the American economy. He secured pledges from business leaders, governors, and mayors to keep up public spending and investment levels in return for the president’s pledge to lower corporate and income taxes to stimulate consumer demand. Fearing declining consumer confidence, businesses reneged on the pledge and began layoffs. Quickly, the remaining parties to the voluntary pact retrenched. The sanguine spending spree based on consumer credit was a thing of the past.
Only in 1931, when it had become clear that the economic crisis was not simply a matter of the financial imbalance between Europe and the United States, did Hoover react more forcefully. The Federal Farm Board, which Congress created under Hoover’s guidance, tried to stem falling prices in agriculture by buying up surplus crops, but it did not restrict production. The Board ended up owning several hundred million dollars’ worth of wheat, and prices continued to decline. In October 1931, Hoover created the National Credit Corporation (NCC), which recruited private bankers to use $500 million for buying up the questionable assets of troubled banks, maintaining their liquidity, and reining in the bank panic. But the bankers at the NCC simply could not bring themselves to buy dubious assets and never made use of the corporation’s capital; almost 2,300 banks failed right in front of them. The President’s Organization of Unemployment Relief tried to aid existing charities in their efforts, and Hoover tried to lead by example with generous donations. But he balked at direct federal aid to the unemployed, arguing that this would create a class of dependent citizens. He insisted that Americans were sufficiently protected from hunger and cold, but the rising hospital statistics of malnutrition-related deaths said otherwise. In 1932, Hoover’s Reconstruction Finance Corporation marked the first significant departure from his voluntary principles. Modeled after government agencies established during World War I, the RFC was authorized to use $2 billion in taxpayer money to loan to banks, the boldest federal anti-depression measure in U.S. history to that point. When most money went to big institutions, however, labor advocates complained that the very economic elite that decried unemployment relief as socialist corruption depended most heavily on government assistance.
Hoover still refused emergency funds for food, clothing, and shelter, but he eventually agreed to let the RFC loan money to states for profitable public works projects. Hoover also began to rethink the labor issue and signed into law the Norris-La Guardia Act, which severely restricted the use of injunctions against strikers, something Republican administrations had made ample use of throughout the 1920s.
The impact of federally funded public works on the national economy during the Hoover Administration was negligible. If these projects put additional money into workers’ pockets, the administration took it out again with the largest peacetime tax hike in American history. The 1932 Revenue Act illustrated the conventional political wisdom of fiscal responsibility that the government could not spend more than it received in revenue. Two weeks after passing the Revenue Act, Congress committed another act of fiscal responsibility. It refused to pay out “adjusted compensation certificates”—bonuses—for World War I veterans ahead of their due date in 1945. Congress had granted veterans this bonus in the form of 20-year savings bonds in 1925. Outraged over Washington’s thrift at the same time that the government spent public funds on farmers, banks, and railroads, approximately 20,000 veterans from across the country converged on Washington. President Hoover refused to meet with delegates from this “Bonus Army,” at the same time that he received courtesy visits from sports stars and student fraternities. In an effort to disband the mass protest in the heart of the nation’s capital, Hoover offered an advance payment of five to twenty dollars per veteran to support their travel home. At the same time, his Secretary of War, Patrick Hurley, announced the clearance of several occupied buildings. When veterans defended themselves against violent police actions, the president ordered the complete removal of the protesters by federal troops under the command of General Douglas MacArthur. Most veterans and their accompanying families fled this show of overwhelming force and the tear-gas attacks. President Hoover defended his general’s actions without reservation, convinced that the veterans had threatened the very existence of government.
The War Department derided the protesters as a “mob of tramps and hoodlums” and “Communist agitators,” and claimed that MacArthur had acted with “unparalleled humanity and kindness.”54 Those who saw the photographs of the event did not agree, and veterans’ groups—hardly organizations with Communist sympathies—expressed bitter resentment over the Red-baiting. At a time when authoritarian regimes moved with military force against poor and destitute civilians in the waning democracies of Europe, Hoover’s harsh reaction to the Bonus Army was unforgivable to most Americans—more so than any blunder in fiscal and economic policy.
Today, we often explain Franklin D. Roosevelt’s New Deal as a response to the Great Depression. But the Great Depression only in part explains the New Deal and its popularity. Just as important was Herbert Hoover’s administration and its failures. For three successive Republican administrations, Americans had generally appeared to agree with former President Calvin Coolidge’s famous dictum that “the business of America is business.” By 1932, however, both business and government seemed to have reached their wits’ end. The nation had tumbled not only further into economic crisis, but also into a deep political crisis that was the immediate outgrowth of the failed policy responses of the Hoover Administration. Before Roosevelt’s New Deal began to turn the tide, the failure of government in the United States and other capitalist nations had fed instability in the world economy and in global politics. The nation’s confidence was so shaken by November 1932 that Republican governor Alf Landon of Kansas could speculate aloud that only “the hand of a dictator” could turn the country around. Pennsylvania Senator David A. Reed (R) warned that “if this country ever needed a Mussolini,” referring to the Italian fascist dictator, “it needs one now.”55
The 1932 election was a powerful rejection of Herbert Hoover and an expression of hope in Franklin D. Roosevelt. Not that New York’s former governor had an answer to the economic troubles of the time. Like Hoover, he attacked extravagant government spending and actually promised a 25 percent cut in the federal budget. He described the gold standard as a sacred covenant and mocked suggestions by the Farm Board that the answer to agricultural overproduction was plowing under crops in return for government payments. The famous and highly respected columnist Walter Lippmann expressed disappointment with the Democratic challenger. “Franklin D. Roosevelt is no crusader,” he wrote. “He is no tribune of the people. He is no enemy of entrenched privilege. He is a pleasant man who, without any important qualifications for the office, would very much like to be President.”56
But Roosevelt had learned from his predecessor’s failures, and he mustered the political will for experimentation. In a nationwide radio address he advocated “persistent experimentation” in the fight against the Depression and for a “wiser, more equitable distribution of the national income.”57 Historians have noted the many disappointments with Roosevelt’s pragmatism, but none can point to a president with a bigger portfolio of accomplishments in real social change.
Roosevelt wasted no time, and the day after his inauguration, he summoned Congress into an emergency session for the coming week to address another round of bank failures. Conservative in nature, the Emergency Banking Act extended government assistance to private bankers to allow them to reopen their banks, authorized the issue of new Federal Reserve bank notes, and penalized the hoarding of cash reserves. Critics on the left were aghast at Roosevelt’s adoption of a plan proposed by Herbert Hoover’s advisors.58 But when Roosevelt explained to some 60 million Americans in his “Fireside Chat” that it was now safe to return their savings to the banks, they believed him. The next day, cash deposits in banks far exceeded withdrawals in every city. “Capitalism,” Raymond Moley later marveled, “was saved in eight days.”59
Roosevelt’s first one hundred days in the presidency had been a whirlwind. Press conferences followed the biweekly cabinet meetings. The president delivered a dozen speeches and guided fifteen major laws through Congress. Roosevelt had promised “action,” and he clearly delivered. Regardless of the laws’ different impacts, Americans were mostly convinced that the president cared about them and was willing to do whatever it took to bring about economic recovery. Walter Lippmann, the man who had dismissed Roosevelt as merely a “pleasant man” a few months prior, now mused: “At the end of February we were a congeries of disorderly panic stricken mobs and factions. In the hundred days from March to June we became again an organized nation confident of our power to provide for our own security and to control our own destiny.”60 Roosevelt’s government provided the relief President Hoover had denied. And the New Deal also included legislation intent on reforming the pillars of the American economy: agriculture, industry, banking, and Wall Street.
There is much confusion today about the underlying economic theory that propelled Roosevelt and his Brain Trust. His policies have often mistakenly been associated with Keynesianism—a deliberate federal budget deficit to compensate for declining private demand with public demand for goods and services. Many have pointed to the public works projects and relief efforts as evidence of this desire to restore flagging demand with government funds. However, historians have learned from the exchanges between the president and his Cabinet that relief was always the primary objective, and the expansion of demand merely a secondary effect, one Roosevelt was willing to sacrifice. And he often did, with the result that deliberately countercyclical fiscal policy became common practice only after World War II.
Regardless of the conflict over the importance of demand-management in the Roosevelt administration, the historical data provide strong evidence that the impact of additional government dollars in the economy was at once unprecedented in scale and very weak in effect. This was not only the result of Roosevelt’s conservative approach to the federal budget and his conviction that a large deficit was as immoral for government as for individual households. Individual states also shaped the impact of government spending with their own fiscal policies. The additional spending provided by Washington, D.C., was almost entirely canceled out by the shrinking budgets of individual states. States had already been frugal in the face of economic crisis before the New Deal, but as the new administration channeled relief funds to state governments, these often decided to cut their own spending even further and let federal monies carry the burden. Economist E. Cary Brown demonstrated in 1956 that in only two years out of seven between 1933 and 1940 did federal expenditures exceed the contracted spending on state and local levels. When it came to fiscal policy, Brown concluded that demand management “seems to have been an unsuccessful recovery device in the thirties—not because it did not work but because it was not tried.”61 In the opinion of Roosevelt advisor Alvin Hansen, the New Deal was best described as “a salvaging program.”62 But this characterization undersells the scale of economic recovery. Between 1933 and 1941, the nation’s gross domestic product grew by 7.7 percent per year on average—growth rates this nation’s economy has not witnessed since.63
As the Great Recession built momentum late in 2007 and early in 2008, the key decision makers in the United States economy were quite familiar with this record. And while none of them tried very hard to convince Congress of the urgency of the situation in order to produce a forceful legislative response to the looming asset crisis in the nation’s leading investment banks, they all—from Secretary of the Treasury Henry Paulson to Ben Bernanke and Timothy Geithner at the Federal Reserve—knew what emergency actions to take. More than anyone else, Federal Reserve chairman Ben Bernanke, one of the best students of the financial collapse of the Great Depression, stretched the authority of his institution to the hilt, determined not to repeat the failures of his predecessors in the 1930s. President Obama followed Henry Paulson’s initial stimulus bill with a massive public spending program that directly applied Keynesian economic theory and was far larger as a share of GDP than any spending increase during the New Deal. Although weakened by a compromise with Republicans to turn some of the stimulus into tax cuts—which translated into increased taxpayer savings rather than increased consumer spending—the combined monetary and fiscal response to the financial crisis of 2008 reduced the economic fallout to a recession rather than a depression. And this recession proved significantly less devastating for Americans who could fall back on unemployment insurance—one of the New Deal legacies and economic stabilizers we take for granted today.64 In addition, European and Asian economies did not respond to their own entanglement in the financial crisis with hectic reductions in spending but provided generous bailout funds and reduced interest rates.65 The one lesson that decision makers around the world had learned from the Great Depression was not to allow the financial system to implode, and to restore “market confidence.”
The reward for the concerted emergency response at the national and international levels was a recession rather than an economic and political calamity. The price for averting the breakdown of the economy and the political system, however, was a lack of political will for significant interventions on behalf of credit consumers, homeowners, and the poor, and a growing discontent over the slow pace of recovery. It took Republicans fourteen years to recover from the damage their brand had suffered from the Great Depression. It took Tea Party activists only two years after President Obama’s victory in November 2008 to overpower the nation’s political discourse with a debate about fiscal responsibility and calls for a return to Hoover economics. A similar trend developed overseas, where the seemingly quick averting of a global economic catastrophe misled policymakers in Germany, England, and other European countries into thinking that the Continent was experiencing not an economic slump but a debt crisis. So, in an ironic twist that only history can deliver, the lessons learned from the Great Depression have helped re-popularize “Depression Economics” in the United States and Europe and revive the very Austrian economics that both the Great Depression and the Great Recession had proven wrong.
1. Lee Ohanian, “Understanding Economic Crises: The Great Depression and the 2008 Recession,” The Economic Record, 86 [special issue] (September 2010): 2–6; Andrew Leonard, “Herbert Hoover: The Working Man’s Hero,” Salon.com, August 28, 2009, accessed March 12, 2013, available at http://www.salon.com/2009/08/28/pro_labor_herbert_hoover.
2. “Five questions with Mitt Romney,” Las Vegas Review Journal, Video, October 17, 2011.
3. Herbert Hoover, Memoirs (New York: Hollis and Carter, 1952), 30.
4. Lionel Robbins, The Great Depression (New Brunswick and London: Transaction Publishers, 2009 [1934]), 43.
5. Amity Shlaes, The Forgotten Man: A New History of the Great Depression (New York: Harper, 2008). Shlaes accuses FDR of not believing in the capitalist economy, and, of course, his critics on the left think just the opposite.
6. “Lessons of the 1930s: There Could Be Trouble Ahead,” The Economist, December 10, 2011, accessed March 12, 2013, available at http://www.economist.com/node/21541388/print.
7. Robert S. McElvaine, The Great Depression: America, 1929–1941 (New York: Times Books, 1984), 18–20.
8. Stanley Lebergott, The American Economy: Income, Wealth, and Want (Princeton, NJ: Princeton University Press, 1962), 248–299; Kenneth Jackson, Crabgrass Frontier: The Suburbanization of the United States (New York: Oxford University Press, 1985), 163; David M. Kennedy, Freedom from Fear: The American People in Depression and War, 1929–1945 (New York: Oxford University Press, 1999), 21.
9. Claudia Goldin, “America’s Graduation from High School: The Evolution and Spread of Secondary Schooling in the Twentieth Century,” Journal of Economic History, 58, no. 2 (1998): 345–374.
10. Martha L. Olney, Buy Now, Pay Later: Advertising, Credit, and Consumer Durables in the 1920s (Chapel Hill: University of North Carolina Press, 1991).
11. Thomas F. Huertas, Crisis: Cause, Containment and Cure (New York: Palgrave Macmillan, 2010), 9.
12. Nick Wingfield, “Apple’s Job Creation Data Spurs an Economic Debate,” New York Times, March 4, 2012, B1; “Apple: We Made 514,000 Jobs,” CNN Money: Economy Blog, by Charles Riley, March 5, 2012, accessed July 26, 2013, available at: http://economy.money.cnn.com/2012/03/05/apple-we-made-514000-jobs/; General Motors Company Report to the Securities and Exchange Commission, Washington, D.C., 2012, 15.
13. Stanley Lebergott, Manpower and Economic Growth: The American Record since 1800 (New York: McGraw-Hill, 1964); R. M. Coen, “Labor Force Unemployment in the 1920s and 1930s: A Re-examination Based on Postwar Experience,” Review of Economics and Statistics, 55 (1973): 46–55; Alexander J. Field, “Technological Change and U.S. Productivity Growth in the Interwar Years,” The Journal of Economic History, 66, no. 1 (March 2006): 203–236.
14. W. Elliot Brownlee, Dynamics of Ascent: A History of the American Economy (New York: Knopf, 1979), Chapter 12.
15. Mary P. Ryan, Womanhood in America: From Colonial Times to the Present, 3rd. ed. (New York: Franklin Watts, 1983), 229–231, 249.
16. Historical Statistics, Series D4, D7, D8 (Washington, D.C.: Government Printing Office, 1975).
17. Sanford M. Jacoby, Modern Manors: Welfare Capitalism Since the New Deal (Princeton, NJ: Princeton University Press, 1997), 57–58.
18. Lizabeth Cohen, Making a New Deal: Industrial Workers in Chicago, 1919-1939 (New York: Cambridge University Press, 1990), 94–97.
19. John Dicker, “Union Blues at Wal-Mart,” The Nation, 275, no. 2 (July 8, 2002): 14–19.
20. Employment by major industry sector, 2000, 2010, and projected 2020, Bureau of Labor Statistics, 2000, accessed July 26, 2013, available at http://www.bls.gov/news.release/ecopro.t02.htm.
21. Jaroslav Vanek, “From Great Depression to Great Recession,” International Review of Economics and Finance, 20 (2011): 131–134, 133.
22. Roger W. Ferguson Jr. and William L. Wascher, “Distinguished Lecture on Economics in Government: Lessons from Past Productivity,” Journal of Economic Perspectives, 18, no. 2 (Spring 2004): 3–28, 9; Martin Wachs, “Autos, Transit, and the Sprawl of Los Angeles: The 1920s,” Journal of the American Planning Association, 50, no. 3 (Summer 1984): 297–310, 302.
23. Barry Supple, “The Political Economy of Demoralization: The State and the Coalmining Industry in America and Britain Between the Wars,” Economic History Review, 41, no. 4 (1988): 566–591.
24. Liaquat Ahamed, Lords of Finance: The Bankers Who Broke the World (New York: Penguin Press, 2009), 228–240; Nicholas Crafts and Peter Fearon, “Lessons from the 1930s Great Depression,” Oxford Review of Economic Policy, 26, no. 3 (2010): 285–317, 289–290.
25. Milton Friedman and Anna Schwartz, A Monetary History of the United States, 1867–1960 (Princeton, NJ: Princeton University Press, 1963), 265–295.
26. Paul Krugman, “The Economic Failure of the Euro,” Fresh Air, Boston: WHYY-NPR, first aired on January 25, 2011; Peter Temin, “The Great Recession and the Great Depression,” Daedalus, 139, no. 4 (Fall 2010): 115–124, 117.
27. Jeremy Atack and Peter Passell, A New Economic View of American History from Colonial Times to 1940 (New York: W.W. Norton & Company, 1994), 576–577.
28. Alexander Field, A Great Leap Forward: 1930s Depression and U.S. Economic Growth (New Haven: Yale University Press, 2011), 69. According to Field, 83 percent of the economic growth of the 1920s came from manufacturing.
29. Temin, 115–124.
30. T. H. Watkins, The Great Depression: America in the 1930s (Boston: Back Bay Books, 1993), 38; Harper’s Weekly quoted by Gordon Thomas and Morgan Witts, Day the Bubble Burst: A Social History of the Wall Street Crash of 1929 (New York: Doubleday 1979), 191; Steve Fraser, Every Man a Speculator: A History of Wall Street in American Life (New York: Harper Perennial 2006), 390; Lawrence E. Mitchell, The Speculation Economy: How Finance Triumphed over Industry (San Francisco: Berrett-Koehler Publishers, 2007), 269.
31. Michael E. Parrish, Anxious Decades: America in Prosperity and Depression, 1920–1941 (New York: W.W. Norton & Company, 1992), 229–230.
32. John Kenneth Galbraith, The Great Crash 1929 (Boston: Mariner Books, 1961), 57.
33. Robert Sobel, The Great Bull Market: Wall Street in the 1920s (New York: W.W. Norton & Company, 1968).
34. William E. Leuchtenburg, The Perils of Prosperity, 1914–1932 (Chicago: University of Chicago Press, 1958), 183–184.
35. Parrish, 232.
36. Jeremy Atack and Peter Passell, A New Economic View of American History from Colonial Times to 1940 (New York: W.W. Norton & Company, 1994), 590.
37. Christina Romer, “The Great Crash and the Onset of the Great Depression,” Quarterly Journal of Economics, 105 (1990): 597–624.
38. John Cassidy, Dot.con: How America Lost Its Mind and Money in the Internet Era (New York: Harper Collins, 2002).
39. Enron Corporation was a Houston-based energy and commodities trading firm that was widely celebrated during the 1990s for its innovation and its generous compensation and benefits packages for its employees. In October 2001, it became apparent that the company had hidden billions of dollars in debt through accounting practices; it also emerged that the company had willfully manipulated energy prices in California to extreme levels the previous year. The bankruptcy wiped out $11 billion for shareholders, cost 20,000 employees their jobs and their Enron-stock-dominated 401(k)s, and led to the dissolution of Arthur Andersen, the accounting firm that had approved Enron’s accounting manipulations. A year later, news broke that WorldCom CEO Bernard Ebbers had inflated the value of the company by $3 billion with accounting tricks to uphold the company’s stock price. Ebbers was sentenced to 25 years in prison, and WorldCom went through Chapter 11 bankruptcy protection and later emerged as MCI, which was then acquired by Verizon. Jerry Bowyer, The Bush Boom: How a Mis-Underestimated President Fixed a Broken Economy (New York: Allegiance Press, 2003); Jerry Bowyer, “Perfectly Rational Exuberance,” The National Review Online, October 4, 2006, accessed July 26, 2013, available at http://www.nationalreview.com/articles/218896/perfectly-rational-exuberance/jerry-bowyer.
40. Alex M. Azar II, “FIRREA: Controlling Savings and Loan Association Credit Risk Through Capital Standards and Asset Restrictions,” Yale Law Journal, 100 (1990): 149, 153; Patricia A. McCoy and Elizabeth Renuart, “The Legal Infrastructure of Subprime and Nontraditional Home Mortgages,” in Nicolas P. Retsinas and Eric S. Belsky, eds., Borrowing to Live: Consumer and Mortgage Credit Revisited (Washington D.C.: Brookings Institution Press, 2008), 110; Baher Azmy, “Squaring the Predatory Lending Circle,” Florida Law Review, 57 (2005): 295, 310–311.
41. For an insightful and engaging exploration of the sub-prime mortgage industry and the people who bet on the implosion of this bond market with credit default swaps, see Michael Lewis, The Big Short: Inside the Doomsday Machine (New York: W.W. Norton & Company, 2011), 22–27; Kathleen C. Engel and Patricia A. McCoy, “A Tale of Three Markets: The Law and Economics of Predatory Lending,” Texas Law Review, 80 (2002): 1255, 1273.
42. Susan E. Hauser, “Predatory Lending, Passive Judicial Activism, and the Duty to Decide,” North Carolina Law Review, 86 (2008): 1501, 1509–1510; Bob Tedeschi, “Subprime Loans’ Wide Reach,” New York Times, August 3, 2008, RE10.
43. Damon Silver and Heather Slavkin, “The Legacy of Deregulation and the Financial Crisis–Linkages Between Deregulation in Labor Markets, Housing Finance Markets, and the Broader Financial Markets,” Journal of Business & Technology Law, 4, no. 2 (2009): 304–347, 334.
44. Lewis, The Big Short, 156–158.
45. Alan Greenspan, “Testimony on the Federal Reserve Board’s Semiannual Monetary Policy Report to the Congress,” Committee on Banking, Housing, and Urban Affairs, U.S. Senate, February 16, 2005, accessed July 26, 2013, available at http://www.federalreserve.gov/boarddocs/hh/2005/february/testimony.htm.
46. Thomas F. Huertas, Crisis: Cause, Containment and Cure (New York: Palgrave Macmillan, 2010), 7–9.
47. Eric Stein, “Turmoil in the U.S. Credit Markets: The Genesis of the Current Economic Crisis,” Testimony of Eric Stein, Center for Responsible Lending, Before the U.S. Senate Committee on Banking, Housing, and Urban Affairs, 110th Congress, Washington, D.C., October 16, 2008, accessed March 11, 2012, available at http://www.responsiblelending.org/mortgage-lending/policy-legislation/congress/senate-testimony-10-16-08-hearing-stein-final.pdf.
48. Damon Silver and Heather Slavkin, “The Legacy of Deregulation and the Financial Crisis—Linkages Between Deregulation in Labor Markets, Housing Finance Markets, and the Broader Financial Markets,” Journal of Business & Technology Law, 4, no. 2 (2009): 304–347. In 2007, the United States personal (household) savings rate was below 2 percent. Cinzia Alcidi and Daniel Gros, “Great Recession versus Great Depression: Monetary, Fiscal and Banking Policies,” Journal of Economic Studies, 38, no. 6 (2011): 673–690.
49. Charles P. Kindleberger and Robert Z. Aliber, Manias, Panics, and Crashes: A History of Financial Crises (New York: Palgrave Macmillan, 2011).
50. “Lessons of the 1930s: There Could Be Trouble Ahead,” The Economist, December 10, 2011, 1–7; Milton Friedman and Anna Schwartz, A Monetary History of the United States, 1867–1960 (Princeton, NJ: Princeton University Press, 1971); Ben S. Bernanke, Essays on the Great Depression (Princeton, NJ: Princeton University Press, 2000).
51. Joseph Stiglitz, Freefall: America, Free Markets, and the Sinking of the World Economy (New York: W.W. Norton & Company, 2010), 270–271.
52. Ben Bernanke, “Nonmonetary Effects of the Financial Crisis and the Propagation of the Great Depression,” American Economic Review, 73 (1983): 257–276.
53. McElvaine, 73–74.
54. Roger Daniels, The Bonus March: An Episode of the Great Depression (Westport, CT: Greenwood Publishing Co., 1971).
55. Michael Schaller, Robert Schulzinger, John Bezís-Selfa, and Janette Thomas Greenwood, American Horizons: U.S. History in a Global Context, Volume II: Since 1865 (New York: Oxford University Press, 2012), 882.
56. Walter Lippmann, “The Candidacy of Franklin D. Roosevelt,” New York Herald Tribune, January 8, 1932, quoted in Sally Denton, The Plots Against the President: FDR, a Nation in Crisis, and the Rise of the American Right (New York: Bloomsbury Press 2012), 18.
57. Davis W. Houck, Rhetoric as Currency: Hoover, Roosevelt and the Great Depression (College Station, TX: Texas A&M University Press, 2001), 121.
58. William Lemke to the Farmers’ Union Convention in Omaha, Nebraska, quoted in “Lorena Hickock to Harry Hopkins, November 23, 1933”; Hopkins MSS. See William E. Leuchtenburg, Franklin D. Roosevelt and the New Deal, 1932–1940 (New York: Harper and Row, Publishers, 1963), 44, note 7.
59. Raymond Moley, After Seven Years (New York: Harper & Brothers Publishers, 1939), 155.
60. Parrish, 297.
61. E. Cary Brown, “Fiscal Policy in the ‘Thirties’: A Reappraisal,” American Economic Review, 46 (December 1956): 857–879.
62. A. H. Hansen, Fiscal Policy and Business Cycles (New York: W.W. Norton & Co., 1941), 84.
63. Historical Statistics of the United States: Colonial Times to 1970, Part 2 (Washington, D.C.: U.S. Bureau of the Census, 1975), 217–218.
64. Temin, “The Great Recession,” 115–124, 122–123; Price Fishback, “US Monetary and Fiscal Policy in the 1930s,” Oxford Review of Economic Policy, 26, no. 3 (2010): 385–413, 386–387.
65. Felda Chay and Quah Chin Chin, “The Great Depression 2.0? Given the Scale of the Current Financial Turmoil, Will the World See a Repeat of the Depression of the 1930s?” The Business Times Singapore, October 27, 2008, 1; Paul Krugman, The Return of Depression Economics and the Crisis of 2008 (New York: W.W. Norton & Company, 2009); Andrew Leonard, “Paul Krugman’s Depression Economics,” Salon.com, December 8, 2008, accessed July 26, 2013, available at http://www.salon.com/2008/12/08/paul_krugman_2; “Feature: How the World Works; Lessons of the 1930s: There Could Be Trouble Ahead,” The Economist, December 10, 2011, accessed July 26, 2013, available at http://www.economist.com/node/21541388/print.