10.1 Introduction
The economy of the early twenty-first century is not just a larger version of the economy of the nineteenth century. It is fundamentally different. This chapter traces the development of the American economy from the middle of the twentieth century through the financial crisis and recession of 2008 and beyond. In 2008 Barack Obama was elected president of the United States amid a great deal of optimism, but 2010 saw a conservative resurgence built on poor economic growth, and 2016 brought the election of a right-wing “populist,” Donald Trump. How did the economy arrive at this point, and what does the future portend? To answer this crucial question, we need to look carefully at the patterns of history as well as carefully examine the scientific data, which we do in the remainder of this chapter.
The years following the end of the Second World War were a time when the wealth and power of the United States were on the rise. After the stagnation of the depression and the sacrifice of the war years, there was, once again among a large proportion of the American population, a belief in abundance. From the depths of the depression was born the “golden age” of the American economy. The era was characterized by the growing international power of the United States, both economically and militarily. The wealth that flowed in from the rest of the world was shared more broadly, and with a greater segment of the working population, than at any time since the industrial revolution. Home ownership became a reality for a greater share of the population, and it could be achieved upon a single income. Disney’s “Tomorrowland” showcased “the house of the future” replete with all-electric appliances, futuristic design, and virtually no attention to insulation or energy conservation. The days of conservation and sacrifice were gone. Spacious automobiles traversed newly constructed freeways to arrive at Disneyland in Anaheim, California, from far-flung suburbs. And they brought kids, lots of them, as the “baby boom” was just gaining headway. The future looked promising. It was a future based on cheap oil and economic growth.
But the year following the opening of Disneyland in 1955 was a year of warning. In 1956 the nationalization of the Suez Canal by Egypt halted briefly the shipments of cheap and abundant oil to Europe and threatened the existing international order. Roger Revelle and Charles Keeling first began to measure carbon dioxide concentrations in the atmosphere, and M. King Hubbert wrote his famous paper predicting the peak of domestic oil production a mere 15 years in the future. But the academics were ignored and the crisis in North Africa was quickly brought under control. It was a time when Americans could seemingly do anything, including building the dream of happiness through material abundance and perpetual growth.
To recap: It was a time of peace, and peace on American terms, Pax Americana. The industrial capabilities of other nations had been decimated, while the war had rekindled US industry from the depths of the depression. No other nation could match US industrial output. Rather than seeing the European nations as serious competitors, national and international policy sought to shore up their devastated infrastructures and their demands for goods, particularly US goods. In addition, the US dollar replaced the repudiated gold standard. Because of the dollar’s surging strength, the rest of the world was, in effect, willing to extend the United States interest-free loans by holding dollars rather than their own currencies. Since the world’s resources, including oil, were denominated in dollars, the country could buy in a buyer’s market and sell in a seller’s market as the terms of trade (or ratio of export price to import price) consistently favored the United States. Most of the revenues of oil-producing countries were recycled back into the US economy as foreign nations used their petrodollars to buy bonds from the US Treasury.
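The terms-of-trade advantage can be made concrete with the standard index formula; the numbers below are purely illustrative rather than historical data:

\[ \text{Terms of trade} = \frac{P_{\text{exports}}}{P_{\text{imports}}} \times 100 \]

If US export prices rise by 10% while import prices are unchanged, the index moves from 100 to 110, meaning each unit of exports now pays for 10% more imports, exactly the kind of persistent advantage described above.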
US business could look at the world as its oyster. Little foreign competition existed to threaten the nation’s large oligopolies, the dollar was the international currency, and US demand was stable and rising. The government pledged to use its economic policies to limit the kind of ruthless cutthroat competition that characterized the earlier industrial era, and the war mobilization itself was highly favorable to business. Antitrust policy seemed to be directed more toward keeping new firms from upsetting the industrial balance than toward breaking up the older concentrated industries that had just helped win the war. Industry after industry, such as automobiles, breakfast cereals, and petroleum refining, settled comfortably into “Big Threes” or “Big Fours.” In fact, a new merger movement was about to begin, based on the conglomerate merger, which combined firms from seemingly unrelated industries. Finally, it was the age of cheap oil, and the United States was still the dominant oil producer in the world. The great finds of the 1930s saw little use during the depression, leaving spare capacity that allowed the United States to supply 70% of the oil for the Allied war effort. Cheap oil, in conjunction with the aforementioned structural changes, helped fuel mass consumption, economic growth, and military muscle for years to come.
The “good times” were about to end. By 1970 Hubbert’s ominous prediction had proven accurate as US oil production for the “lower 48” peaked. (This did not include the Alaskan reserves, because Alaska did not become a state until 1959 and therefore did not figure in Hubbert’s calculations.) Oil price shocks buffeted the economy in 1973 and 1979, threatening both the mobile lifestyle and economic growth. US producers no longer had the spare capacity to keep foreign producers from using “the oil weapon.” This was the era that saw the rise to power of OPEC, the Organization of the Petroleum Exporting Countries. The 1970s and early 1980s were the time of stagflation, or simultaneous inflation and stagnation. Under mainstream Keynesian theory, inflation was supposed to occur only if demand continued to expand past the level that would support full employment. But prices rose even in the presence of substantial unemployment. Keynesian policies no longer seemed to work. If the government pursued an expansionary policy, inflation worsened. If it cut its spending or raised taxes to reduce budget deficits, or made money harder to come by, unemployment soared to politically unacceptable levels. Moreover, the international monetary accords conceived and born in Bretton Woods, New Hampshire, in 1944 collapsed under their own weight and US policy. The accords had been built upon the willingness of the United States to convert holdings of dollars to gold at $35 per ounce. By the early 1970s, foreign claims exceeded the magnitude of the gold supply. President Richard Nixon closed the gold window in 1971, ushering in a new era in international monetary politics: one that was far less favorable to the growth of the United States.
Part of the expansion of foreign dollar holdings was based on the expansion of American business abroad, and part was attributable to increased military expenditures such as rental payments for bases, wages and profits paid to local workers and firms, and the spending of military personnel stationed in foreign countries. The war in Southeast Asia was not going well. RAND Corporation analyst Daniel Ellsberg, a neoclassical economist respected for his work on decision-making under conditions of uncertainty, expressed dismay after briefing high-level government officials on the conditions on the ground, only to have them turn around and tell a far more optimistic story to the nation. Ellsberg released the “Pentagon Papers,” showing the disconnect between the assessments of war planners and public officials, to the New York Times in 1971, earning a spot on Richard Nixon’s “enemies list” and the honor of being called “the most dangerous man in America” [1]. Ellsberg was correct in his assessment, and by the end of April 1975, North Vietnamese tanks had broken through the gates of the Presidential Palace in Saigon, heralding the end of America’s longest war to date. By 1979 the “friendly” government of the Shah fell in Iran. Hopes for a democratic government were dashed as the mullahs seized power and proclaimed the “Islamic Republic.” Oil prices soared, and Business Week lamented “the decline of US power” in its special issue of March 12, 1979 [2]. The terms of trade began to favor the United States less, and corporate profits fell [3].
America, along with much of Europe, took a more conservative turn. Ronald Reagan in the United States and Margaret Thatcher in the United Kingdom gained power and began to develop new economic policies that departed from the prevailing Keynesian ideology: lowering taxes, remilitarization, anti-union campaigns, reduced domestic spending, the deregulation of business and finance, and restrictive monetary policies to reduce inflation. Things were changing in other nations as well. Social Democratic governments in Germany, Sweden, France, and Italy were replaced by conservatives. The Soviet Union, crippled by falling oil prices and cold-war military spending, did not achieve the state of advanced socialism called for by the Politburo in the post-Brezhnev days, and the openness (glasnost) and restructuring (perestroika) called for by Mikhail Gorbachev led to the dismantling of the USSR. The Chinese Communist Party began openly to court entrepreneurs. The cold war was won, and there were no viable alternatives to multinational capitalism. Yet economic growth did not respond over the long term, despite great new finds of oil in the North Sea and on the North Slope of Alaska. Debt swelled as well, from less than $5 trillion in 1980 to approximately $15 trillion in 1990. The United States went from being the world’s greatest creditor to the world’s greatest debtor in less than a decade. The Clinton Administration completed the work of the “Reagan Revolution,” fully deregulating the US financial services industry, trading proposed environmental legislation for the North American Free Trade Agreement, and ending “welfare as we know it.” Eight years of the administration of George W. Bush saw two inconclusive wars and the explosion of a debt economy that ended with the financial meltdown and housing crisis of 2008. Oil prices rose to historic highs in the same summer. As the first edition of this book went to press, the financial crisis had turned into the worst economic downturn since the Great Depression.
In 2008, Barack Obama was elected president of the United States with a great deal of optimism. But 2010 saw a conservative resurgence, and the 2016 elections resulted in conservatives controlling all three branches of the US government. The decade since the election of Barack Obama witnessed the rise of techniques for extracting oil and natural gas from tight shale formations by means of hydraulic fracturing, along with a decrease in oil prices. Rates of economic growth rose from a depression-level 1.3% in the decade of the 2000s to an anemic 2.1% in the decade of the 2010s. Unemployment fell from 10% to less than 5% by 2016. Yet wages did not rise as productivity increased. The hallmark health-care legislation, the Affordable Care Act, was subject to constant attack and was not as affordable as envisioned. This is not surprising to us. Health care remained in the hands of concentrated pharmaceutical and insurance companies, along with monopolizing hospitals. Monopolies restrict output and raise prices. That is their fundamental business model. Why, despite renewed growth and cheaper energy, did prosperity remain so narrowly shared? To answer that question, we need to understand historically how we got to where we are now. It is to this task that the rest of this chapter turns.
10.2 Historical Antecedents: Depression and War
It is our contention that the events of economic history cannot be explained by social and economic forces alone. The role of energy must be included. Neither does a pure analysis of energy availability and use explain our current situation by itself. Rather, the two should be analyzed in conjunction. Historically the United States economy has experienced three major depressions, in the 1870s, 1890s, and 1930s. All came after the discovery and exploitation of fossil hydrocarbons. The ability to acquire and use energy allowed the dramatic expansion of production, as the concentrated and highly dense new sources of energy could transcend the strength and often the skill limitations of humans. However, the economy is still limited by the capacity to sell the products at a profit, expand markets, and realize the gains of productivity. When this does not occur, the economy slips into depression. “Overinvestment at the end of economic booms has characterized the economy since the middle of the nineteenth century.” In the oil industry another factor came into play: 1930, the first full year of the depression, was the peak year of oil discoveries. With a limited market due to depressed conditions, the oil was merely stored. Severe economic downturns may result from a lack of crucial resources such as oil, but they may also result from an overabundance of them.
The second half of the twentieth century, from the 1950s onward, was characterized as an age of economic growth. The 1950s and 1960s were “golden years,” while the 1970s were an age of stagnation. Economic growth revived somewhat in the 1980s, but the burden of debt soared. The long-term consequences of the creation of a casino economy came due in 2008. But what does the future portend? Will we, through social reorganization, transcend our current problems, or will a set of external, biophysical limits augment the preexisting social ones to produce an age of austerity? To answer this crucial question, we need to look carefully at the patterns of history as well as examine the scientific data carefully.
The world economy collapsed into depression for the entire decade of the 1930s. In the United States the presidential election of 1932 pitted two candidates with opposite opinions as to the depression’s origins. Incumbent President Herbert Hoover believed the cause stemmed from the Great War and the subsequent Treaty of Versailles that ended it. The victorious Allied powers redrew the map of the Middle East as they dismembered the Ottoman Empire, which had sided with Germany and the Austro-Hungarian Empire in the war. The new map showed a curious phenomenon: places with large populations had little oil, and places with abundant oil reserves had very few people. The Austro-Hungarian Empire was also broken up, creating the new nations of Austria and Hungary. While Germany itself was not dismembered, it was stripped of its African colonies, forced to accept sole responsibility for the war, and made to pay some $33 billion in reparations to Britain and France. Germany was also deindustrialized, and the area to the west of the Rhine River was demilitarized. Without the industrial wherewithal to pay the reparations, the German economy was essentially crippled. To pay the reparations, Germany borrowed money from banks in the United States (the Germans had also borrowed heavily from US banks in the decade preceding the First World War). The British and French then used the reparations payments to repay their wartime loans from the United States, which had emerged from the war as an international creditor. The US banks then loaned the money back to Germany. The stock market collapse of 1929 and the subsequent banking collapses of the early 1930s disrupted this precarious and unstable system. Unable or unwilling to continue, US banks stopped the loans to Germany, which then defaulted on its reparations payments to England and France. The British and French no longer had the funds to repay their loans to US banks. Without the infusion of funds from the United States, the system collapsed and world trade evaporated. The United States Congress passed high protective tariffs of up to 67% on selected agricultural commodities to protect its own markets. President Hoover reluctantly signed the Hawley-Smoot Tariff despite the opposition of the nation’s most prominent economists. The British created an Imperial Preference System, and Germany contemplated a policy of economic self-sufficiency. World trade, which stood at $36 billion in 1929, dropped to $12 billion in 1932 [4].
The tariff and trade situation was exacerbated by the international gold standard. Under its provisions, a nation was obligated to pay off any trade deficit in gold on an annual basis. However, since gold also backed domestic currencies, nations had to drain their domestic money supplies in order to square their international balances. Theoretically this was supposed to reduce prices and make a nation’s exports more attractive to potential importers. In practice, the reduction of money touched off not only falling prices (deflation) but also unemployment, recession, and international speculation against debtor nations’ currencies. Panicked investors in the United States withdrew their deposits, precipitating a banking panic in 1930. Faced with just such a gold drain, the British suspended the gold standard in 1931, adding to the predicament of US banks as international deposits were withdrawn. In addition Hoover advanced legislation to increase tax rates in order to enhance revenue and balance the domestic budget. He believed that balancing the nation’s budget would provide the banking system with desperately needed liquidity. However, the economy slipped deeper into depression as wealth creation declined, along with tax revenues. The Federal budget slipped into a deficit of $2.7 billion, at the time the largest peacetime deficit in American history. Much of this deficit resulted from Hoover-era policies to stimulate the economy by injecting funds into its struggling sectors.
In 1932 Congress passed the first Glass-Steagall Act, which made it possible for the nation’s central bank (the Federal Reserve) to release large amounts of gold from its holdings, thereby expanding the monetary base. (A second Glass-Steagall Act, the Banking Act of 1933, would later make the banking system safer by separating speculative securities trading, or investment banking, from taking deposits and making loans, or commercial banking.) Also in 1932 Congress passed the Federal Home Loan Bank Act, which allowed banks to present mortgage paper for rediscounting and to use mortgages as collateral in obtaining loans of badly needed capital. Finally, Hoover proposed the creation of the Reconstruction Finance Corporation (RFC), which was designed to allow the government to loan taxpayer dollars directly to struggling financial institutions. Congress initially capitalized the RFC at $500 million and authorized it to borrow up to $1.5 billion. The RFC was the progenitor of the Troubled Assets Relief Program (TARP), created in the waning days of the administration of George W. Bush to deal with the financial collapse of 2008. The reaction in 1932 was as mixed and varied as was the reaction in 2008–2009. Progressives called it “socialism for the rich.” Business Week hailed the RFC as “the most powerful offensive force that governmental and business imagination has, so far, been able to command” [5].
However, given Hoover’s position that the depression was of foreign origin, his domestic policies were both tepid and hamstrung by his view of how the international economy functioned. Hoover remained committed to the principle of voluntarism and had to begrudgingly accept institutions such as the RFC. But more importantly he was more strongly committed to two of the most sacred principles of classical economics: the belief in balanced budgets and an unwavering fealty to the gold standard as the lynchpin of the international economy. He raised interest rates and taxes when the system cried out for increased credit and increased spending, largely because of his belief that not doing so would increase the gold drain and jeopardize the position of allies and trading partners such as Great Britain.
Hoover’s Democratic rival, New York Governor Franklin Delano Roosevelt, had an entirely different conception of the causes of the depression. He believed its cause was primarily domestic. While a candidate, FDR surrounded himself with a number of Columbia academics who were branded “the Brains Trust” by a New York Times reporter. Chief among his economic advisors was Rexford Tugwell, an adherent of the “stagnation thesis” advocated by economists such as Alvin Hansen and Paul Sweezy. Roosevelt came to accept Tugwell’s perception that the mature economy had reached its frontiers and that no great epoch-making innovations would be forthcoming. The problem was one of overproduction of capital, not a shortage of it, along with its flip side of underconsumption. Roosevelt enunciated his belief in underconsumption in two campaign speeches: one at Oglethorpe University in Atlanta, Georgia, on May 22, 1932, and another at the Commonwealth Club of San Francisco that September. The Commonwealth Club speech is worth quoting at length, as it foreshadowed the tenor of New Deal programs to come. The New Deal was to be about consumption instead of production, and equity instead of growth. Roosevelt’s main theme was how to deal with the generalized overproduction he thought was the cause of the depression. This overproduction characterized the oil industry as well:
Our industrial plant is built; the problem just now is whether under existing conditions it is not overbuilt. Our last frontier has long since been reached, and there is practically no more free land…We are not able to invite the immigration from Europe to share our endless plenty. We are now providing a drab living for our own people. Clearly this calls for a reappraisal of values. A mere builder of more industrial plants, a creator of more railroad systems, an organizer of more corporations is as likely to be a danger as a help. The day of the great promoter or financial Titan, to whom we granted everything if only he would build, or develop, is over. Now our task is not discovery, or exploitation of natural resources, or necessarily producing more goods. It is the sober, less dramatic business of administering resources and plants already in hand, of seeking to reestablish foreign markets for our surplus production, of meeting the problem of under-consumption, of distributing wealth and products more equitably [6].
The New Deal was neither a well-enunciated program nor a manifesto for economic growth. Rather it was a set of sometimes contradictory experiments to pursue the goals of rescue, recovery, reform, and restructuring. FDR’s lieutenants, acting on incomplete information and in collaboration with Hoover’s financial advisors, declared a national bank holiday, closed insolvent banks, recapitalized them through the RFC, and reopened them for a trusting and newly confident public. FDR’s “fireside chats” themselves helped to restore confidence among a battered and beleaguered public. Raymond Moley, the chief advisor and organizer of the Brains Trust, believed that these efforts essentially saved capitalism in 8 days [7].
Table: US unemployment rate, 1929–1940

Year | Unemployment rate (%) |
---|---|
1929 | 3.2 |
1930 | 8.7 |
1931 | 15.9 |
1932 | 23.6 |
1933 | 24.9 |
1934 | 21.7 |
1935 | 20.1 |
1936 | 16.9 |
1937 | 14.3 |
1938 | 19.0 |
1939 | 17.2 |
1940 | 14.6 |
In addition to the banking bill, the first 100 days saw the Beer and Wine Revenue Act, designed to raise revenue in anticipation of the repeal of Prohibition, and the Economy Act, designed to cut $500 million from the Federal budget. FDR advanced two bills to deal with the stubbornly persistent problem of unemployment. The Civilian Conservation Corps (CCC) put a quarter million youth to work beautifying the nation’s countryside and working on flood control and forestry projects. The Federal Emergency Relief Act injected Federal money directly into depleted state coffers for the purpose of unemployment assistance. Concerns over energy were also a crucial component of the legislation of the first 100 days when Congress created the Tennessee Valley Authority (TVA). The Federal government had built a dam at Muscle Shoals, Alabama, to provide power for the production of nitrates, which are the basis not only of explosives but of fertilizer. The dam was completed too late to be of use in the First World War, and a cohort of private utilities successfully blocked the efforts of progressive Republican George Norris to have the Federal government operate the dam. The act not only created the authority to operate the dam but also charged the TVA with flood control, the combating of soil erosion and deforestation, and the construction of additional dams to bring electricity to the depressed rural South.
Faced with a 95% decline in home construction since 1929, Congress created the Home Owners Loan Corporation rather than committing to the large-scale expansion of public housing recommended by New York Senator Robert Wagner. The HOLC stopped the surge of defaults (up to 1000 per day) and introduced standard accounting practices into mortgage lending. This was followed by the creation of the Federal Housing Administration in 1934. Traditionally mortgages required a 50% down payment and a short-term, interest-only loan. If the homeowner was diligent with his or her payments, the note would be refinanced for another 5 years. But when the banking system crashed repeatedly from 1929 to 1933, banks were simply not in a position to refinance the loans, even if those homeowners who had retained their jobs were able to make the interest payments. The FHA replaced these traditional mortgages with low-down-payment, long-term (up to 30 years), low-interest, amortized loans in which both principal and interest were repaid in equal monthly payments. Moreover, the FHA insured these mortgages against default. Despite the insurance, bankers were reluctant to write FHA loans. Some were worried about government intrusion, while others were concerned about holding on to a low-yield asset for some 30 years. To allay the fears of the bankers, Congress subsequently created the Federal National Mortgage Association (FNMA—better known as “Fannie Mae”) to purchase the insured mortgages from lenders, creating a ready secondary market for what would otherwise be a long-held, low-yield asset. FNMA functioned successfully as a government corporation until it was privatized in 1968 [8].
Congress passed the Agricultural Adjustment Act on underconsumptionist grounds. The bill was designed to restore the balance between industry and agriculture and to raise farm incomes by restricting crop output in order to raise agricultural prices. Increased rural incomes would provide the wherewithal for the purchase of the output of industry. The bill was paid for by increased taxes on agricultural processors. The hallmark of the first 100 days was the passage of the National Industrial Recovery Act. The NIRA, along with the AAA, was aimed not just at recovery but also at restructuring the economy on the basis of rational economic planning, which would replace the newly failed market system as the basis for the regulation of prices and output. However, the Supreme Court found the NIRA and AAA unconstitutional in 1935. The conservative bloc was joined by liberal anti-monopoly crusader Louis Brandeis, who objected to the suspension of the antitrust provisions.
The National Industrial Recovery Act (NIRA) established the National Recovery Administration (NRA). It provided a series of complex codes with which businesses would have to comply, combating overproduction, in order to receive funds. The act also allowed labor unions to bargain collectively, and it established minimum wages and maximum hours. The law virtually suspended the antitrust laws. Economic theory holds that monopolies restrict output and raise prices, a strategy tailor-made for remediating falling prices and overproduction. This allowed the Federal government to plan rationally the output and prices for whole industries. The law also established the Public Works Administration (PWA), designed to administer a large-scale and ambitious infrastructure construction agenda. The PWA was charged not only with the construction of energy-related projects, but it also assumed the duties of stabilizing the near-anarchy of the oil fields of the Southern Plains [9].
After the First World War, fears of oil shortages reared their head. These fears were allayed by two large oil discoveries. In 1926, interestingly enough the peak of the 1920s automobile boom, oil was discovered in the Permian Basin of West Texas and in Oklahoma. As is common, large new additions to the supply of oil depressed prices. Oil that was selling at $1.85 per barrel in 1926 averaged only about $1 per barrel in 1930. Then, in 1930, another huge discovery was made in East Texas, one that dwarfed the combined output of Pennsylvania, Spindletop, and Signal Hill in California. The East Texas wells added another half a million barrels per day to the oil supply. Consequently prices dropped again, to as low as 10 cents per barrel in the glutted market, adding to the already falling price level precipitated by the Depression. The Texas Railroad Commission, established in the Populist era to exert control over railroads, assumed the responsibility (despite dubious legality) of regulating oil production by regulating its transport. The strategy of the Railroad Commission was one of “pro-rationing,” or limiting oil shipments to a fraction of oil reserves. Problems arose in Texas and Oklahoma (where the state’s Corporation Commission employed a similar strategy) when producers exceeded their allotted shares, shipping illegally what came to be known as “hot oil.” The problem became so pronounced that Texas Governor Ross Sterling declared that East Texas was in a state of insurrection and called upon the Texas Rangers and the National Guard to quell the problem.
The NRA was first called upon to impose its codes to reduce competition and stimulate economic recovery. The problem was severe enough, however, that newly appointed Secretary of the Interior Harold Ickes brought the regulation of the East Texas fields under the aegis of the Interior Department when he was informed, in August 1933, that oil prices had fallen to three cents per barrel. The Oil Code, established under the NRA, gave Ickes the power to set monthly quotas for each state. The anarchy in the oil fields abated under the auspices of the NRA and the Interior Department. However, when the NIRA was declared unconstitutional in 1935, a separate law, the Connally Hot Oil Act, was passed to maintain price stability [10]. The Texas Railroad Commission remained effective at reducing cutthroat competition and stabilizing prices until the 1970s. Petroleum geologist Kenneth Deffeyes, a colleague of Hubbert, realized that the US oil supply had indeed peaked when, in 1971, he read in the San Francisco Chronicle that the Commission had instructed oil companies that they could produce at 100% of capacity! [11].
The Roosevelt Administration responded to the Supreme Court’s decision that the NIRA and AAA were unconstitutional by launching a broad and progressive agenda of reform, restructuring, and redistribution in 1935, often called “The Second New Deal.” The year 1935 saw the passage of the Social Security Act, providing pensions for the elderly. It was ostensibly devised to reduce unemployment by removing the aged from the labor force and was constructed on the principle of private insurance rather than as a dole. Once again, FDR’s fiscal orthodoxy necessitated that the program be funded by regressive payroll taxation rather than from the Treasury. The increase in taxes precluded any large-scale stimulative effect. The Social Security Act also provided for Aid to Dependent Children, later modified to become Aid to Families with Dependent Children (AFDC), soon to become the backbone of the Great Society welfare programs of the 1960s. The government also became an employer with the creation of the Works Progress Administration (WPA). The WPA created jobs for construction workers who built miles of highways, public buildings, and university campuses. The WPA also employed engineers, writers, and artists. In the first year of the program, the WPA employed more than 3 million people, and 8.5 million over the life of the agency [12].
Further provisions were advanced to reform structurally the nation’s financial system. The Federal Reserve was given increased powers to conduct open-market operations, which entail the buying and selling of preexisting Treasury securities, powers needed now that the gold standard had been abandoned. Moreover, a tax bill created a strongly progressive income tax in order to achieve the goal of fairness embodied in the New Deal philosophy. These rates, up to 79% for the top incomes, were accompanied by high inheritance taxes designed to reduce the intergenerational transmission of wealth. Roosevelt’s program was certainly eclectic, with a blend of progressive and regressive taxes in conjunction with increased spending. It depended upon no clearly enunciated economic theory, such as that of John Maynard Keynes.
Perhaps the most important law of the reform era was the National Labor Relations Act, which created the National Labor Relations Board. Senator Robert Wagner had earlier inserted a similar provision (Section 7A) into the NIRA. This section established labor unions, formerly seen as “conspiracies in restraint of trade,” as the legitimate representatives of workers in the process of collective bargaining. Collective bargaining would not only increase wages and serve the goals of redistribution, but it would also bring about labor peace. The new board would replace the organizational strike with a monitored election. It was also the vehicle that enabled the development of the capital-labor accord that would become a crucial pillar of postwar prosperity. The New Deal ostensibly came to an end in 1938 with the passage of the Fair Labor Standards Act. This act established the 40-hour standard work week and further solidified minimum wages [13].
While the New Deal was successful in establishing significant structural reforms and developing a faith in government that has not been seen since, it was never successful in eliminating the stubborn specter of unemployment. Moreover, New Deal policies were not directed toward economic growth. Contrary to public opinion, Roosevelt was no adherent of any single economic doctrine; he would try contradictory policies to see if they would work. He also believed in a balanced budget, so most spending programs were accompanied by tax increases to pay for them. As Keynes would later tell us, this reduced the “multiplier effect” and led to a very tepid recovery, that is, until the Second World War, when the focus of government policy would change significantly.
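The point about tax-financed spending can be made precise with the standard textbook Keynesian multipliers; the marginal propensity to consume (MPC) used below is purely illustrative:

\[ k_{\text{spending}} = \frac{1}{1 - MPC}, \qquad k_{\text{tax}} = \frac{-MPC}{1 - MPC}, \qquad k_{\text{balanced budget}} = k_{\text{spending}} + k_{\text{tax}} = 1 \]

With an MPC of 0.75, a deficit-financed $1 billion of new spending would ultimately raise income by about $4 billion, whereas the same spending matched by an equal tax increase raises income by only about $1 billion. Programs paid for with simultaneous tax increases therefore delivered far less stimulus than their headline size suggested.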
10.3 The Second World War and the End of the Depression
The United States entered the Second World War on December 8, 1941. However, the country had been providing food, armaments, and much-needed oil to embattled Britain for more than a year, as President Roosevelt officially declared the United States to be the “Arsenal of Democracy” in December of 1940. The country had been supplying war materiel to the Allies since 1939. Historian David Kennedy states the matter concisely: the war was won with Russian lives and American machines. “…the greatest single tangible asset the United States brought to the coalition in World War II was the productive capacity of its industry” [14]. While the war ended the depression, the conditions of the depression were also instrumental in mobilizing for the war. At the war’s onset, nearly 9 million workers were unemployed, and half of the nation’s productive capacity lay idle. By war’s end the impressive economic machine had produced nearly 300,000 aircraft, 5,777 merchant ships, 556 naval vessels, nearly 90,000 tanks, and over 600,000 jeeps. Of the 7.6 billion barrels of oil used during the war, 6 billion came from the United States. Given the tremendous finds of the late 1920s and early 1930s, the United States possessed an enormous surplus capacity of 1 million barrels per day out of a total production of 3.7 million barrels per day. By war’s end oil production had risen to 4.7 million barrels per day. Moreover, advances in refining gasoline rich in branched eight-carbon molecules (octane), which tolerate higher compression ratios, along with a guaranteed market for this expensive process, allowed petroleum engineers to produce 100-octane aviation gasoline. This allowed American planes to fly farther and maneuver more agilely, with up to 30% more speed and power than their German and Japanese rivals. The United States supplied more than 90% of the 100-octane aviation fuel. The development of long-distance warplanes allowed for escort cover on the all-important trans-Atlantic tanker routes, which previously had been decimated by German U-boat activity. In addition, the new long-distance bombers destroyed the German coal gasification (Fischer-Tropsch) plants. By war’s end German commanders were ordered to move troops and equipment with horses and mules, saving precious gasoline only for battle. Allied victories in the air so starved the Japanese war machine of fuel that Japan had to leave the world’s largest battleship in port for lack of fuel and attempted to fly its technologically advanced Mitsubishi fighters (the famed Japanese Zero) on turpentine [15].
The United States was to be much-changed by the war. It was the only belligerent nation in the history of the world to see its standard of living rise during wartime. Economic concentration would increase, labor union militancy would be tamed in support of the war effort, and women and African-Americans would enter the ranks of industrial production and clerical work in unprecedented numbers. In 1939 the unemployment rate stood at more than 17%. By 1944 it fell to 1.2%. Not only did the rate fall, but the economic prowess of the country absorbed an additional 3 million new labor force entrants along with more than 7 million workers who were previously excluded from active labor force participation, mainly women. Perhaps most importantly, from a perspective of economic policy, the agenda of the Roosevelt Administration turned from one of stability and social equity to one of more and more production. The Second World War saw the birth of growth economics.
Industrial concentration increased during the war, abetted by government policy. Two-thirds of all procurement contracts were given to 100 corporations. The thirty-three largest accounted for half of all government contracting. After-tax corporate profits rose from $6.4 billion in 1940 to $11 billion in 1944. At war’s end, the government turned over some $17 billion of publicly funded plant and equipment to private industry at bargain-basement prices. Two-thirds of it was purchased by 87 companies. Changes in production techniques accounted for a great deal of the increase in output. Everything from tanks to planes to Liberty Ships was constructed using the mechanized division of labor, which eliminated the need for overall skill and which had been used so successfully by the automobile industry in the 1920s. In an attempt to deal with rising prices occasioned by shortages of crucial inputs and a plethora of money, the Office of Price Administration (OPA) imposed comprehensive wage and price controls. Nonetheless, cumulative inflation during the war was 28%, and farm prices rose by 50%. Things had not been so good on the farm since the early days of the Republic. Organized labor received a reward for its slowly growing wages and no-strike pledge in the form of “maintenance of membership” provisions, guaranteeing that business accepted the closed shop, which required union membership as a condition of employment. Union dues were collected by the firms themselves through payroll deductions. Gasoline was rationed. The owner of the standard “A” coupon would receive somewhere between 1.5 and 4 gallons per week, depending upon location. The lucky few with an “X” coupon (e.g., doctors, clergy) still received unlimited supplies. Gasoline consumption fell by 30% from 1941 to 1943 [16].
To fund the war the Roosevelt Administration raised taxes. The income exemption at the bottom was lowered, bringing some 13 million new taxpayers into the system. They paid at work, as the innovation of tax withholding made its first appearance. The top marginal tax rate was increased to 94%, so that the wealthy paid the bulk of their highest increments of income in taxes. Despite the tax increases, the new revenues were able to cover only 45% of the war’s cost, as the United States devoted fully half of its gross national product to war spending. The rest was borrowed. Working people bought war bonds, sold to them by celebrities such as Hollywood actors (including Ronald Reagan) and popular musicians as a matter of patriotic duty. Commercial banks did their part, increasing their purchases of Treasury bonds from less than $1 billion in 1941 to more than $24 billion in 1945 [17].
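Note that 94% was a marginal rate, applied only to income above the top bracket threshold, so a wealthy taxpayer’s average rate was lower. A stylized two-bracket calculation (the threshold and lower rate here are hypothetical, not the actual wartime schedule) shows the distinction:

\[ T = 0.25 \times \$200{,}000 + 0.94 \times (\$1{,}000{,}000 - \$200{,}000) = \$802{,}000, \qquad \frac{T}{\$1{,}000{,}000} \approx 80\% \]

The average rate of roughly 80% is punishing, but it is not the headline 94%.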
The long and destructive war, started mostly because of a quest for oil and for land to grow food for a rising German population, ended in August of 1945. The sheer might of American productive capacity was too much for the beleaguered Axis powers to withstand. The Red Army had stopped the Nazi advance toward the Caspian oil fields. Rommel’s tanks ran short of gasoline, losing North Africa and opening up Italy for Allied invasion and victory. Japan’s objective of control over Indonesian oil supplies was never realized. Short on fuel to run its war machine and pummeled by incendiary and atomic bombs, Japan surrendered on August 14, 1945, thereby ending the war. Its great Admiral Isoroku Yamamoto, the designer of the very successful Pearl Harbor attack, had studied in the United States and understood that Japan could never win the war because of America’s enormous industrial potential.
10.3.1 The Postwar Economic and Social Order
The United States emerged from the war in an unprecedented position of economic, political, and military power. The nation was the only major industrial power left intact, as those of its traditional rivals were decimated in the war, and it supplied the majority of the world’s oil. European cities lay in ruins. The allies were deeply in debt, while the United States was the world’s greatest creditor. At war’s end the allies met in Bretton Woods, New Hampshire, to reconfigure the international monetary system. Unlike the aftermath of the last Great War, no pretense was made of returning to the gold standard, which had worked so poorly and helped create the conditions of poverty that helped precipitate the next war. The dollar was “as good as gold,” and tremendous advantages flowed toward the United States, consolidating its dominant position. Basic commodities were priced in dollars, and the country did not have to contend with international price fluctuations. Sufficient money was available for the expansion of American business into the devastated markets of Europe and Asia, and American exports soared, as did foreign direct investment. The dollar alone was denominated in gold, and the rest of the world’s currencies were pegged to the dollar. The US agreed, in return, to redeem foreign holdings of dollars in gold at $35 per ounce. To rebuild war-torn Europe, the International Monetary Conference created the International Bank for Reconstruction and Development, better known as the World Bank. It was to make large-scale loans for the rebuilding of infrastructure—roads, bridges, power plants, refineries, office buildings, and factories. To provide adequate liquidity, or readily available money, the International Monetary Fund was created. In addition the Fund was charged with buying and selling currencies in order to keep them in balance with the dollar at the agreed-upon rate. Since the use of protective tariffs and beggar-thy-neighbor policies had dried up world trade and helped transmit the depression internationally, the conference also created a General Agreement on Tariffs and Trade (GATT) to encourage free and open trade. The belief was that nations that trade with one another do not go to war. While the Conference proceeded on Keynesian lines, the plan of British delegate John Maynard Keynes for an international clearing union was not accepted. Keynes’ plan provided a framework whereby nations with large trade surpluses would redistribute money to nations with large trade deficits in order to keep trade balances within reasonable bounds. The United States was not only the world’s most powerful nation; it was also the world’s largest creditor. American representatives were in no mood to adopt Keynes’ plan, and they had the power to prevent its implementation. The GATT would have to suffice, although those present hoped for a more fully functional international trade organization; such a body, the World Trade Organization (WTO), was finally created in 1995. However, the United States did supplement the World Bank funds with its own initiative known as the Marshall Plan.
10.3.2 The Marshall Plan
The theoretical ideas behind the Marshall Plan, conceived by General George C. Marshall, were economic and political. Many political parties in Western nations such as Italy, West Germany, France, the Netherlands, and even Britain found socialism and social democracy appealing in the chaos that followed the war. In a sense the Marshall plan was an attempt to save capitalism in the industrialized world.
The United States provided almost $9 billion to the European economies to ward off the growth of indigenous socialist movements by strengthening the financial markets and productive capacity of European democracies. Most of that money (up to 80%) was used to purchase US exports. The framers of the Marshall Plan realized that no single market economy could thrive in a sea of economic stagnation. The Marshall Plan brought countless young scholars to be educated in “the American way of life.” It also ensured that American corporations would gain entry into formerly protected colonial markets. The United States also agreed to sacrifice some of its declining domestic industries to the greater good of free trade. At the time, this was highly favorable to the expansion of American business. US foreign direct investment increased from $11.8 billion in 1950 to $76 billion in 1970. The share of total profits from foreign operations also rose from 7% in the early 1950s to 21% by the early 1970s. At the same time, up to 46% of all deposits in major New York banks were derived from foreign sources [18].
The economic scene changed on the domestic front as well. With the international economy serving as a lucrative source of income and profits, large corporations began to share more with labor in order to achieve labor peace and create a domestic source of demand for their products. They could have both rising profits and rising wages. After the “Treaty of Detroit,” productivity bargaining became the pattern in large industry. Since wages increased with productivity, labor had a strong incentive to increase productivity. Since a modicum of democracy was written into work rules, and wages were supplemented with retirement pensions and health-care benefits, once-militant workers now had a strong stake in maintaining the system they had once struggled against. Consequently productivity (or output per worker) grew at 2.9% per year in the 1950s and 2.1% per year in the 1960s. In contrast it would fall to 0.3% in the stagnant 1970s and “recover” to a tepid 1% per year in the supposedly prosperous 1980s. Wages rose by an average of 2.9% per year in the 1950s and 2.1% per year in the 1960s, while gross national product grew at annual rates of 3.8% and 4.0% in the same periods. Corporate profits remained strong.
From the late 1940s, when the Marshall Plan was implemented, until the oil boycott of 1973, after-tax profits grew at 7% annually. During the epoch of stagflation in the 1970s, they fell to 5.5% [19]. The American public exited the war with the greatest accumulation of savings relative to income at any time in the country’s history. Wages rose and unemployment fell, prices were controlled, and consumption was held in check by tax increases, patriotism, and the fact that so many crucial materials were requisitioned for the war effort. The prominent economist John Maynard Keynes reasoned, in The General Theory of Employment, Interest, and Money, that the buildup of excess savings was a primary cause of the Great Depression. But such was not the case in the postwar United States. Deprived of consumption by 10 years of depression and 5 years of war, Americans were, as they had been in the 1920s, once again on the verge of becoming mass consumers. Economists called this “pent-up demand.”
Accrued savings plus the additional worker and business income translated into growing consumption expenditures, especially with regard to gasoline, automobiles, and housing. Capital formation grew as well, at 3.5% per year from war’s end until the mid-1960s and 4.3% per year from the mid-1960s until the beginning of the economic crisis in 1973. Horsepower in manufacturing grew from 49,893,000 in 1939 to 151,498,000 in 1962. Total consumption expenditures increased dramatically from $70.8 billion in 1940 to $191 billion in 1950 to $617.6 billion in 1970. Spending on gasoline and automobiles increased as well. In 1943, the year the last automobile was constructed for the duration of the war, only 100 cars were sold in the United States, but by 1950 more than 6.6 million cars received new tags. The pre-stagflation-era figure peaked in 1965, when more than 9 million cars left the showroom floor. One could tell something ominous was happening for the automobile-crazed population: by 1970 passenger car sales had declined to less than the 1950 level. A similar pattern existed in housing. In the depths of the depression, only 221,000 new dwellings (public and private) were started. In 1950 the nation’s building contractors and trade workers constructed close to 2 million homes. After that, housing starts exceeded 1 million new homes per year in every year, whether in prosperity or recession. However, by 1970 only 1.5 million new homes were started. The new suburban homeowners motored to their new dwellings, many of them made possible by Federal Housing Administration mortgages, or the even more attractive mortgages offered by the Veterans Administration (no money down, and the mortgages were guaranteed, not just insured). Spending on gasoline soared from $332,000 in the war years of rationing to nearly five and a half trillion dollars in 1970 [20]. Gasoline prices remained cheap, as the United States, which at the time still produced 52% of the world’s oil, was relatively unaffected by world events and price spikes such as the one caused by the Suez Crisis of 1956. In 1950 the price per barrel of oil was $2.77, or an inflation-adjusted price of $25.10. The real price of oil did not exceed this level until 1974, during the first oil crisis of the 1970s [21]. Thus the general progress of industrialization was accelerated by the incredibly cheap source of its fuel.
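The inflation adjustment behind the oil-price comparison follows the usual deflation formula; the price-level ratio below is simply what the chapter’s own two figures imply, not a value taken from a particular index:

\[ P_{\text{real}} = P_{\text{nominal}} \times \frac{\text{price level}_{\text{base year}}}{\text{price level}_{1950}}, \qquad \frac{\$25.10}{\$2.77} \approx 9.1 \]

In other words, the general price level in the base year used for the adjustment was roughly nine times its 1950 level, which is why $2.77 in 1950 corresponds to about $25 in later dollars.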
10.3.3 Emergence of the Importance of the Middle East
US oil companies strengthened their position on the all-important Arabian Peninsula, soon to become the world’s largest source of crude oil. The original concession was given to Standard of California in 1933 for an up-front payment of $175,000 and an additional $500,000 to be given to King Ibn Saud if oil were to be found. Standard of California soon brought Texaco into the consortium to form Aramco (the Arabian-American Oil Company). In 1933 Gulf Oil, headed by Hoover’s Treasury Secretary Andrew Mellon, received a 50% share of the oil newly found in Kuwait, a concession it would share with the Anglo-Iranian Oil Company (soon to become British Petroleum). After the war Aramco found that it had insufficient marketing operations to dispose of all the oil being pumped from the Saudi fields. It entered into a broader consortium with Standard Oil of New Jersey (soon to become Exxon) and the Standard Oil Company of New York (soon to become Mobil). Aramco overcame the stranglehold of Shell and Anglo-Iranian on marketing in Europe, and fears of overproduction were allayed. Gulf Oil, which was long on crude and short on markets, entered into a consortium with Shell, which was long on markets and short on crude. The basic conditions for expansion, increased production, and increased marketing capabilities were in place. The era of economic growth, based on a social structure of accumulation amenable to business ascendency and lots of cheap oil, was at hand [22]. The testimonial given by Nobel Laureate in Economics Paul Samuelson on the back cover of Daniel Yergin’s 1991 history The Prize put the matter succinctly: “Dan Yergin lucidly and with grace explores the dynamics of the global business that has helped shape the modern economy and fueled the economic growth on which we have come to depend” (emphasis added).
The immediate postwar period was also the era of decolonization. Throughout Africa and Asia nation after nation gained independence. Oil-producing nations moved to increase the share of Ricardian rents, or return to pure ownership, for their precious resource. The original concessions of the late nineteenth and early twentieth centuries gave the international oil companies ownership rights of the oil for initial payments and an agreed-upon royalty per barrel. Countries that granted concessions were interested in having the oil companies pump as much oil as possible as it enhanced their revenues. The oil companies, however, were ever mindful of the industry’s history of gluts and falling prices. The companies, therefore, had an incentive to limit production to what they could market, and the companies were in charge of production. The aforementioned oil deals resulted in a tight oligopoly which Italian industrialist, and head of Azienda Generali Italiana Petroli (AGIP), Enrico Mattei dubbed “The Seven Sisters” (Standard of New Jersey, Standard of New York, Standard of California, Gulf, Texaco, British Petroleum, and Royal Dutch Shell). Oligopolies, as you may recall, pursue a strategy of maximizing profits in the long term by means of limiting output, maintaining stable prices, and enhancing control over production, marketing, and distribution. Fearing nationalization of their Venezuelan concession, Standard Oil of New Jersey agreed to split the rents on a “fifty-fifty” basis. The deal was soon transmitted to the Middle Eastern producers, and the potential instability abated, albeit at higher costs to the oil companies. Royalties were to be paid at an official “posted price” that could differ from the market price. At the time of the deal, the posted price generally exceeded the market price, which was kept low by the tremendous surplus capacity of oil. This transmitted an even greater share of the rents to the producing countries. However, US oil companies were aided by their government, as cost increases were softened by a provision in the US tax code that allowed them to count the new rent payments as taxes and deduct them from their US obligations. Essentially the stability of the oil industry was paid for by US citizens. But oil was cheap and plentiful and incomes were rising. There was no tax rebellion in the United States. However, as we saw in ► Chap. 6, new forms of competition can destabilize an oligopoly structure. Independent oil companies wishing to break into Middle Eastern production such as Getty Oil in the United States and Enrico Mattei’s AGIP simply offered a greater share of the rents as the price of entry. The era of colonial subservience on the part of producing nations was beginning to end. Yet the acquiescence of oil companies and governments to the new rent sharing plan provided stability for years to come [23].
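A rough per-barrel sketch of how the fifty-fifty split interacted with the US tax provision; the dollar figures are hypothetical, chosen only to illustrate the mechanism described above:

\[ \text{host-government take} = 0.5 \times (\text{posted price} - \text{cost}) = 0.5 \times (\$2.00 - \$1.00) = \$0.50 \text{ per barrel} \]

Because that $0.50 was classified as a foreign income tax rather than as a royalty, the company could offset it against its US tax obligations, so the larger payment to the producing country reduced the US Treasury’s receipts rather than the company’s profits; this is the sense in which US citizens paid for the stability of the arrangement.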
10.3.4 The Age of Economic Growth
At the end of the war, all the pieces for a renewed era of prosperity were in place. American companies gained vast and profitable international markets. Few if any foreign corporations were in a position to compete effectively. The United States was the most powerful nation in the world, economically and militarily. The world monetary system was based on the dollar. American workers received wages that grew with productivity. As a result, productivity growth, much of it derived from the application of cheap oil [24], fueled increased profitability, and the increased wages, along with historically unprecedented savings, served as the basis for an explosion in consumption. The war showed more than anything that Keynesian economics, based on deficit spending and public funding of infrastructure, worked. In this era American Keynesians, now calling their approach “The New Economics,” began to transform the work of Keynes from a theory based on the problems of uncertainty and speculation into a clarion call for economic growth.
The year 1945 saw more workers out on strike than any year in American history as labor unions sought to recoup the perceived losses from wage controls and from signing a no-strike pledge during the war. Part of this dilemma was solved by the generalized acceptance of productivity bargaining and a limited capital-labor accord following the collective bargaining agreement between the United Auto Workers and General Motors, known as the “Treaty of Detroit.” Congress also restricted labor union rights by passing the Labor Management Relations Act (better known as the Taft-Hartley Act) over the veto of President Truman. In addition Congress moved, on the advice of the New Economists, to deal with the fears that large-scale unemployment would emerge once the stimulus of the war ended by passing the Employment Act of 1946. The measure started originally as Senator Robert Wagner’s “Full Employment Bill.” Wagner’s proposal gave every American the statutory right to a job. If they could not find one in the private sector, the government would create one for them, as it had during the depression under the auspices of the Works Progress Administration (WPA). The bill was to be paid for by a tax on employers. Not surprisingly, American business opposed the bill. Not only did employers dislike the taxes to be levied on them, but the general belief was that the absence of the power to dismiss workers would make labor discipline and productivity increases impossible. The eventual legislation was the result of political compromise. The act directed the government to pursue policies that would result in “reasonably” full employment, stable prices, and economic growth. Growth would be the mechanism that enabled the other two goals. Economists Samuel Bowles, David Gordon, and Thomas Weisskopf argued that this stalemated the traditional goals of the labor movement, those of full employment and income redistribution, and replaced them with the imperatives of economic growth [25]. The act also obligated the president to give an annual economic report to the Congress, as well as mandating the creation of a Council of Economic Advisers.
The movement toward a strategy of economic growth, which was not at all apparent in the work of Keynes, began in earnest with the work of the Council of Economic Advisers (CEA), especially after Leon Keyserling advanced to the chairmanship in 1949. The philosophy of secular stagnation was banished to the past as population grew with the baby boom, military technologies began to impact the civilian world, and new frontiers emerged as former farmland was converted to suburban homes. In Keyserling’s imagination, growth could achieve two goals beyond the attainment of reasonably full employment. If the economy grew, more could be given to those at the bottom of society’s income distribution without raising taxes and taking it from those at the top, which might adversely impact production and profits. The council firmly believed that only growth could “reduce to manageable proportions the ancient conflict between social equity and economic incentives which hung over the progress of enterprise in a dynamic economy” [26]. With a growing economic pie, the benefits could be shared more easily with more sectors of the economy. The other imperative for growth lay in the needs of the cold war. In 1949 the Soviets detonated an atomic bomb, and in 1950 President Truman directed the departments of State and Defense to devise a new set of priorities for the new realities of the world. The resulting document prepared by the National Security Council was NSC-68. Economic growth was at the heart of the strategy. Only through economic growth could the United States meet its domestic priorities of achieving reasonably full employment and stable prices yet, at the same time, fund its new military objective of arming “friendly” client states, which was the heart of the “Truman Doctrine.” However, Truman was somewhat tepid in his acceptance of economic growth, and the subsequent president, Dwight Eisenhower, was rather indifferent, preferring a strategy of price stability. The true era of the liberal growth agenda would come during the presidencies of John Kennedy and Lyndon Johnson.
The “New Economists” of the era believed they had conquered the business cycle such that recessions and depressions would be a thing of the past. By means of fiscal policy (taxing and spending) and monetary policy (money supply and interest rates), the New Economists could fine-tune the economy as if it were a well-oiled machine. If the economy performed sluggishly, the government could stimulate the economy and the increased spending would translate into an expansion of output and jobs. If prices rose to uncomfortable levels, inflation would be controlled by subtle downward adjustments in spending or the amount of money available to the economy. Moreover, inflation only occurred once full employment had been achieved and resulted from demand that was in excess of what the economy could produce at full employment. So any reduction in demand would decrease prices but not employment, at least in theory. In terms of policy, the liberal growth agenda rested upon three main pillars.
First, current production had to be balanced with existing productive capacity. This was accomplished by expanding demand via the Kennedy-Johnson tax cuts. Second, costs were kept in line with wage-price guideposts and the use of presidential authority to convince union leaders to moderate their wage demands. This was known as “jawboning.” Finally, growth was stimulated by encouraging investment. Policy instruments included accelerated depreciation and Kennedy’s famous investment tax credit. In the 1960s these policies resulted in impressive outcomes. Unemployment rates were less than 4% by 1966, real (or inflation-adjusted) gross national product grew at 5% per year, and the inflation rate remained low. The number of Americans in poverty fell from 22.4% of the population in 1960 to 14.7% in 1966. The boom was driven by an increase in investment, with inflation-adjusted gross private fixed investment rising from $270 billion in 1959 to $391 billion in 1966. The only stubborn inconsistency was the degree of inequality, with the US distribution of income being more than four times as unequal as that of Sweden and twice as unequal as that of the Soviet Union. But policies of growth were to take precedence over those of distribution. President Lyndon Johnson believed that redistribution policies were doomed to failure because they were counter to the Puritan work ethic, they would be a political disaster, and they were counter to the growth agenda. Consequently the direction of the war on poverty was toward productivity enhancement of the poor rather than toward income maintenance programs. Despite the lingering inequality, conditions for many of those traditionally left out of prosperity did begin to improve with growth. Before the war Black men earned only 41% of the incomes of White men, and Black women earned only 36% of the incomes of White women. By 1960 the figures had risen considerably, with Black men now earning 67% of the wages of White men and Black women earning 70% of the wages of White women. The postwar prosperity was built on a series of growth coalitions, with organized labor, the civil rights movement, and the women’s movement basing their strategies of reaching the top on economic expansion sufficient to include them. It was a time when a far greater proportion of the population believed that the wise actions of the government could benefit them than is commonplace in the early twenty-first century [27].
As long as the material conditions of prosperity (international hegemony, labor peace and rising productivity, cheap oil, and the domestic limitation of cutthroat competition) remained in place, expansionary monetary and fiscal policy could produce growth with stable prices. However, by the 1970s the very success of earlier action led to the demise of the postwar social structure of accumulation. Domestic oil production peaked, Europe and Japan caught up in terms of productivity, inflation gripped the nation in conjunction with rising unemployment, wages began to fall, and jobs began to leave as the economy became both globalized and more competitive.
10.3.5 Peak Oil and Stagflation
A great deal has already been written about the era of stagflation, some of which will be reviewed in this chapter. However, what tends to be missing in an economics literature that concentrates primarily on social forces and the internal limits to accumulation and growth is the advent of external biophysical limits. It was in the 1970s that the biophysical limits, in the form of peak oil, began to affect world economics and politics. As per M. King Hubbert’s prediction, domestic oil production peaked in 1970. Yet demand for oil to fuel transportation and heating continued to grow at about 3% per year. The era of rapid and sustained economic growth based upon cheap oil came to a temporary end, giving rise to a decade of malaise in the United States and elsewhere, characterized not only by economic stagnation and high unemployment but by rising prices as well. The oil shock did not come all at once, but in 1973 a series of events that had been building throughout the postwar period came to a crescendo in the first energy crisis to affect the United States seriously.
In the 1950s the world oil industry was destabilized by the same forces that had historically destabilized the oil industry in the United States: large new discoveries, glutted markets, and falling prices. Crude oil production in the non-socialist world rose from 8.7 million barrels per day in 1948 to 42 million barrels per day in 1972, mostly as a result of discoveries in the Persian Gulf area. Consequently, although US production increased, the US share of world production fell from 64% to 22% over the same time period. World proven reserves increased from 62 billion barrels to 534 billion barrels, excluding the socialist nations. In addition, greater quantities of Soviet oil entered the world market. By 1960 Soviet production was nearly 60% of that of the Middle East. This exceeded domestic demand, and the surplus oil entered the world market, putting additional downward pressure on market prices. In April 1959 huge new discoveries of high-quality, low-sulfur oil (light sweet crude) were made in Libya, and by 1965 Libya was the world’s sixth largest oil producer. The result was more cutthroat competition and falling prices. But oil companies had to pay royalties to producing nations on the basis of the official posted price, which was not falling with the increased supply. Consequently, their profit margins fell. In August of 1960 Standard of New Jersey unilaterally cut the posted price by 7%, enraging the oil-producing nations. Spurred on by the oil ministers of Saudi Arabia and Venezuela, the producers met with the intention of forming a body similar to the Texas Railroad Commission, one that would prorate shipments and allow them to control the decline in prices. In September, the Organization of the Petroleum Exporting Countries (OPEC) was born [28].
Political turmoil hit the Mideast in the late 1960s. In 1967 Israel invaded Egypt. Saudi Arabia withdrew its oil from the world market in an attempt to create a shortage and economic discord among Israel’s supporters in Europe and the United States. However, the strategy was ineffective and led primarily to declining revenues for the Saudis. There was still sufficient spare capacity in world oil production, and in the United States in particular, to make up the difference. That was soon to change. Moreover, in 1969 a coup led by Colonel Muammar al-Quaddafi overthrew King Idris. The new government demanded a large increase in the posted price and ordered oil companies to cut production. With the Suez Canal still out of service, the quick trip across the Mediterranean enhanced the power of Libya. Furthermore, the Trans-Arabian Pipeline (or Tapline) was ruptured by a bulldozer, making oil transportation even more difficult. This set the stage for competitive price increases among producing nations. Iran increased its price in 1970, followed by Venezuela and then Libya again. By the time negotiations came to an end, the posted price had increased by 90 cents per barrel. By 1970 the United States was essentially powerless to control the situation, as it no longer possessed the spare capacity to overcome events in the Middle East. US oil production peaked in 1970 at slightly more than 11 million barrels per day, never again to increase, despite increased drilling effort, new discoveries, and tremendous political pressure.
10.3.6 The Fateful Year of 1973
In September 1973, Colonel Quaddafi nationalized 51% of the remaining oil companies not expropriated in the original coup. He worried little about retaliation as the spare capacity to overcome his moves no longer existed. Europe was simply too thirsty for Libya’s light sweet crude. But this effort was dwarfed by the events of the following month. Still hurting from the humiliation of the 1967 defeat, the new Egyptian president, Anwar Sadat, in conjunction with Syria, launched a surprise attack on Israel during the Islamic holy month of Ramadan, on a day that was also Yom Kippur, the highest religious holiday in Israel. Sadat’s forces were on the verge of defeating the Israelis, who were running short of munitions and materiel. If they were not resupplied, they would lose the war militarily. The United States attempted to keep the resupply effort low key, but cold-war logic called for resupply, seeing as the Soviet Union had armed, and was resupplying, the Egyptian and Syrian forces. The plan was to land the huge American transports under the cover of darkness. However, adverse weather at the refueling station in the Azores delayed the operation, and the American planes landed in broad daylight. Israel regrouped and staved off defeat. But events were soon to grow in scale. Outraged by the American resupply efforts, Saudi Arabia called for a boycott of oil to the supporters of Israel, particularly the United States and the Netherlands. The Saudis called for production cuts of 5% per month for the entire world and a complete cutoff to the Americans and the Dutch. They threatened the partners in Aramco with loss of the concession if they sent as much as one drop of oil home. So, interestingly enough, it was the US oil companies themselves who carried out the mechanism of the boycott, not the Saudi state. As recently as 1967 the removal of oil from the world market had not worked as a political weapon for the Arab states, as sufficient spare capacity existed in the world market to overcome their efforts. This was no longer the case once the production of the world’s major swing producer, the United States, had peaked. The Saudis withdrew about 16 million barrels per day from the world oil supply, and other producers had insufficient spare capacity to make up the difference. Iran increased oil exports by some 600,000 barrels per day, but they, and some others, could not compensate for the Saudi withdrawal. All in all, the world’s oil supply fell by about 14%.
In the United States, gasoline prices quadrupled as the world price of oil increased with the success of the Saudi boycott. To begin with, the nation’s oil imports nearly doubled, from 3.2 million barrels per day when domestic production peaked in 1970 to 6.2 million barrels per day in 1973. Before the October war, the posted price was $5.40 per barrel. By December oil was selling for as much as $22 per barrel. Gas lines became a feature of American life, as motorists would wait for hours to buy gasoline, often only to find that the station had run out by the time they reached the pump. Calls for action to increase the supply abounded from all corners of the nation. However, the oil companies were no longer just American enterprises but multinational corporations who tried to apportion the hardship equally among their various markets. There would be no special treatment for any particular nation, especially the United States. Patriotism did not include the potential loss of the Saudi concession for the American partners in Aramco. The United States president, Richard Nixon, was essentially powerless to do much of anything, embroiled as he was in the loss of his own job owing to the revelations of the Watergate scandal. However, the effects of the oil price run-up wreaked havoc with his New Economic Policy, which was intended to break the specter of stagflation that had been emerging for years and was seriously abetted by the increase in energy prices [29].
10.3.7 The End of the Liberal Growth Agenda
The liberal growth agenda was based on the idea that the government should stimulate economic growth but also had the power to “fine-tune” the economy to manage unemployment and inflation. This ideology and set of policies fell apart in the 1970s. The 1973 energy crisis was not the only force that crippled the US economy. In fact the pillars of postwar prosperity were all crumbling. The rising power of the oil-producing nations was only one sign of the end of Pax Americana. There were many others. Europe and Japan, once war-torn nations in a state of shock, caught up to, and even surpassed, the United States in terms of industrial output. The terms of trade, or the ratio of export prices to import prices, rose from near parity to 1.3:1 in 1972, then plummeted to less than 1.1:1 by 1979. Despite the rising cost, imports increased from 4% of GNP in 1948 to 10% in 1972. The US share of total world exports, 32% in 1955, stood at only 18% in 1972. The postwar monetary system was based on fiat money, where the value of a nation’s currency depends upon its productive power and political stability. American productivity growth, which averaged 2.7% per annum in the 1950s, fell to 0.3% per year in the 1970s. As the rise in oil prices attests, the United States no longer bought in a buyer’s market and sold in a seller’s market. Moreover, the expansion of cold-war military spending plus the outflow of funds directed toward direct foreign investment worsened the US balance of payments situation. The Bretton Woods Accords mandated the United States to convert holdings of foreign currencies to gold at the price of $35 per ounce. By 1973 outstanding claims exceeded the American gold stocks. Richard Nixon “closed the gold window,” and the Bretton Woods Accords collapsed, thereby ending the dominant position of the dollar and all its benefits. Soon, the world was to open up to an unprecedented increase in global oligopolistic rivalry. The days of the insulated oligopoly position of US business were nearing their end, and the demise was reflected in the decline of corporate profits. After-tax corporate profits for the nonfinancial sector, which averaged 10% in 1965, dropped to less than 3% in 1973.
With productivity growth on the decline and international dominance eroding, American corporations could no longer “afford” the expensive mechanisms of labor peace erected in support of the capital-labor accord. Less access to energy was a primary factor in the decline of productivity growth, and the capital-labor accord depended upon growing productivity. An “open shop movement” began in housing construction, and by the 1980s myriad consulting firms specializing in “managing without unions” had also emerged. As a result, wages began to fall. Hourly income, which grew at 2.2% per year in the long expansion of 1948–1966, grew at only 1.5% per year between 1966 and the 1973 oil boycott. Unemployment rates, which had been as low as 3.6% in 1968, began to rise as well, reaching 5.6% by 1972. Pressures on the economy had been building since the long boom and took the form of classic Keynesian “demand-pull inflation.” With the economy at nearly full employment, rising military expenditures, coupled with increased consumption and investment, began to increase the claims on national output beyond the capacity to produce it. Federal budget deficits increased from $2.8 billion in 1970 to $23.4 billion in 1973. The Federal Reserve System accommodated the booming economy by keeping interest rates low and credit readily available. The government also reduced business taxes to keep the economy expanding and spur further investment. There was simply “too much money chasing too few goods,” and inflation began to rise, from 1.3% per year in 1964 to 3% in 1966. By 1965 President Lyndon Johnson’s advisors were recommending either a tax increase or a decrease in spending. Neither strategy fit with Johnson’s political or economic objectives. The Federal Reserve did briefly tighten credit, but the strategy was quickly abandoned after the “credit crunch” devastated industries that were dependent upon credit, such as automobiles and the construction trades in general and housebuilding in particular.
Upon his election Richard Nixon began to engineer a mild recession in order to decrease inflation by moving along the Phillips curve, and unemployment began to rise. However, the recession was short-lived. Having other problems to deal with (e.g., the troubles in international finance and an impending oil crisis), Nixon once again pursued an expansionary fiscal policy. Government deficits rose from $11.3 billion in 1971 to $23.6 billion in the quarter preceding the 1972 election. Unemployment declined and Nixon was reelected, proclaiming, to the chagrin of his conservative supporters, that he was now a Keynesian. However, the brief and mild recession did not wring the inflationary pressures from the economy. Prices continued to rise, but a new phenomenon was about to occur: rising prices in the context of high levels of unemployment. Upon succeeding Richard Nixon as president in 1974, Gerald Ford and his advisors pursued a contractionary policy under the guise of “Whip Inflation Now.” Spending was reduced and taxes were increased to produce a budget surplus, which exerted a downward force on aggregate demand. In addition, the oil price increases (commonly referred to as the OPEC tax) removed another $2.6 billion of purchasing power from the economy. Despite the reduction in spending, prices continued to rise, with inflation averaging 11% in 1974. The Federal Reserve tightened credit as well. The inflation rate abated slightly, to 9.2% in 1975, and then further to 7.8% by 1978. But unemployment increased to 7.7% in 1976 in response to the contractionary policies [30].
Traditional demand management practices were no longer working. If the government expanded the economy, inflation worsened without achieving full employment. If the government conducted contractionary policies, unemployment soared without eliminating inflation. Political economists concluded that the economy was suffering from an entirely different form of inflation known as cost-push inflation, in which rising business costs, rather than excess demand, were passed on to consumers in the form of higher prices. Oligopoly power remained strong, and business was able to pass on rising energy costs as higher prices. The last vestiges of the capital-labor accord took the form of cost-of-living adjustment (COLA) provisions in union contracts. When business passed on costs as higher prices, workers received an automatic increase in wages. In addition, oligopolies had long ago stopped relying upon the market to determine prices. Rather, they set a target profit rate and marked up costs in an attempt to achieve their targets. When the nation’s monetary authorities raised interest rates, the businesses were able to simply raise prices. Consequently, restrictive monetary policy and high interest rates exacerbated the inflationary spiral rather than reducing it [31]. The decade of the 1970s remained a stagnant one. The ineffectiveness of the policies of the new economists did not change with the election of a Democratic president, Jimmy Carter, in 1976. Unemployment fell to 6.1% by 1978, but this level at a cyclical peak was higher than the rates found in the troughs of recessions in the 1950s and 1960s. Carter attempted to deal with the problem of structurally embedded inflation by deregulating the airline industry, hoping to unleash the forces of competition. Yet inflation varied between 5.75% and 7.6% until 1978, levels that were themselves historically high for the postwar era. But things were to change rapidly, once again driven by oil prices, in 1979.
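The cost-push dynamic just described, target-markup pricing combined with cost-of-living escalators, can be illustrated with a minimal sketch. All of the numbers below (the initial unit cost, the 20% target markup, the COLA pass-through, and the size of the energy shock) are hypothetical and are chosen only to show how a one-time cost shock can propagate into several years of inflation even without any excess demand.

```python
# Minimal sketch of a cost-push (markup + COLA) inflation spiral.
# All figures are hypothetical illustrations, not historical data.

unit_cost = 100.0    # labor, energy, and material cost per unit of output
markup = 0.20        # firms target a 20% markup over unit cost
cola = 0.8           # share of price increases passed back into wages via COLA clauses
energy_shock = 0.10  # one-time 10% jump in input costs (e.g., oil)

price = unit_cost * (1 + markup)   # initial price level
unit_cost *= (1 + energy_shock)    # the oil shock raises costs

for year in range(1, 6):
    new_price = unit_cost * (1 + markup)      # firms restore their target markup
    inflation = new_price / price - 1
    unit_cost *= (1 + cola * inflation)       # COLA raises wages, and hence next year's costs
    price = new_price
    print(f"year {year}: price = {price:6.2f}, inflation = {inflation:5.1%}")
```

In this stylized setting, higher interest rates simply add another cost to be marked up, which is consistent with the argument above that restrictive monetary policy could worsen, rather than cure, cost-push inflation.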
10.3.8 The Fateful Year of 1979
Since 1953, when the Central Intelligence Agency helped engineer the overthrow of Prime Minister Mohammed Mossadeq, the Shah of Iran had engaged in a rapid modernization program. This modernization led to many of the economic problems associated with rapid growth: traffic-clogged streets, rising prices, urban pollution, and income inequality. By 1979 the Shah’s empire had crumbled. Initially a moderate social democratic form of government emerged, but it was quickly replaced by the charismatic cleric (or Ayatollah) Ruhollah Khomeini, who subsequently proclaimed the Islamic Republic of Iran. In the waning days of the Shah’s regime, Iranian oil workers struck, disabling production. Exports fell from 4.5 million barrels per day to less than 1 million. By Christmas 1978 oil exports stopped entirely. Oil prices increased by 150%, stimulating a panic which led to further speculative increases. Saudi Arabia and other OPEC nations increased their own production, but the shortage was real [32]. When the Saudis, worried that the increased production would damage their wells, reduced their output, prices spiked again. Iranian students seized the American Embassy in Tehran. The responsibility for a failed rescue attempt fell upon President Carter, who had tried to govern in the center while imposing an austerity plan. He told the American people that “life was not fair,” placed solar panels on the White House roof, turned down the thermostat, and urged his fellow citizens to do the same. Many were in no mood to listen. Earlier in the year, Carter had dealt with the partial core meltdown of the nuclear power plant at Three Mile Island, on the Susquehanna River on the outskirts of Harrisburg, Pennsylvania. America’s energy future was highly uncertain, and the economy was on the verge of plunging into another oil-price-driven recession. Carter was to be a one-term president, learning the hard lesson that, in times of austerity, the center moves to the right. The 1980 election pitted the incumbent Carter against the former actor and governor of California, Ronald Reagan. Reagan won in a landslide, promising the return of “Morning in America.” His economic plan was one designed to restore the lost American hegemony, control labor and energy costs, and boost corporate profits.
10.3.9 The Emergence of Supply-Side Economics
The supply-side agenda that emerged had several components:
- The use of a restrictive monetary policy to generate high interest rates and engineer another recession, largely in order to raise unemployment and discipline labor.
- The further intimidation or elimination of labor unions, in order to reduce wage-based inflation and enhance the ability of business to appropriate the gains of productivity.
- The deregulation of business, especially finance, in order to restore competition. This also entailed the elimination of environmental laws and worker safety laws to further reduce costs to business.
- An increase in the degree of inequality, redistributing income and wealth toward the wealthy and corporations by means of changes to the tax code.
- Remilitarization and the return to an aggressive, unilateral, anti-communist military policy.
Jimmy Carter had appointed a conservative central banker, Paul Volcker, as chairman of the Federal Reserve in an attempt to restrain inflation and prop up the value of the dollar, and Volcker moved to increase interest rates. In 1978 the rate that banks charge one another for overnight loans (called the Federal Funds Rate) stood at 7.9%. Other presidents had toyed with contractionary monetary policy (also known as tight money) but had abandoned the experiment when unemployment rates increased. But during the Reagan era, tight money was not abandoned. By 1981 the Federal Funds Rate rose to 16.4%, and rates for home mortgages rose to nearly 20%. Unemployment increased from 5.8% in 1979 to 9.5% in 1982. Failures per 10,000 businesses rose from 27.8 to 89.0 in the same time period. Economists Samuel Bowles, David Gordon, and Thomas Weisskopf termed this policy “the Monetarist Cold Bath.”
The Reagan Administration also continued the Carter-era experiments with deregulation, launching a public campaign to convince the nation’s citizens that regulations were outmoded and cumbersome. As we saw in the previous chapter, the older regulatory agencies, such as the Interstate Commerce Commission, were created at the behest of business to control cutthroat competition. During the Great Depression, the nation’s banks were regulated in an attempt to stem the financial crisis. The Reagan Administration turned to the dismantling of the newer regulatory agencies, such as the Environmental Protection Agency (EPA) and the Occupational Safety and Health Administration (OSHA), which it believed to be a primary cause of the increases in business costs. Their staffs were cut, and Reagan appointed James Watt, who believed that environmentalism was “dangerous radicalism,” to head the Department of the Interior. Spending on regulation declined by 7% from 1981 to 1983, and staffing was reduced by 14%. The high interest rates that resulted from the monetarist cold bath led to the problem of “financial disintermediation.” During the Great Depression, thrift institutions such as savings and loans had been allowed to pay higher interest rates on deposits than were commercial banks (Regulation Q). In return they were to loan money only for purchases of homes and apartment buildings. But the increase in interest rates made Regulation Q irrelevant, as deposits left the savings banks to find more lucrative returns in other financial markets. The Garn-St. Germain Depository Institutions Act of 1982 allowed savings banks to pay market interest rates and to invest their funds in more speculative housing projects. The system came to a crashing halt during the presidency of Reagan’s successor, George H.W. Bush, necessitating a multibillion-dollar bailout. In addition, the banking sector accounted for more mergers than any other industry by 1986.
As a candidate Ronald Reagan stood on the steps of the State Capitol in Concord, New Hampshire, and proclaimed that for America to get richer, the rich needed to get richer. This was to be accomplished by reducing the progressivity of the tax code, whereby the wealthy pay a proportionately larger share of their income in taxes. The effective corporate tax rate dropped from 54% in 1980 to 33% in 1986. The Economic Recovery Tax Act of 1981, better known as the Kemp-Roth tax cut, reduced the top marginal tax rate on the highest income earners from 70% to 50% and cut overall taxes by 23% over the course of 3 years. It also reduced estate taxes, allowed for accelerated depreciation, and reduced corporate taxes by some $150 billion. Government revenues fell by $200 billion. As a result the income distribution of the United States changed, becoming more skewed toward the top. The Gini coefficient, which measures overall income inequality, rose from 0.406 to 0.426 over the course of the Reagan Administration. The higher the coefficient, the greater is the degree of inequality. The share of income accruing to the top 1% of the population rose from 8.03% in 1981 to 13.17% in 1988. The share that went to the top one-hundredth of a percent rose from 0.65% to 1.99%. This was supposed to free up funds for investment in the newly deregulated economy. Unfortunately, the surge in investment was not forthcoming.
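Because Gini coefficients are cited repeatedly in the remainder of this chapter, a minimal sketch of how one is computed may be useful. The income figures below are hypothetical; the function implements the standard definition of the Gini coefficient as the mean absolute difference between all pairs of incomes divided by twice the mean income.

```python
# Minimal sketch: computing a Gini coefficient from individual incomes.
# The income figures are hypothetical, for illustration only.

def gini(incomes):
    """Return the Gini coefficient: 0 = perfect equality, 1 = maximal inequality."""
    xs = sorted(incomes)
    n = len(xs)
    mean = sum(xs) / n
    # Sorted-data identity equivalent to the mean-absolute-difference definition.
    weighted_sum = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return weighted_sum / (n * n * mean)

equal = [50_000] * 5
skewed = [15_000, 25_000, 40_000, 70_000, 350_000]
print(gini(equal))   # 0.0
print(gini(skewed))  # about 0.57, a highly unequal distribution
```

Published Gini figures such as the 0.406 and 0.426 quoted above are computed in the same spirit, though from the full distribution of household incomes rather than a five-person example.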
Finally, the last component of the supply-side agenda was an increase in military spending. This was hardly supply-side economics but rather old-fashioned demand expansion by means of increased government spending. Massachusetts Institute of Technology economist and Newsweek columnist Lester Thurow went as far as to call Reagan “the ultimate Keynesian.” Military spending as a percent of gross national product peaked at 9.2% during the height of the Vietnam War but had declined since then. But between 1979 and 1987, inflation-adjusted military spending increased by 57%. The Reagan Administration had clear cold-war objectives. They believed that the Soviet Union would bankrupt itself trying to keep up with American spending. They were correct. Increased military spending plus declining oil revenues were the primary economic causes of the collapse of the Soviet Union at the end of Reagan’s presidency. But the increases in military spending were also designed to increase US power in an increasingly militant world. The United States conducted military operations, for instance, in Grenada, which had been governed by a mildly socialist prime minister, Maurice Bishop. It was hoped that the increased military power would restore the days of Pax Americana and bring the benefits of a strong dollar and low raw materials prices back to the country [33]. Thus, while the economy expanded during the 1980s, it was not possible to attribute the expansion either to reducing the tax burden on the rich or to Keynesianism. The road to prosperity was instead paved with low oil prices.
Given these objectives, the macroeconomic performance of the Reagan years produced mixed results. Inflation rates fell, dropping into the 3–4% per year range by the mid-1980s from a high of 13.6% in 1980. Much is made of the effectiveness of the assault on labor unions in lowering the rate of wage growth, and of the decline in interest rates after the zenith of the cold bath policy. What is rarely mentioned, but rather important, is the role falling oil prices played in both controlling cost-push inflation and bringing about the demise of the Soviet Union. In the mid-1970s, additional sources of oil were discovered in Mexico and in the North Sea between the United Kingdom and Norway, all beyond the control of OPEC. The first oil from the North Sea flowed into England in 1975. In the period from 1972 to 1974, oil was discovered in the Bay of Campeche off Mexico’s eastern coast. The wells were prolific enough that Mexico met its own needs and began to export to the world market. The Trans-Alaska Pipeline, on hold since the late 1960s, was completed in 1977. With the completion of the pipeline, Alaskan oil production soared from a mere 200,000 barrels per day in 1976 to slightly more than 2 million barrels per day in 1988. Alaskan production has since fallen from that 1988 peak to only 700,000 barrels per day as of 2008. Further downward pressures on price came from the development of alternative energy sources, such as nuclear power in Europe, natural gas, and coal, and from the conservation that resulted from increased energy prices. By the mid-1980s, a spare capacity of 10 million barrels per day had emerged. These forces caused OPEC to reduce its prices. By 1985 the price of oil had fallen to $10 per barrel, reducing the pressure of cost-push inflation [34]. The Soviet Union, deprived of oil revenue, which accounted for a third of its income, could no longer maintain its military spending, especially after its defeat in Afghanistan. The end of the Soviet system was soon to follow.
As a result of decreased income support and an anti-union climate, the growth rate of worker compensation did fall, averaging only 0.6% per year from 1979 to 1990. Unfortunately, productivity (or output per worker hour) also grew slowly, at an annual rate of only 1% during the same time period. So while corporate profits rebounded from their 1981 trough, they were essentially no higher at the end of the Reagan Administration than they had been at the beginning of the stagnant 1970s. The real growth in profits would have to wait until the era of Bill Clinton [35]. Perhaps the most negative consequence of Reagan-era economic policy was the explosion of debt.
The Federal budget deficit increased dramatically over the course of the 1980s, driven by the reduction in tax revenues, the high interest rates associated with the monetarist cold bath, and the expansion of Federal spending. Between 1981, when the Kemp-Roth tax cut became law, and 1988, the last year of the Reagan Administration, tax revenues as a percentage of gross national product fell from 15.7% to 14%. Over the same period, military spending increased from 5.3% to 6.1% of GNP, while interest obligations rose from 2.3% to 3.2%. Federal spending on education and infrastructure declined. Given the increase in military and interest spending, the size of the Federal government did not decline, as per the neoliberal goal. Rather, it increased from 20% of GNP in 1979 to 22% in 1981, where it stayed until 1987. The deficit itself, which had ballooned to $221 billion in 1990 from a base of $79 billion in 1981, represented 2.5% of gross national product [36]. Furthermore, the push toward financial deregulation allowed banks and other financial institutions to increase their own indebtedness, although the structural changes of the Reagan Administration would give way to a much greater financial explosion by the early twenty-first century. The relaxation of the antitrust laws, falling inflation, and declining interest rates, once the cold bath shock treatment was completed, provided the incentives for another merger movement in the 1980s. From 1970 to 1977, merger activity averaged $16 billion per year. The value of merger activity increased to $70 billion per year in 1981–1983 and $177 billion from 1985 to 1987. Eleven of the top twenty-five mergers involved oil companies as either buyer or seller. In fact, the top five mergers of the decade were oil company mergers, the largest being the 1984 acquisition of Gulf Oil by Standard of California for $13.4 billion and Texaco’s purchase of Getty Oil for $10.1 billion in the same year. Other mergers were concentrated in the food products industry, retail trade, and insurance. Cross-border mergers increased in volume and size, as exemplified by the acquisition of Texasgulf, Inc. by the French oil giant Elf [37]. The economy in general, and the oil industry in particular, emerged from the 1980s more concentrated, better able to withstand the competitive pressures of falling prices without unduly sacrificing current and future profitability.
Reagan’s successor, George H.W. Bush, attempted to carry on the same policies, especially in the area of keeping taxes low. However, deficits kept mounting, and the new president was constrained further by the Gramm-Rudman-Hollings Balanced Budget and Emergency Deficit Control Act of 1985. The Act imposed binding constraints upon Federal spending and limited the creation of further deficits. Bush campaigned on the promise of no new taxes, but the military spending needed to pursue a war in oil-rich Iraq threatened to expand the deficit beyond the Gramm-Rudman-Hollings limits. Reluctantly Bush agreed to raise taxes, and consequently the conservative wing of the Republican Party abandoned him. This set the stage not only for the election of Democrat Bill Clinton but also for the resurgence of the conservative influence upon the Republican Party. Clinton was destined to carry out the legacy of the Reagan Revolution. Running as a liberal, Clinton campaigned on the basis of renewing economic growth by means of supply-side measures to increase labor productivity, primary among them public investments in education and infrastructure. However, there was a competing agenda among the Clinton advisors: to reduce the size of the budget deficit in order to protect the integrity of the nation’s financial markets, increasingly susceptible to international demands and pressures. The deficit hawks argued that large deficits limit long-term growth, appropriate scarce international capital, and result in rising interest rates and a greater portion of the Federal budget being devoted to interest payments. The deficit hawks won the day. No large-scale fiscal stimulus by means of public investment would be forthcoming. Although Clinton’s campaign pamphlet was entitled Putting People First, his policies put the needs of the bond markets first. The growth path was to be fine-tuned by monetary policy alone, and the Federal Reserve pursued an essentially “accommodative” expansionary “easy money” policy.
In Clinton’s second term, the deficits turned to budget surpluses, rising from $69.3 billion in 1998 to $236.2 billion in 2000. In 1999 Clinton also signed the Financial Services Modernization Act, which repealed the Glass-Steagall Act of 1933. Commercial banking was no longer separated from investment banking. The act provided the impetus for yet another merger movement, this time involving the consolidation of financial services. Citibank merged with Travelers Insurance to form Citigroup. Wells Fargo merged with Norwest to provide myriad financial services, and American Express expanded its product line into nearly every aspect of money management. The bill also ensured that hedge funds would remain unregulated forever! As a result of the deregulation of banks and financial services, debt began to expand. Wage growth remained low, averaging only 0.5% per year throughout the 1990s. Moreover, the economy was expanding on the technological changes brought by computerization and the early days of the internet, commonly referred to as the ► dot.com bubble. Most technology stocks were traded on the National Association of Securities Dealers Automated Quotation Index (or NASDAQ). In 1994 the NASDAQ index stood below 1000. By 2000 it had climbed to over 5000.
However, the expansion of debt begun in the Reagan years continued to climb. When wages and incomes of the vast majority of the population are growing slowly, the only way to increase spending is to increase access to credit. From 1990 to 2000, gross domestic product increased from $5.8 trillion to $9.8 trillion. However, outstanding debt increased from $13.5 trillion to $26.3 trillion. Household debt nearly doubled during the period, from $3.6 trillion to $7 trillion, but financial firm debt more than tripled, from $2.6 trillion to $8.1 trillion. The economy seemed to be running on financial speculation fueled by easy access to credit, as well as by relatively cheap oil. Oil prices remained both stable and relatively cheap throughout Clinton’s years, allowing revenues to be directed toward deficit reduction rather than toward rising oil costs. Oil prices were less than $20 per barrel when Clinton took office and were at about the $30 per barrel level when he left. Oil production also remained high, ranging between 25 and 30 million barrels per day. Clinton’s years saw neither spikes in gasoline prices nor energy crises.
Clinton also pledged to end “welfare as we know it” and did so by signing the Personal Responsibility and Work Opportunity Reconciliation Act of 1996. The act essentially ended welfare (or Aid to Families with Dependent Children) as an entitlement program. AFDC was replaced by Temporary Assistance for Needy Families (TANF), and recipients needed to work in order to earn their checks. The new law was supposed to restore America’s work ethic and was also helpful in deficit reduction. Average monthly welfare payments (AFDC or TANF), adjusted for inflation in 2006 dollars, fell from $238 per month in 1977 to $154 in 2000. Not surprisingly, with increased financial mergers, a technology bubble in the stock market, rising access to debt, reduced welfare benefits, and slowly growing wages, the degree of inequality increased as well. The Gini index rose from 0.454 to 0.466 over the course of the Clinton Administration, and it reached 0.479 by 2015. This meant that every year of the Clinton Administration exhibited greater income inequality than any year of the Reagan Administration. The share of aggregate income accruing to the top 5% increased from 21% to 22.4% over the same time period, while the share going to the top 1/100 of a percent rose as well, from 1.74% when Clinton began his term to 2.4% when he left office. Increasing income inequality has become a trend. Inequality was higher in every year of the George W. Bush Administration than in any year of the Clinton Administration. In addition, there was more inequality in most years of the Obama Administration than in the Bush years.
A recession began shortly after Clinton left office, in 2001, driven by the buildup of excess capacity in the computer industry and the subsequent fall in NASDAQ values known as the ► dot.com bust. During the first term of George W. Bush, who narrowly won a contested election, the unemployment rate increased from 4% in 2000 to 6% in 2003. Following the attacks on the World Trade Center and the Pentagon in September of 2001, the Bush Administration pursued wars in Afghanistan and Iraq. Oil prices rose from approximately $30 per barrel in 2003 to nearly $150 per barrel in 2008, driven largely by the dislocations of war in oil-producing countries.
10.3.10 Warning Signs in the Early Twenty-First Century
The housing sector was particularly hard-hit, just as it was in the Great Depression. Housing values collapsed by as much as 40% in particularly speculative markets such as Las Vegas, Miami, and the major cities of Southern California. Unemployment in the building trades rose to 20%. The third phase of the crisis began in 2010 and can be found in a fiscal crisis among the states. Most states have a balanced budget provision in their constitutions, and the fall in revenue from lost housing values and taxes upon financial assets created the need to cut costs. Layoffs of public employees soared, and some states in the Great Lakes region, such as Wisconsin, Ohio, and Michigan, pursued policies of removing the rights to collective bargaining for public employees. More such attempts to reduce costs to state and local governments and instill the “flexibility” of having employees pay for the effects of the economic downturn are likely to occur in the future.
10.3.11 The Housing Bubble, Speculative Finance, and the Explosion of Debt
The economic downturn of 2008, the most severe economic recession since the Great Depression, began much as did the Great Depression of the 1930s: with a major hurricane and a collapse of speculative housing. While the events of the late 1920s were centered in Florida, the antecedents of the 2008 crisis were truly global. Throughout the latter years of the twentieth century, a global pool of money, or a glut of savings, was building from sources as diverse as sovereign wealth funds based on petroleum profits, Chinese trade surpluses, and individual accumulations in high-saving nations. By the middle of the first decade of the twenty-first century, this fund had grown to the order of $70 trillion. Traditionally these funds had been invested in safe assets such as US Treasury securities. However, by 2004 the Federal Reserve Board of the United States had driven interest rates down to the 1% range by purchasing Treasury securities from banks, thereby releasing more money into the system following the collapse of the high-tech bubble. Investors were forced to look elsewhere for better rates of return. One location they found was the housing market in the United States, as well as other housing markets. Prices were rising, and the structures created in the Great Depression, such as insured long-term, amortized mortgages and a secondary market where mortgages could be bundled and sold as short-term securities, made the market appear safe from risk. Rates on mortgages of 5–7% were far more appealing than were 1% returns on Treasury bonds. The demand from global investors was such that, once all the potential buyers who met the rigorous traditional standards for income, assets, and employment stability were exhausted, those standards were simply lowered to find more customers and absorb the rising flow of surplus savings. By 2008 mortgage brokers were no longer asking for documentation of income, employment, or other assets. The famous NINJA loan, or liar’s loan, was born: no income, no job or assets [38]. By 2006 fully 44% of mortgage loans required no documentation. In addition, the average loan-to-value ratio increased to 89% by 2006, as the number of no-down-payment (100% financing) mortgages climbed from 2% in 2001 to 32% in 2006 [40].
The process was abetted by the general climate of financial deregulation that had characterized the US economy since the 1980s. The secondary market, created during the depression at the insistence of banks, allowed for the pooling of mortgages into mortgage-backed securities. As long as the potential for default was low, because the standards for qualification were high, these securities were fairly risk free, as they had been historically. However, the emerging, and unregulated, sectors of the financial security industry created even more exotic instruments by which to finance housing. Groups of mortgage-backed securities were themselves bundled into collateralized debt obligations (CDOs), and these were further divided into slices (or, to use the French word, tranches). Rating agencies, acting on historical data, declared these CDOs to be investment grade (AAA). On the basis of the investment grade rating, mortgage security investors were able to purchase insurance policies against possible default, known as credit default swaps. In the deregulated climate of 2007–2008, one did not even need to own an asset in order to purchase an insurance policy on it. The existence of global surplus savings and a lightly regulated climate served as an incentive for mortgage brokers, who would sell the loan immediately, to offer more mortgages to more people who simply did not have the income to repay the loans. But the risk would be managed further up the chain, by regional banks and by the money center banks in the world’s financial districts. The nation’s central bankers (e.g., Alan Greenspan, Ben Bernanke, Timothy Geithner) assured the public that the new financial innovations would reduce systemic risk. However, a problem was brewing beneath the surface, the problem of unsustainable levels of debt.
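The tranche structure described above can be made concrete with a small sketch. The pool size, tranche boundaries, and loss rates below are hypothetical; the point is only to show how losses are absorbed in order of seniority, and why the senior tranches looked safe so long as defaults stayed near their historical levels.

```python
# Minimal sketch of a CDO loss "waterfall": losses hit the junior (equity) tranche
# first, then the mezzanine, and only then the senior (AAA-rated) tranche.
# Pool size, tranche sizes, and loss rates are hypothetical.

pool = 100.0  # $100 million of pooled mortgage-backed securities
tranches = [("equity (junior)", 5.0), ("mezzanine", 15.0), ("senior (AAA)", 80.0)]

def tranche_losses(total_loss, tranches):
    """Allocate pool losses to tranches from most junior to most senior."""
    remaining = total_loss
    results = []
    for name, size in tranches:
        hit = min(size, remaining)
        remaining -= hit
        results.append((name, hit, hit / size))
    return results

for loss_rate in (0.03, 0.10, 0.25):  # 3%, 10%, and 25% losses on the pool
    print(f"\npool loss of {loss_rate:.0%}:")
    for name, hit, frac in tranche_losses(pool * loss_rate, tranches):
        print(f"  {name:16s} loses {hit:5.1f} ({frac:.0%} of tranche)")
```

When housing prices fell everywhere at once, defaults turned out to be far more correlated than the historical data had suggested, and losses reached even the supposedly safe senior tranches, undermining the AAA ratings and the credit default swaps written against them.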
The purchase of everything from innovative financial instruments to bundles of loans was highly leveraged, that is, purchased with borrowed money, often at a ratio of 20:1. The system remained solvent as long as housing prices kept rising. Consumers could treat their houses as automatic teller machines. In 2004 and 2005, Americans withdrew $800 billion in home equity each year. This allowed for the purchases of more home improvement products, automobiles, and exotic vacations, as well as the mundane purchases of daily life. More than 7000 Walmarts and 30,000 McDonald’s restaurants were constructed to meet the growing demand. New television shows such as “Flip This House” advised potential real estate speculators as to which improvements would result in easy financial profit. Wharton School senior strategic planner James Quinn estimated that without these withdrawals, economic growth would have been no more than 1% annually between 2001 and 2007. Homebuilders followed suit, constructing 8.5 million homes in 2005, about 3.5 million more than could be justified by historical trends [41]. However, by 2006 home prices began to fall. This touched off the downward cascade typical of a positive feedback loop. As homeowners found themselves “underwater,” or owing more on their mortgage than the house was worth, mortgage defaults began to increase. Cable News Network estimated that by the last quarter of 2010, 27% of all homeowners were in this situation. As defaults escalated, increasing 23% from 2008 to 2009, the bundled securities that were constructed from these pools of seemingly safe, investment grade, securities began to lose value. Since so many of them were highly leveraged, the falling prices of homes and bundles of mortgages created a panic. Since the financial instruments were so complex, even banks could not figure out what their portfolios were worth. Consequently, the mortgage crisis could not be isolated in the riskier “subprime” market but spread to the entire economy. Major investment banks were crippled as well. Two Bear Stearns hedge funds collapsed, precipitating the general financial panic, Lehman Brothers went bankrupt, and Merrill Lynch was absorbed by Bank of America, under considerable pressure from the Treasury.
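A short sketch of the leverage arithmetic may help here. The 20:1 ratio comes from the text; the asset values are hypothetical. It shows why relatively modest declines in the prices of homes and mortgage securities were enough to wipe out the equity of highly leveraged holders and set off the panic described above.

```python
# Leverage arithmetic at the 20:1 ratio mentioned in the text (values hypothetical).

assets = 100.0              # $100 million of mortgage-related securities
leverage = 20               # 20:1 -- every $1 of equity supports $20 of assets
equity = assets / leverage  # 5.0
debt = assets - equity      # 95.0

for decline in (0.02, 0.05, 0.10):
    new_assets = assets * (1 - decline)
    new_equity = new_assets - debt
    status = "solvent" if new_equity > 0 else "equity wiped out"
    print(f"{decline:.0%} price decline -> equity = {new_equity:5.1f} ({status})")
```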
Domestic debt and GDP, with debt by sector (trillions of dollars)

Year | Gross domestic product | Total debt | Household | Financial firm | Non-fin’l business | Gov’t (local, state, and federal) |
---|---|---|---|---|---|---|
1976 | 1.8 | 2.5 | 0.8 | 0.3 | 0.9 | 0.3 |
1980 | 2.8 | 3.5 | 1.4 | 0.6 | 1.5 | 0.4 |
1985 | 4.2 | 7.3 | 2.3 | 1.3 | 2.6 | 0.7 |
1990 | 5.8 | 11.2 | 3.6 | 2.7 | 3.8 | 1.0 |
1995 | 7.4 | 14.3 | 4.9 | 4.4 | 4.3 | 1.0 |
2000 | 9.95 | 19.1 | 7.2 | 8.7 | 6.6 | 1.2 |
2005 | 12.6 | 28.2 | 11.9 | 13.7 | 8.2 | 2.6 |
2010 | 14.7 | 37.1 | 13.5 | 15.3 | 10 | 3.0 |
2015 | 17.9 | 45.2 | 14.2 | 15.2 | 12.8 | 3.0 |
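One way to read the table above is to compute the ratio of total outstanding debt to GDP in each year, a rough measure of how heavily the economy as a whole was leveraged. A minimal sketch of that calculation, using the figures from the table, follows.

```python
# Debt-to-GDP ratios computed from the table above (figures in trillions of dollars).
data = {  # year: (GDP, total debt)
    1976: (1.8, 2.5),   1980: (2.8, 3.5),   1985: (4.2, 7.3),
    1990: (5.8, 11.2),  1995: (7.4, 14.3),  2000: (9.95, 19.1),
    2005: (12.6, 28.2), 2010: (14.7, 37.1), 2015: (17.9, 45.2),
}

for year, (gdp, debt) in sorted(data.items()):
    print(f"{year}: total debt = {debt / gdp:.2f} x GDP")
```

By this measure, total debt rose from roughly 1.4 times GDP in 1976 to about 2.5 times GDP by 2010, which is the quantitative counterpart of the claim that the economy was increasingly running on credit.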
Banks had long been seen as recipients of deposits and lenders of money, as safe and conservative in their outlook. But in the new world of deregulated finance, the financial service industry became the largest borrower in the economy. It was this leverage that transformed the financial structure and made it vulnerable to disruptions. John Maynard Keynes made the assertion that “speculation does no harm as a small portion of enterprise. However, once the amount of speculation overtakes that of enterprise, the danger of this position becomes serious” [43].
10.3.12 The Deficit and the National Debt
The Federal government was not immune from the increase in debt. Budget deficits climbed from $3 billion annually in 1970 to $1.414 trillion in 2009. The primary drivers of these increased deficits were a reduction in taxes, especially at the top of the income distribution, and the expansion of government spending, primarily for the military and for entitlement programs such as Social Security, Medicare, and Medicaid. In 1970, the year of peak domestic oil production and the beginning of the era of stagflation, the government brought in $192.8 billion in receipts and spent $195.6 billion, for a deficit of $2.8 billion. In the last year of the Reagan Administration, whose economic policy was built upon increased military spending and tax cuts, the annual deficit soared to a historically unprecedented $155.2 billion. The last 2 years of the Clinton Administration actually saw modest budget surpluses, as the growth rate of military spending declined and tax receipts increased with the high-tech boom. Deficits began to climb again with the second Bush Administration, rising to $458.6 billion in 2008. By 2009 the annual difference between receipts and outlays was $1.4 trillion. Income tax revenue dropped from $1.635 trillion in 2007 to $898 billion in 2010 as the Bush Administration reduced taxes in an unsuccessful attempt to stimulate the economy. Military spending, which stood at $294 billion per year when the Bush Administration took office, rose to $616.8 billion in 2008. It continued to climb during the Obama years, reaching the level of $693.6 billion in 2010, and the Office of Management and Budget estimated that 2011 military spending would exceed $768 billion. As of 2017, Congress was prepared to fund military spending to a greater degree than even the Pentagon had asked for. In 1970, at the height of the Vietnam War, military expenditures were 8.1% of gross domestic product, while total government spending was 19.3%. By the end of the Clinton Administration, military expenditures had fallen to 3%, while total spending remained about the same, at 18.5%. By 2010 military spending stood at 4.8% of GDP, and total spending rose to nearly 24%. Mandatory expenditures, such as those on health care (Medicare for the aged and Medicaid for the poor), along with Social Security and other income support programs (unemployment insurance, supplemental security income for the disabled, Food Stamps, etc.), increased from $60.9 billion, or 6% of GDP, in 1970 to more than $2 trillion, or 14.7%, in 2009 [44]. Despite the increase in mandatory expenditures for entitlement programs, the income distribution grew more skewed, largely as a result of a stock market boom and subsequent bailouts, tax cuts at the top of the income distribution, and stagnant wages at the bottom. In 1980, at the beginning of the neoliberal economic strategy, the Gini coefficient was 0.403 and the top 5% of the income distribution claimed 16.5% of aggregate income. The top 1% received 8% of income and the top 0.01% received 0.65%. By the end of the second Bush Administration, the Gini coefficient had increased to 0.466, indicating a greater degree of overall inequality; the top 5% claimed 21.7% of aggregate income, while the share of the top 0.01%, which amounts to about 14,000 families out of a population of 300 million, rose to 3.34% [45].
In the 2010 congressional elections, a large enough segment of the population expressed such concern that the Democratic majority was unseated by conservative activists whose top agenda items were the reduction of budget deficits and a return to the glory days of the 1980s neoliberal agenda. In the first edition of this book, we asked whether we were reaching peak debt as well as peak oil. At the time, the credit system had largely frozen and few loans were granted. Not surprisingly, total debt outstanding fell. We wondered whether this condition would be permanent. A quick glance at ◘ Fig. 10.2 shows that it was not: debt began to grow again after 2010 and reached historic highs by September 2017, once again showing the role of debt as a driver of economic growth. The political will to expand debt further within the United States is clearly shrinking, and the willingness of other economies and investors to purchase Treasury securities is also in decline. But what are the potential effects of declining government participation in the economy? If one believes that the market economy is resilient and self-regulating, then a decrease in government spending will simply free money for spending in the private sector, and the economy will prosper. If, on the other hand, one believes the explosion of financial speculation and debt was due to investors seeking financial profits in an otherwise stagnant real economy, as indicated by declining rates of industrial capacity utilization, then the reduction of government spending, coupled with the rise of inequality, might cripple the economy by reducing its overall level of demand. We believe the second scenario is far more likely. The growth of inequality may well be an important factor in the slowing of growth over the past few decades: theoretically, if the rate of return on capital exceeds the rate of economic growth, income concentrates at higher income levels [46]. The age of peak oil may well be the age of degrowth as well. It is likely to turn into an age of austerity.
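The logic behind the claim in [46] can be sketched with a deliberately simplified identity (our illustration, under the strong assumption that returns to capital are fully reinvested; it is not a formula taken from that source). If aggregate wealth W grows at the rate of return r while total income Y grows at the rate g, then the wealth-to-income ratio grows at the difference between the two:

\[
\frac{d}{dt}\ln\!\left(\frac{W}{Y}\right) \;=\; r - g .
\]

Whenever r > g, wealth accumulated in the past outpaces the growth of current income, and because wealth is held disproportionately by those at the top, income concentrates at higher income levels.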
10.4 Conclusion
The world economy collapsed into depression in the 1930s. Governments faced few ways out: fascism, communism, or social democracy. John Maynard Keynes wrote his classic text, The General Theory of Employment, Interest, and Money, as a guidebook for “saving capitalism from itself” in order to avoid the other outcomes, which he detested. In the United States, the program took the form of “the New Deal.” While Franklin Roosevelt was able to restore confidence among a shattered population and put millions back to work, the New Deal did not engineer an economic recovery. It took the Second World War to do that! The United States exited the war in a clear position of economic and military power. Pent-up consumer demand, low gasoline prices, and a very productive factory system ensured that the economic surplus could be absorbed. Its corporations expanded into former colonies, the terms of trade were favorable, and the prospects were bright enough that corporations could share the gains from rising productivity with workers, ensuring adequate income to buy their products while increasing profits at the same time. Oil was cheap and plentiful, and the American consumer could use the rising quantities of cheap and available oil to live the American Dream of a house in the suburbs, good schools, a steady job, and a cornucopia of consumer goods.
Most explanations of the postwar social order focus on the internal dynamics of the world economic system: its overall demand, technology, and the distribution of income. How do these factors affect the aspirations of the world’s population for a decent income and a meaningful life? But we contend that the world economic system is limited not only by its internal dynamics but also by the external biophysical conditions posed by the availability of energy and the consequences of using it.
When the first edition of this book went to press, the Middle East was afire with democracy movements. Unfortunately, the hope of the Arab Spring turned into a nightmare of dictatorship and perpetual war. Oil prices, which had climbed to nearly $150 per barrel in 2008, have fallen to around $50 today. We contend that the fall in oil prices was a primary reason both for the economic recovery in the United States and for the impoverishment of the oil-producing regions of the Middle East and North Africa. On March 11, 2011, an earthquake of magnitude 9.0, the most powerful in Japan’s recorded history, struck the northern coast of Japan. The nation was devastated by the quake and the subsequent tsunami. The loss of electricity shut down the cooling systems of the Fukushima nuclear reactor complex. Decay heat from the fuel rods boiled away the cooling water, resulting in a partial core meltdown. The heat also split water into hydrogen and oxygen, leading to the buildup of flammable hydrogen gas. On March 14, 2011, the second of the reactors exploded, and the viability of the third was in question. If the Japanese abandon their commitment to nuclear power and switch to oil or natural gas, what will be the effects on world markets? Can an economy that has been stagnant for two decades recover? If the Japanese heed the advice of their American advisors and increase consumption in order to grow their way out of economic disaster, what will happen to the world’s fossil fuel resources and the quality of its atmosphere?
The Obama years were blessed with low oil prices and a commitment to stimulative policy. The Federal Reserve kept short-term interest rates close to zero for the entire period, and government spending remained high. According to the Bureau of Economic Analysis, government spending stood at nearly $5.6 trillion at the beginning of 2012, increasing to $6.256 trillion by the end of 2016. Given the extension of the Bush-era tax cuts, receipts fell short of expenditures, thereby increasing the Federal budget deficit. Contrary to popular opinion, the deficit did not increase consistently throughout the Obama years: it was smaller in 2016 (−$873 billion) than in 2012 (−$1.3 trillion). The commitment to a neoliberal policy of war and free trade did not end with the inauguration of a Democratic administration. The US continued to maintain a military presence in the oil-producing countries of the Middle East and Central Asia. A health-care reform bill (the Affordable Care Act) was passed without a single Republican vote in the first year of the administration, based on the plan that Obama’s rival, Mitt Romney, had implemented while serving as the Republican governor of Massachusetts. Unemployment fell from nearly 10% of the labor force at the beginning of President Obama’s term to less than 5% at the end, while inflation remained negligible despite the monetary and fiscal stimulus. Unfortunately, the good news was not spread evenly across the population. Job losses across the nation’s heartland remained above average, as the jobs that were lost in the 1970s never returned.
As it turns out, the resentment was long-lived and multigenerational. In November 2016, Republican candidate Donald Trump won a surprise victory on a platform of “Making America Great Again,” based largely on restricting immigration, subsidizing the energy industry, rescinding regulations (especially environmental regulations), and encouraging the expansion of fossil fuel use. The early indications are that the integrity of the environment will be a very low priority for the Trump administration. The president appointed Scott Pruitt, who made his reputation as Attorney General of Oklahoma by suing the EPA for “overreach” in regulating greenhouse gases, to head the Environmental Protection Agency. The former head of ExxonMobil, Rex Tillerson, is now the Secretary of State. Furthermore, the US delegation to the most recent Conference of the Parties, convened to implement the Paris Climate Accords, is headed by coal company executives. Whether resistance to this program leads to mobilization and greater attention to Earth’s biophysical systems remains to be seen.
At some point the production of oil on a world basis will peak and begin to decline. Problems of instability and rising prices will cease to be merely cyclical and political and will become secular and geological. What does that portend for the economic system? Will peak oil exacerbate the inherently stagnationist tendencies of the monopolized economy, as Baran and Sweezy argue? How can we generate employment and reduce poverty, advocate democracy, and rebuild after natural disasters when the energy base to do so is in decline? If every scientific measurement, from ecological footprinting to biodiversity loss, to peak oil, and to carbon dioxide concentrations in the atmosphere, shows that humans have overshot the planet’s carrying capacity, then how can we grow our way into sustainability? We can’t, but how do we deal with the consequences of a nongrowing economy, which have historically manifested themselves as periodic depressions? We will return to these questions in the final section of our book.
Questions
- 1.
What was the “Treaty of Detroit?” How did it impact postwar labor relations in the United States?
- 2.
What were the four “pillars of postwar prosperity?” Explain how each helped set the stage for the long economic expansion of the 1950s and 1960s.
- 3.
What was the New Deal? What problems did it try to address, and what were its major legislative accomplishments? How successful was the New Deal in restoring American prosperity?
- 4.
What was the role of the Second World War in transforming the US economy?
- 5.
How did the world oil industry, and the US role in it, change in the years after the Second World War?
- 6.
Why could the period following the end of the Second World War be characterized as the era of economic growth?
- 7.
Why was the “New Economics” of the 1960s successful in stimulating economic growth?
- 8.
What is stagflation? What was the role of peak oil in bringing about stagflation in the United States?
- 9.
What other factors led to the erosion of the pillars of postwar prosperity?
- 10.
Why was the “New Economics” unsuccessful in eliminating stagflation?
- 11.
What are the major tenets of the conservative growth agenda, also known as neoliberalism?
- 12.
To what degree was the neoliberal program of the Reagan era successful? What were the economic and social costs of this success?
- 13.
How did the Clinton Administration carry on the neoliberal agenda? How did low oil prices during the 1990s affect US economic performance?
- 14.
How much did debt expand in the first decade of the twenty-first century? What were the economic outcomes?
- 15.
How might biophysical limits affect economic performance as we enter the second half of the age of oil?
In the 1950s and 1960s, economic growth was driven by cheap oil. As oil ceased to be cheap, something else had to drive economic growth: cheap money and the expansion of debt.