Chapter 16
The Bubble Economy

The nineteenth-century pattern of boom and slump, culminating in the World Depression of 1931, promises not to repeat itself since most governments have learnt the importance of preventing the collapse of their financial systems.

—James Foreman-Peck, A History of the World Economy (1983)1

In the early decades of the computer era following World War II, visions of the impact of information technology on American society in what was quaintly called “the year 2000” ranged from the utopian to the dystopian. Optimists envisioned an egalitarian future in which a universal middle class was freed from onerous labor by robots and computers. Pessimists worried about technological unemployment or the regimentation of society under the surveillance of an omniscient Central Computer.

Nobody in the 1950s or 1960s could have guessed that average Americans in 2000 would be working longer hours or that their incomes, in real, inflation-adjusted terms, would not have risen in a generation, while a few rich Americans would have collected most of the gains from thirty years of economic growth. Americans during the glorious thirty years of capitalism after World War II would have reacted with shocked disbelief if they had been told that leading American companies would shut down their factories in the United States in order to exploit poor, unfree labor in China, an authoritarian state whose economy combined many of the most oppressive features of communism and capitalism. And they would have concluded that the visitor from the future who told them that the big winners in the computer age would be bankers—bankers?—was a complete lunatic.

THE INFRASTRUCTURE OF GLOBALIZATION

In earlier eras, transportation technologies such as canals, railroads, and interstate highways and communication technologies such as telegraphy and telephony had enlarged markets and transformed business models. The economic globalization of the late twentieth and early twenty-first centuries similarly rested on technologies that included jets, container ships, computers, and satellites.

The jet made air travel affordable to ordinary people around the world. It also revolutionized business. Diminishing air-flight times permitted increasingly centralized management of multinational corporations by allowing managers from the home country to visit subsidiaries and allies. In the 1920s, Ford’s British division was a largely independent company. By the 1960s, thanks to jet travel and improved communications, it was completely controlled by the headquarters in Detroit.2

In addition to making centralized global corporations possible, jets transformed global supply chains. Wide-bodied jets were used by the commercial cargo fleets of FedEx and UPS. In ton-kilometers, global air cargo rose from 750 million in 1950 to 140 billion in 2005—a nearly 200-fold expansion.3

Along with the jet, the most important part of the new global infrastructure of commerce that evolved during the third industrial era was the container ship. The age of container shipping began on April 26, 1956, when Malcolm Purcell McLean, owner of a North Carolina trucking company, sent a World War II T-2 tanker named Ideal X with fifty-eight large containers on deck from Port Newark, New Jersey, to Houston, Texas. The scale grew from the first specialized container ship, built in 1960 with a capacity of 610 TEU (twenty-foot equivalent units), to the Emma Maersk, which in 2006 had a capacity of 11,000 TEU.4

Modern freight shipping is dominated by two kinds of ships—tankers and dry-bulk carriers, on the one hand, and container ships on the other. In 2005, there were forty-eight hundred ships in the global tanker fleet, nearly half of them carrying crude oil, while sixty-five hundred dry-bulk carriers hauled other cargoes.5 The leading dry-bulk commodities were coal, iron ore, and grain.6

Container-ship technology transformed ports as well. The off-loading of ships, once a prolonged, laborious process, became swift and mechanized. Cranes lifted cargo containers directly from ship to dock, truck, or train. Modern cargo ships are off-loaded twenty times faster than their predecessors were in 1950.7

WALMART AND THE GLOBALIZATION OF RETAIL

In the nineteenth and early twentieth centuries, the linking of local markets into a single national market first by railroads and then by highways permitted the emergence of national distributors like Sears, Roebuck and A&P. In the same way, the development of a global commercial infrastructure based on container ships and cargo jets, along with computerized business management, allowed national retailers to become global giants. Walmart was the biggest. In 2005, Walmart was not only the world’s largest retailer but also the world’s largest profit-making corporation.

Walmart was founded in Bentonville, Arkansas, by Sam and Bud Walton. They took advantage of road construction in the 1950s to replace small crossroads stores with Walmarts at highway intersections, first in the rural South and then throughout the nation.

Walmart embodied the reactionary southern version of American capitalism that survived the New Deal and the civil rights revolution below the Mason-Dixon Line. Fordism was the system in which well-paid production workers provided a mass market for the products that they made. Walmart represented anti-Fordism. Its low wages and lack of benefits for most workers resulted in a workforce dominated by teenagers, retirees, and female workers. In the mid-twentieth century, factory supervisors at GM earned five times as much as the average production employee. Half a century later, Walmart district store managers earned ten times as much. In 1950, GM president Charles E. Wilson earned 140 times as much as each assembly worker. H. Lee Scott, Walmart CEO, in 2003 earned 1,500 times as much as a full-time Walmart employee.8 The heirs of the founders of Walmart together were as rich as the family of Bill Gates.

Another part of Walmart’s anti-Fordist version of American capitalism was the fact that the goods it sold were made elsewhere, chiefly in China. In 2006, Walmart sourced 80 percent of worldwide sales from China.9 Many goods came from Shenzhen, the center of Chinese manufacturing for export. In Fordist America, high wages permitted American workers to buy goods produced by other well-paid American workers. In post-Fordist America, low prices for Chinese imports permitted low-wage American service workers to buy goods produced by poorly paid Chinese workers.

FROM INTERNATIONAL TRADE TO TRANSNATIONAL PRODUCTION

The most important economic effect of the third industrial revolution on the world economy was the replacement of traditional global trade by global production.10 Information technology, satellite technology, and efficient, inexpensive container-freight transport allowed the establishment of corporations and industrial production networks on a regional or global scale. Between 1974 and 2000, world trade grew faster, at a rate of 5 percent a year, than overall world GDP, which grew at an annual rate of only 2.9 percent.11 By the early years of the twenty-first century, between a third and a half of what was labeled as global “trade” was intrafirm trade—that is, the transfer of components within a single multinational enterprise located in several countries.

Between the end of the Cold War and the crash of 2008, globalization resulted in the organization of one global industry after another as an oligopoly, with most of the transnational enterprises headquartered in the United States, Europe, or Japan. A similar pattern of consolidation was evident among both final-assembly, or “system-integrator,” firms and their suppliers.

Two companies, US-based Boeing and Europe’s Airbus, had 100 percent of the global market share in large jet airliners. Among their suppliers, the global market for jet engines was divided among three firms: GE, Pratt and Whitney, and Rolls-Royce. Microsoft enjoyed 90 percent of the global market share for PC operating systems. Four firms divided 55 percent of the PC market among themselves, while three companies shared 65 percent of the market for mobile phone handsets. Three firms dominated the world market in agricultural equipment (69 percent) and ten companies dominated the global pharmaceutical market (69 percent).12 Ninety-five percent of microprocessors (chips) were made by four companies—Intel, Advanced Micro Devices, NEC, and Motorola. Four automobile companies—GM, Ford, Toyota-Daihatsu, and DaimlerChrysler—manufactured 50 percent of all cars, while three firms—Bridgestone, Goodyear, and Michelin—made 60 percent of the tires. Owens-Illinois and Saint-Gobain made two-thirds of all the glass bottles in the world.13 Concentration in global finance was accelerated by US deregulation, which allowed the emergence of a small number of US-based megabanks, some of which grew even more during the Great Recession when, with the support of the US government, they absorbed failing banks, as Bank of America took over Merrill Lynch and JPMorgan Chase acquired Washington Mutual (WaMu).

In addition to being dominated by oligopolies, the emerging world economy was highly regionalized and still connected to the nation-state. The hundred largest multinationals in 2008 had 57 percent of their total assets and 58 percent of their total employment abroad, and foreign sales made up 61 percent of their total sales.14 As this demonstrates, the typical multinational still had a distinct national identity, with around half of its assets, employment, and sales within its home market. Few multinational corporations did an overwhelming majority of their business outside their home countries. Those that did, such as Nestle and Ikea, tended to be based in small countries, so their markets were largely foreign. More typical were the large automobile companies, each of which assembled and sold a majority of its products in its home region, with a minority of its sales in other regions.

The domination of global commerce by corporations based in the United States, Japan, and Germany—the three most populous industrial democracies—showed the continuing importance of a large domestic market as a base for multinational sales and operations. Despite the celebration of global corporations by libertarians and their denunciation by leftists and populists, global companies turned out to have national identities after all during the crisis that began in 2008, when major banks and automobile companies turned to their home country governments for bailouts.

THE CURRENCY WARS

Far from being created abruptly by the end of Communism or the rise of the Internet, the global economy of the 1990s and 2000s represented an extension of the America-centered “free world” economy of the Cold War era to former Communist countries like China and formerly neutral third world countries like India that had practiced import substitution during the Cold War. Following World War II, the United States had offered its defeated great power rivals Germany (in the form of West Germany) and Japan a deal. If they accepted the status of semisovereign, largely demilitarized powers in world politics, the United States would protect their interests, including access to resources like Middle Eastern oil. In return for giving up the ambition to be independent military powers again, the Germans and the Japanese also would be informally guaranteed access to American consumer markets for their exports and to American investments and technology. West Germany and Japan accepted the offer and specialized as civilian trading states. They made cars, not wars.

Following the collapse of its bubble economy, the Japanese government asked the Clinton administration to allow Japan to try to export its way out of its problems with the help of a devalued yen. The administration agreed, and in 1996 the “Reverse Plaza Accord” reversed a decade of US weak-dollar policy, lowering the value of the yen against the dollar by 60 percent.

The strong-dollar policy benefited Wall Street. But it was a catastrophe for American exporters. And the Reverse Plaza Accord had even more disastrous effects on Japan’s rivals in East Asia. In the decade since the original Plaza Accord, many developing countries in Asia had pegged their currencies to the weak dollar in order to compete more effectively with the Japanese for export markets in the United States and elsewhere. The Reverse Plaza Accord abruptly made their exports more expensive, even as Japan’s became cheaper.

The US government and the IMF had pressured developing countries to open up their financial markets to foreign investors—yet another policy that benefited Wall Street. As growth slowed in many Asian countries, however, foreign investors became nervous and troubles in Thailand inspired an irrational rush of foreign capital out of many Asian countries in a short time. The Asian financial crisis that began in 1997 crippled many of the nations in the region, hitting Thailand, Indonesia, and South Korea particularly hard. China and Malaysia, which had resisted American pressure to liberalize their financial systems, proved to be less vulnerable and suffered less.

The Asian financial crisis was only the first of a series of unforeseen disasters that went off like a string of firecrackers following the Reverse Plaza Accord. The collapse of the preceding boom in Asia led to a decline in oil prices, which in turn triggered the Russian financial crisis of 1998. The Russian crisis led to the collapse of an American hedge fund, Long-Term Capital Management, foreshadowing greater crises to come.

BRETTON WOODS II

In the 1990s and 2000s, China grew rapidly to surpass Japan in the size of its economy. The Chinese economic model was a variant of the mercantilist system used by Japan and the Little Tigers, with distinctive Chinese characteristics.

Like Japan, China intervened in currency markets to undervalue its currency, the renminbi, in order to help Chinese exports and cripple American exports. If the exporters of a nation running trade surpluses are allowed to convert their dollar earnings into domestic currency, such as Chinese yuan or Japanese yen, then the surplus country’s money supply will increase and its currency will appreciate relative to other currencies, making its exports less competitive. To prevent its currency from appreciating, the surplus country must purchase and hold foreign exchange. Beginning in 2003, China, followed by other East Asian governments, engaged in currency intervention on a massive scale to keep the US dollar high, Chinese exports strong, and American exports weak. To facilitate its mercantilist industrial policy, China forbade its companies to sell or buy debt or stocks in transactions with foreigners without government approval. Using different methods, Japan pursued a similar currency-manipulation strategy, in effect imposing “currency tariffs” on American exports.

The strategy of relying on undervalued currencies to promote export-led industrialization required China, Japan, and other governments to accumulate either dollars or dollar-denominated assets like US federal debt and the debt of government-backed entities like Fannie Mae and Freddie Mac, thereby keeping the US dollar high and crippling rival American export industries. The massive purchases of US federal debt by the Chinese and Japanese central banks, in return, allowed the US Treasury, and, through it, American banks and other financial institutions, to keep interest rates low.

Exporters of manufactured goods like the East Asian countries and Germany were not the only countries to run chronic surpluses with the United States. In the first decade of the twenty-first century, American oil imports accounted for about half of the American trade deficit. Like the mercantilist manufacturing exporters, the petrostates helped to enable America’s debt-led consumption by recycling their earnings into US Treasury securities and dollar-based assets, permitting lower interest rates and more borrowing in the United States.

Some economists claimed that the system of currency manipulations and global imbalances was stable, sustainable, and beneficial for all sides; they dubbed it “Bretton Woods II” after the Bretton Woods system of exchange rates that had existed from the late 1940s until the 1970s.15 It should have been clear that Bretton Woods II was a Ponzi scheme, in which excessive borrowing by American consumers permitted overinvestment in manufacturing and infrastructure by East Asian mercantilist economies, doomed to end when American consumers hesitated to borrow and spend.

WORKSHOP OF THE WORLD

By the early twenty-first century, China was the workshop of the world economy. In 2005, the world’s leading container ports of origin were Singapore, Hong Kong, Shanghai, and Shenzhen. The leading receiving ports were Rotterdam, Los Angeles, Long Beach, and Hamburg.16

There was far more to China’s successful mercantilist program of crash industrialization than currency manipulation. In addition to manipulating their currencies to help their export industries, East Asian mercantilist regimes steered credit toward targeted industries on the basis of national goals rather than market logic. The East Asian mercantilist model is usually described as export-oriented growth; it can also be described as “investment-driven growth.”17

Following the death of Mao Zedong, China’s government abandoned Communist economic policy, even though the one-party Communist dictatorship remained. In the 1980s, Deng Xiaoping’s reforms helped township and village enterprises (TVEs), to the benefit of the entire country and rural China in particular. But in the 1990s, the emphasis shifted toward a version of the export-led development model of Japan, South Korea, Taiwan, and Singapore, focused on exports to the US consumer market. Export-processing zones in coastal areas used young, low-wage workers from the rural hinterland to produce goods for multinational corporations that would be sold in foreign markets—primarily the American market.

Following the 1997 Asian financial crisis, the older hub-and-spoke system of Pacific trade, in which Japan and the Little Tigers individually targeted the US consumer economy, was replaced by a new, China-centered system. In the new system, Japan, South Korea, Taiwan, Hong Kong, Singapore, and other regional countries exported components to Chinese production facilities, which then incorporated them into goods for export to the United States and elsewhere. While high-value-added components from industrial countries flowed into China’s factories, commodity-exporting countries from Australia, Chile, and Brazil to African countries supplied China with raw materials and food. Chinese workers mostly assembled components imported from Japan, Singapore, Taiwan, South Korea, and other, more advanced countries. In 2003, China, including Hong Kong, had trade deficits with Japan, South Korea, and Taiwan, even as it enjoyed a growing surplus with the United States.

A major feature of the early-twenty-first-century Chinese model was state capitalism, or state ownership of major banks and businesses. All major Chinese financial institutions were state controlled, so that lending tended to reflect government priorities, not market logic. By the second decade of the twenty-first century, three of the world’s four largest banks by market value were Chinese.18 In 2010, forty-one Chinese state-owned enterprises (SOEs) were among the world’s five hundred largest companies; three were among the top one hundred.19 In 2008 one SOE, China Mobile, the state-controlled wireless phone company, was the world’s fourth largest company by market value.20

As China used currency manipulation, credit steering, subsidies, wage suppression, and other mercantilist techniques at the expense of its trading partners and rivals, its economy grew at the rate of 10 percent a year from 2003 to 2005, 11.6 percent in 2006, and 13 percent in 2007. China’s current-account surplus swelled from 3.6 percent of GDP in 2004 to 7.2 percent in 2005 to an unsustainable 11 percent in 2007.21 In 2010, China surpassed Japan as the world’s second largest economy.

AMERICA’S CHRONIC TRADE DEFICITS

In 1971, America was shocked by its first trade deficit of the twentieth century, of $1.5 billion. From 1989 to 1997, the US current-account deficit hovered below 2 percent of GDP. Following the East Asian financial crisis of 1997, the deficit rose in 1998 to 2.4 percent of GDP. In 2006, just before the crash that began the Great Recession, the US current-account deficit had grown to 6 percent of US GDP and nearly 2 percent of global GDP.22 The US trade deficit between 1976 and 2010 added up to more than $7 trillion; of that, more than 70 percent was accumulated after 2000.23

Between 1998 and 2008, the US merchandise trade deficit with China alone rose 375 percent. The top US sales to East Asia were semiconductors, aircraft parts, waste and scrap metal, basic organic chemicals, and soybeans.24 Other than “waste and scrap metal” and “organic chemicals,” the major American exports to China were from industries that relied on US government support—the Defense Department, in the case of aircraft parts, and federal farm subsidies, in the case of soybeans.

Between the early 1980s and 2006, the US economy grew by a factor of 12 and the current-account deficit grew by a factor of 158.25 Because of the artificial strength of the dollar that resulted from foreign-currency manipulation, as well as other mercantilist techniques used by foreign governments to promote their industries, US exports lost one-fifth of global market share in the first decade of the twenty-first century.26 The deindustrialization of the United States accelerated. In 1980, manufacturing accounted for 21 percent of US GDP and finance for only 14 percent. By 2002, the proportions were reversed, with 14 percent of GDP accounted for by manufacturing and 21 percent devoted to finance.27 While manufacturing as a share of employment declined in all Organization for Economic Cooperation and Development (OECD) countries, from around a quarter in 1970 to an average of 15 percent in 2008, it declined least in countries with export trade surpluses like Germany (19 percent) and most in countries running large merchandise trade deficits like the United States (9.5 percent in 2008).28 Meanwhile, the share of total corporate profits accounted for by the financial sector exploded from 18 percent in 1980–1990 to 36 percent in 2001–2006.29

RISE OF THE MEGABANKS

By the early twenty-first century, three banks—Citibank, JPMorgan Chase, and Bank of America—dominated the American commercial-banking sector. Citibank grew by turning itself into a conglomerate with commercial banking, investment banking, and other services. JPMorgan Chase was the product of mergers by Chase Manhattan, Manufacturers Hanover, JP Morgan, Chemical Bank, First Chicago, and National Bank of Detroit. North Carolina National Bank, based in Charlotte, North Carolina, took the name of A. P. Giannini’s old California-based Bank of America in the course of an acquisition spree beginning in the 1980s. By 2007, shortly before the Great Recession began, Bank of America’s assets were the equivalent of 16.4 percent of US GDP, JPMorgan Chase’s were 14.7 percent, and Citigroup’s were 12.9 percent. To put this into perspective, in 1983 Citibank, then America’s largest, had controlled assets amounting to only 3.2 percent of US GDP.30 The Big Three grew even more as a result of the global financial crisis, as Bank of America absorbed the investment bank Merrill Lynch and America’s largest mortgage lender, Countrywide, while JPMorgan Chase swallowed up America’s largest savings and loan, Washington Mutual, and the investment bank Bear Stearns.31 From 1980 to 2000, the financial assets of commercial banks and securities firms swelled from 55 percent to 95 percent of GDP.32

In the decades of the bubble economy, the culture of investment banking changed. Brokers had long worked on fixed commissions. When the inflation of the 1970s produced a bear market and low securities yields, pension funds and other institutional investors lobbied to end fixed commissions on trading so that they could negotiate discounts with brokers. Volume discounts were authorized in 1968, and commissions were fully deregulated in 1975. The abolition of fixed commissions shifted the business of Wall Street from traditional investment banking to securities trading, with traders motivated to rake in commissions by frequent “churning” of investment portfolios. As a percentage of industry revenues, brokerage commissions declined from 53.8 percent in 1972 to 17.3 percent in 1991.33

Before 1970, the New York Stock Exchange (NYSE) required that member firms be partnerships, on the grounds that partners would take fewer risks than managers of corporations enjoying the shield of limited liability. Over the objections of the NYSE, a brokerage firm called Donaldson, Lufkin & Jenrette became a publicly traded corporation in 1970 and by 1999 all the major investment banks had become public corporations. Within investment banks, there was a shift away from members of the social elite who could fraternize with old-school CEOs toward “quants” who could maximize revenue for the investment bank with ever more complex computer programs.

In a 2009 interview with the London Times, Goldman Sachs CEO Lloyd Blankfein, who was awarded a $9 million bonus that year, explained that Goldman “does God’s work” by helping “companies to grow by helping them to raise capital.”34 But in 2007, the year before the crash, the activity of helping companies by underwriting stock for them accounted for only 3 percent of Goldman’s revenues. Sixty-eight percent of its revenues came from trading and principal investments.35

The New Deal financial order had been almost completely dismantled by the time Glass-Steagall was repealed in 1999. No adequate new system of financial regulations replaced those that had been torn down. Taking advantage of “regulatory arbitrage,” the increasingly large, rich, powerful firms engaged in activities that were poorly supervised by a miscellany of agencies, from the Securities and Exchange Commission (SEC) to the Commodity Futures Trading Commission (CFTC), not to mention state agencies. Many of the regulators looked forward to jobs in the firms that they regulated.

Free-market ideology led government officials to resist new regulations and enforce existing regulations lightly or not at all in what became known as “the shadow banking system.” For example, Fed chairman Alan Greenspan rejected calls for regulating over-the-counter (OTC) derivatives: “In the case of the institutional off-exchange derivatives market, it seems abundantly clear that private market regulation is quite effectively and efficiently achieving what have been identified as the public policy objectives of government regulation.”36

FROM MANAGERIAL CAPITALISM TO FINANCIAL-MARKET CAPITALISM

The rise of a few universal banks from the wreckage of the New Deal financial system was only part of a larger story: the replacement of managerial capitalism by a new system that can be described as “financial-market capitalism.” Even as traditional banking activities shrank to a portion of the activities of financial markets—some regulated, some part of the unregulated shadow banking system—pressures emanating from Wall Street were reshaping the American corporation.

Wall Street investment bankers and free-market theorists shared a hostility to midcentury America’s corporate oligopolies and their bureaucratic managers. Some of the debt incurred by conglomerates in takeovers, or in fending off hostile takeovers, was rated below investment grade as “junk bonds.” Michael Milken, then working for Drexel Burnham, began selling these high-risk but high-yield junk bonds to institutional investors. Milken and other corporate raiders borrowed money to take over conglomerates in leveraged buyouts and then sold off various parts, repaying lenders while pocketing profits.

Sometimes the takeover artists made companies more efficient, refocusing companies on single lines of business. In many cases, however, the raiders simply broke up corporations in order to sell their assets, raising short-term shareholder returns at the expense of America’s long-term productivity base. Bell Labs, once the research crown jewel of AT&T, was spun off into Lucent Technologies, which was eventually absorbed by a French company, Alcatel. Many once-great corporations were reduced to brands, euphemistically known as “original equipment manufacturers” (OEMs), that were attached to products made largely by other firms (outsourcing) or foreign manufacturers (offshoring). The chairman of Ford, William Clay Ford Jr., remarked: “It’s easy to build a car. It’s harder to build a brand.”37

The dismantling of the vertically integrated, oligopolistic industrial behemoths that had formed in the merger waves of the 1890s and early 1900s and the 1920s and had survived the decades of the New Deal era until the 1970s was evident from the statistics. The same companies were in the list of the four top American corporate employers in both 1960 and 1980: GM, AT&T, Ford, and GE (1960) and AT&T, GM, Ford, and GE (1980). In 2007, the top four employers were Walmart, UPS, McDonald’s, and IBM. AT&T had been broken up as a result of a federal antitrust suit in the 1980s, and GM and Ford had dropped out of the list not only of the top four employers but also of the top ten. In 2007, only GE remained among the top ten employers, below the four listed above and Citigroup, Target, and Sears Holdings.38

The growth of financial-market control over corporations probably would not have occurred but for the growth of institutional investors seeking high yields. As late as 1952, households—mostly rich families—held 90.4 percent of all corporate stock. By 1994, households owned only 48 percent. Meanwhile, in the same period, the pension fund share of corporate stock ownership grew to 25 percent and that of mutual funds increased to 10 percent.39 The share of corporate equities owned by public and private pension funds grew from 6 percent in 1965 to 23 percent in 2007, while the share owned by mutual funds increased from 5 percent in 1985 to 26 percent in 2007.40 The shift from defined-benefit plans to defined-contribution plans like 401(k)s and individual retirement accounts (IRAs) further swelled the assets under institutional management. The share of IRA savings invested in mutual funds expanded from 17 percent in 1985 to 49 percent in 1999.41 By 2008, the brokerage firm Fidelity was the largest shareholder in approximately one in ten publicly traded corporations listed on Nasdaq and the NYSE, exercising potential power far beyond that of J. P. Morgan in the days of interlocking directorates.42

This phenomenon in turn was driven by a peculiarity of America’s midcentury social contract: the role of employer-provided defined-benefit pension plans and, later, tax-favored defined-contribution plans like IRAs and 401(k)s. Social Security’s low rate of preretirement income replacement made a growing number of Americans desperate to see high rates of return from the retirement savings they or their employers had entrusted to mutual funds. By 2005, institutional investors like mutual funds accounted for three-fourths of the ownership of the typical large corporation.43

The rapid expansion of stock ownership on the part of mutual funds increased the pressure on corporations to maximize their short-term earnings, at the expense if necessary of long-term investment and growth. In 2005, 80 percent of more than four hundred chief financial officers responded to a survey by saying that “they would decrease discretionary spending on such areas as research and development, advertising, maintenance, and hiring in order to meet short-term earnings targets.”44 Short-termism was reflected in the fact that institutional investors and others moved away from long-term investments in companies. The annualized turnover of all stocks on the NYSE rose from 36 percent in 1980 to 118 percent in 2006.45

In the new age of financial-market capitalism, as in the era of finance capitalism in the early 1900s, American industry was subordinate to finance. But there was a profound difference. Finance capitalists like J. P. Morgan were long-term investors who sought to profit over many years from the industrial corporations and utilities that they owned. The new financial-market capitalism, by contrast, was marked by short-term ownership of stocks by the agents of millions of anonymous investors, most of whom had no idea what company stocks were being bought and sold on their behalf.

While some corporate managers lobbied state legislatures to create laws protecting their companies against takeovers, most CEOs in the United States were reconciled to financial-market capitalism by means of stock options. In 1993, Congress changed the tax code to encourage corporations to reward their executives with stock options. By the beginning of the twenty-first century, more than half of the compensation of the average Fortune 500 executive took the form of stock options.46

Federal Reserve chairman Alan Greenspan observed that large stock options “perversely created incentives to artificially inflate reported earnings in order to keep stock prices high and rising.”47 The goal was no longer to build a productive company that would make useful products and last for generations, but to maximize short-term profits in time for the next quarterly earnings report. “Wall Street can wipe you out,” the CEO of Sara Lee observed. “They are the rule-setters. They do have their fads, but to a large extent there is an evolution in how they judge companies, and they have decided to give premiums to companies that harbor the most profits for the least assets. I can’t argue with that.”48

MASTERS OF THE UNIVERSE

Stock options created fortunes for many CEOs—particularly when their companies drove up the price by buying back stock. In the 1990s, stock buybacks became the major method of distributing corporate revenues to shareholders, even as they increased the wealth of corporate executives who were compensated in stock options. Even so, rising CEO compensation could not keep up with the fortunes being made in the swelling American financial sector. From 1948 to 1980, pay in finance was comparable to that in other lines of business; from 1980 onward, financial-industry compensation on average was twice the pay in other American industries.49 Beginning in the 1980s, compensation in banking rose much faster than compensation in the rest of the private sector. By 2007, the average financial-sector employee earned twice as much money as the average worker in the rest of the private sector.50 In 2004, the top twenty-five hedge-fund managers made more money than all the CEOs in the corporations of the S&P 500 put together.51

Prestige followed wealth and power. In an article in Harper’s in 1949, the management theorist Peter Drucker observed: “Where only twenty years ago the bright graduate of the Harvard Business School aimed at a job with a New York Stock Exchange house, he now seeks employment with a steel, oil, or automobile company.”52 Richard Fisher, the chairman of Morgan Stanley from 1991 to 1997, recollected that after he graduated in the 1960s from Harvard Business School “investment banking was about the worst-paying job available to us. I started at Morgan Stanley at $5,800 a year. It was the lowest offer I had. . . . I’m sure my classmates who went to Procter & Gamble started at $9,000 a year.”53 By the end of the twentieth century, business students who preferred Procter & Gamble to Morgan Stanley would have been ridiculed. Between 1950 and 1980, graduates of Harvard who went into finance were paid no more than those who went into law, engineering, and medicine. By the 2000s, those who worked in finance made nearly twice as much as their colleagues in other professions.54

Between 1980 and 2007, the financial sector’s share of US corporate profits grew from around 10 percent to a peak of around 40 percent. In Britain, where a similar process of financialization produced cancerous growth of the City of London’s financial industry, finance’s share of GDP grew by more than 10 points from 1990 to 2006. In the same period, finance expanded its share of the economies of Germany and France by no more than 6 points.55

FROM THE GREAT COMPRESSION TO THE GREAT REGRESSION

Between the 1970s and the early twenty-first century, what some scholars have called “the Great Compression” of incomes in the United States was reversed. Between 1913 and the beginning of the New Deal in 1933, the share of income in the US going to the top 10 percent was between 40 and 45 percent, only to plunge and level off at between 31 and 32 percent from World War II until the 1970s. In the Reagan years, inequality began to grow, and it neared its pre-1929 level before the crash of 2008.

In fact, not one but two forms of inequality revived—earnings inequality and asset inequality. As the economist James K. Galbraith demonstrated, wealth inequality was accounted for largely by the disproportionate gains to investors and the stock options received by Silicon Valley entrepreneurs during the tech bubble of the 1990s.56

Attributing rising inequality within the United States to allegedly unstoppable forces of globalization or technology appealed to the American elite by implying that their disproportionate gains had nothing to do with the power of some economic classes relative to others. But the most plausible explanations for late-twentieth-century and early-twenty-first-century inequality in the United States attribute it to changes in the bargaining power of capital, labor, and professionals, not to long-run forces beyond human control.

The ability of the working-class majority in the United States to bargain for higher wages was undermined after the 1960s by several purely domestic phenomena: the declining real value of the minimum wage as a result of inflation, low-end labor markets flooded with unskilled, low-wage immigrants, and the decline of labor unions.

REVOKING THE SOCIAL CONTRACT

While wages stagnated in the neoliberal era, economic security declined for many American workers. The welfare-state components of America’s New Deal social contract such as Social Security and Medicare remained robust, despite long-run challenges to their funding. But the welfare-capitalist elements, such as company pensions and employer-provided health care, crumbled rapidly in the late twentieth century. Bankruptcies in the airline, automobile, and other industries forced the Pension Benefit Guaranty Corporation to take over many corporate pensions.57 Some companies, including Ford and GM, replaced health coverage of retired workers with health retirement accounts.58 Between 1981 and 2003, the share of employees with corporate pension plans who had defined-benefit (DB) plans declined from 81 percent to 38 percent.59 Under pressure to cut costs, employers increasingly replaced DB pension plans with defined-contribution (DC) plans like 401(k)s for the minority of Americans who had any pensions at all.

Conservatives and libertarians sought to replace the post–New Deal hybrid of welfare capitalism and welfare statism with tax-favored private accounts. Another welfare-market technique was using tax credits rather than direct public spending to achieve social goals. In the last decades of the twentieth century, the tax code was riddled with new tax subsidies for individuals—the child tax credit and the child-care tax credit among them—joining older ones like the home-mortgage-interest deduction. While the child tax credit was refundable—that is, paid to individuals who made too little income to pay federal income taxes—most federal income-tax credits were not available to the bottom half of American wage earners, who had been effectively removed from the federal income tax rolls by 2000. This meant that what the social scientist Christopher Howard called “the hidden welfare state” consisted largely of means-tested subsidies of a novel kind—subsidies available only to the affluent, not the poor.60 The oligarchic nature of the evolving American political system was symbolized by the lack of protest that greeted the “child-care tax credit,” which in essence was a subsidy by Americans who could not afford nannies or other private child care to the affluent minority who could.

AN AMERICAN PLUTONOMY?

In the late nineteenth century, the American elite was identified with the Four Hundred—the number of guests who could be accommodated in Mrs. Astor’s ballroom in New York. From 1992 to 2007, the ratio between the income of the top four hundred households and that of the median household grew from 1,124 to 1 to 6,900 to 1.61 For the top four hundred households, pretax income, adjusted for inflation, ballooned by 409 percent between 1992 and 2007, while for the median American family of four it increased by only 13.2 percent.62 The top four hundred households in the United States enjoyed a decline in their effective tax rates (the percentage of income paid in taxes) from 26.4 percent in 1992 to 16.6 percent in 2007.63 Hedge-fund managers benefited from a tax provision that allowed the income they were paid for managing their hedge funds to be taxed at a low capital gains rate rather than the highest income tax rate.

While conservatives and libertarians promoted visions of “the ownership society” in which every American would be an investor, capital income grew much more concentrated. In 1979, the top 10 percent of Americans by income received 67 percent of the income from capital, while 33 percent went to the bottom 90 percent. In 2006, the share of capital income that went to the top 10 percent of Americans had increased to 81.3 percent while that of the bottom 90 percent had declined to 18.7 percent.64 The percentage of the increase in disposable income that went to the top 1 percent of US households fell from 22–23 percent in 1929 to a low of 8–9 percent in the 1970s, before rising to a remarkable 73 percent during the two terms of George W. Bush.65 Because capital gains were taxed at lower rates than income from labor, Warren Buffett observed that he and other billionaires were taxed at lower rates than their secretaries.

In 2005, three Citigroup analysts—Ajay Kapur, Niall MacLeod, and Narendra Singh—described the United States as a “plutonomy.” They explained, “Plutonomies have occurred before in sixteenth century Spain, in seventeenth century Holland, the Gilded Age and the Roaring Twenties in the U.S. What are the common drivers of Plutonomy? Disruptive technology-driven productivity gains, creative financial innovation, capitalist-friendly cooperative governments, an international dimension of immigrants and overseas conquests invigorating wealth creation, the rule of law, and patenting inventions. Often these wealth waves involve great complexity, exploited best by the rich and educated of the time.” In a plutonomy, the economy is driven by the consumption of the classes, not the masses: “In a plutonomy there is no such animal as ‘the U.S. consumer’ or ‘the UK consumer,’ or indeed the ‘Russian consumer.’ There are rich consumers, few in number, but disproportionate in the gigantic slice of income and consumption they take. There are the rest, the ‘non-rich,’ the multitudinous many, but only accounting for surprisingly small bites of the national pie.” The Citigroup analysts speculated that a plutonomic world economy could be driven by the spending of the world’s rich minority, whose ranks are “swelling from globalized enclaves in the emerging world.”66

According to Moody’s, the top 10 percent of American earners accounted for 22 percent of all spending and the top 25 percent for 45 percent of all consumer spending. The bottom 50 percent of Americans accounted for only 29 percent of all American consumer spending. At the same time, however, the top 10 percent of earners received 50 percent of all income, while they accounted for only 22 percent of spending. Where did the rest of their money go?

Much of the money of the American rich went into speculation in the two waves of the bubble economy between the late 1990s and 2008. Had more of that money been in the hands of the bottom 50 percent, more of it would have been spent on consumer goods, including manufactured products, and far less would have gone to gambling on condos in Manhattan and Miami and trendy stocks.67 Just as a ship with a broad base is more stable than a top-heavy boat, so an economy in which well-paid workers create mass markets for goods and services is more stable than a top-heavy plutonomy.
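The Moody’s figures above imply the point directly. A minimal sketch of a “spending propensity” index, a group’s share of consumer spending divided by its share of income, makes the lopsidedness concrete; note that the bottom-50-percent income share of roughly 13 percent used here is an outside assumption for illustration, not a figure from the text:

```python
# Spending propensity index: a group's share of total consumer spending
# divided by its share of total income. A value below 1.0 means the group
# spends less of each income dollar than the economy-wide average.
# The bottom-50% income share (~13%) is an assumed figure, not from the text.

groups = {
    "top 10%": {"income_share": 0.50, "spending_share": 0.22},
    "bottom 50%": {"income_share": 0.13, "spending_share": 0.29},
}

for name, g in groups.items():
    index = g["spending_share"] / g["income_share"]
    print(f"{name}: spending share / income share = {index:.2f}")
```

Under these assumptions the top 10 percent spend well under half as much per income dollar as the average household, while the bottom 50 percent spend more than twice as much, which is why shifting income toward the top drains mass consumer demand.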

SECURITIZATION: THE WEAK LINK IN THE GLOBAL ECONOMY

Securitization proved to be the weak link in the global economy. This innovative financial practice originated in the 1970s, when the Government National Mortgage Association (GNMA, or Ginnie Mae) began bundling mortgages together and selling them to investors. The larger US quasi-public mortgage agencies, Fannie Mae and Freddie Mac, quickly adopted the practice. With the help of increasingly sophisticated computer programs, the financial industry devised ever more ingenious methods for packaging and selling debt-based securities of various kinds.

Securitization revolutionized the way that banks did business. In the past, the volume of a bank’s lending had been limited by the loans it already carried on its books. By permitting banks to sell those loans and clear their books, securitization allowed them to engage in a much greater volume of lending. They could not have done so had there not been a market for structured debt products. But China and other countries had an insatiable appetite for dollar-denominated debt, in which they could park the dollars earned from their trade surpluses. Those dollars, if released into currency markets, would have caused their currencies to appreciate at the expense of the export industries on which their growth strategies depended.

In these conditions, the business model of American banks and other lenders changed for the worse. The old practice of buy-and-hold was replaced by originate-and-distribute. Because banks and other lenders made a profit on every mortgage, they had less of an incentive to monitor the creditworthiness of borrowers. If the borrower could not repay the loan in the future, that would be somebody else’s problem. In the meantime, they would have collected their fees.

In their search for profits, banks and other lenders offered loans that required low down payments or no down payments at all. People described as NINJAs (no income, no job, no assets) found it easier to take out mortgages. Paperwork suffered, as lenders hired employees known as “robosigners” to approve loan applications as quickly as possible. Confused paper trails would cause problems later, when banks sought to foreclose on properties.

In theory, bundling a number of risky mortgages or other loans into one structured security would lower the risk of default, because the borrowers were unlikely all to default at the same time. Such was the reasoning of the ratings agencies, which gave high ratings to increasingly complex structured-debt products that central banks and other financial institutions bought in the belief that they were safe assets.

The flaw in the theory was revealed beginning in 2007, when, as a result of recession in the United States, many American homeowners began to default. In the old days, the losses would have been borne by the lenders who held the mortgages. But now that the mortgages were packaged and distributed widely, nobody could be certain which mortgages were valuable and which were toxic. Paralyzed by uncertainty, the global financial system suffered the equivalent of cardiac arrest.
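The flaw in the pooling logic can be illustrated with a toy simulation. This is not a model of any actual security; the loan count, default probability, and “downturn” multiplier are invented for illustration. When defaults are truly independent, heavy losses on a pool are rare; when a common shock, such as a nationwide house-price decline, raises every loan’s default probability at once, heavy losses become common:

```python
import random

random.seed(0)  # reproducible illustration

def heavy_loss_frequency(n_loans=100, p_default=0.05,
                         p_downturn=0.0, downturn_multiplier=5,
                         trials=5000):
    """Fraction of simulated years in which more than 10% of a loan pool defaults.

    p_downturn is the chance of a common shock (a crude stand-in for a broad
    house-price decline) that multiplies every loan's default probability
    simultaneously. All parameter values here are illustrative only.
    """
    heavy = 0
    for _ in range(trials):
        p = p_default * (downturn_multiplier if random.random() < p_downturn else 1)
        defaults = sum(random.random() < p for _ in range(n_loans))
        if defaults > 0.10 * n_loans:
            heavy += 1
    return heavy / trials

independent = heavy_loss_frequency(p_downturn=0.0)  # defaults truly independent
correlated = heavy_loss_frequency(p_downturn=0.2)   # a downturn hits all loans at once
print(f"heavy losses, independent defaults: {independent:.1%}")
print(f"heavy losses, correlated defaults:  {correlated:.1%}")
```

The diversification argument behind the high ratings held only under the independence assumption; once house prices fell everywhere at once, defaults were correlated and the “safe” senior tranches were exposed.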

THE BUBBLE BURSTS

At the beginning of the twenty-first century, global imbalances and skyrocketing inequality in the United States were strikingly similar to trends in the 1920s. Would they be followed by a global economic decline on the scale of the Great Depression? In 2003, the views of the mainstream neoclassical economics profession were summed up by the Nobel Prize–winning economist Robert Lucas, in his presidential address to the American Economic Association: “The problem of depression prevention has been solved.”

Faith in the post–New Deal American model of capitalism was shaken to its foundations when, in August and September 2008, the Federal Reserve and the Treasury mounted the greatest economic rescue effort in world history. Fannie Mae and Freddie Mac, the government-backed corporations that underwrote a majority of America’s home mortgages, were effectively nationalized. Most of the great investment banks on Wall Street, including Bear Stearns, Lehman Brothers, and Merrill Lynch, victims of bad gambles on home-mortgage debt, either vanished or were absorbed by or converted into large commercial banks. In a desperate effort to stop the contagion of bad debt and avert a credit freeze that could cause a new depression, the US government promised a bailout of the financial sector of more than a trillion dollars.

The crisis had begun the previous year, as home prices declined rapidly when the bubble collapsed. In 2007, the onset of recession led to fear that homeowners’ inability to make their payments would trigger defaults throughout the complex tranches of mortgage-backed securities. To forestall trouble, in September 2007 the Fed lowered the federal funds rate from 5.25 percent to 4.75 percent; by December it had lowered the rate to 4.25 percent. The Fed dropped the rate by a dramatic 0.75 percentage points on January 22, 2008, and by another 50 basis points a little more than a week later, for a total reduction of 2.25 percentage points in little more than four months.68

When the investment bank Bear Stearns, leveraged at 35 to 1 (thirty-five dollars of assets for every dollar of capital), found itself in crisis, the government provided emergency financing so that JPMorgan Chase could purchase Bear Stearns in March 2008. In June, Bank of America took over Countrywide, which had been responsible for a fifth of American mortgages. Concern about the value of the more than $5 trillion in mortgage-backed securities held by Fannie Mae and Freddie Mac led Treasury secretary Henry Paulson to ask Congress for authority to inject billions into the two government-sponsored enterprises (GSEs). Congress passed the American Housing Rescue and Foreclosure Prevention Act in July.
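The arithmetic of 35-to-1 leverage shows why Bear Stearns was so fragile. A stylized balance sheet (illustrative numbers, not the firm’s actual books) makes the point:

```python
# A stylized balance sheet at the 35-to-1 leverage cited above: every dollar
# of capital supports thirty-five dollars of assets, with the remainder
# financed by borrowing. Illustrative numbers only.

capital = 1.0
assets = 35.0 * capital
debt = assets - capital  # 34 dollars borrowed for every dollar of capital

# How far can asset values fall before all capital is wiped out?
breakeven_decline = capital / assets
print(f"A {breakeven_decline:.1%} fall in asset values erases the firm's capital")

# A 5 percent mark-down on assets leaves the firm insolvent:
remaining_capital = capital - 0.05 * assets
print(f"After a 5% asset decline, capital = {remaining_capital:+.2f}")
```

At that leverage a decline in asset values of less than 3 percent wipes out the firm’s entire capital, which is why a fall in the value of mortgage securities could destroy an investment bank within days.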

In September, Bank of America absorbed Merrill Lynch, while Treasury and the Fed unsuccessfully tried to find a buyer for Lehman Brothers. Lehman filed for bankruptcy on Monday, September 15, 2008, sending shock waves through the global financial system. On the evening of September 16, the Fed announced a bailout of the American International Group (AIG), fearing that its collapse so quickly after Lehman’s would be devastating. The chair of the House Financial Services Committee, Massachusetts Democrat Barney Frank, joked that September 15 should be called Free Market Day: “The national commitment to the free market lasted one day. It was Monday.”69

The financial system was now in a death spiral. On the evening of September 18, Paulson, Ben Bernanke, chairman of the Federal Reserve, and Christopher Cox, the chairman of the SEC, met with members of Congress, warning that the country faced what Bernanke called “Depression 2.0.” They asked for $700 billion for the Troubled Assets Relief Program (TARP). Their pleas were rejected on September 29, when the House voted down the Emergency Economic Stabilization Act. But the continuing economic collapse led to another vote and on October 3, 2008, a modified version of the act was passed by Congress and signed into law by President Bush. In December 2008, the federal government also allocated more than $17 billion to rescue GM and Chrysler.

To prevent what came to be called the Great Recession from turning into a second Great Depression, both monetary policy (interest rates) and fiscal policy (spending and taxation) were required. By December 16, 2008, the Fed had set the federal funds rate at 0 to 0.25 percent.

The United States was now in the “liquidity trap” described by Keynes, in which normal monetary policy cannot function and expansionary fiscal policy is needed. The economist Dean Baker has calculated that government spending of $1.5 trillion a year beginning in 2009 would have been required to offset the demand lost in the residential housing sector ($600 billion), lost consumption demand ($500 billion), lost demand in nonresidential construction ($250 billion), and lost demand caused by states and local governments that cut spending to balance their budgets ($150 billion). Instead, the stimulus in 2009 and 2010 amounted only to around $300 billion a year.70
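Baker’s demand-gap arithmetic can be checked directly; the component figures below are the ones cited in the paragraph above:

```python
# Dean Baker's demand-gap arithmetic as cited above, in billions of
# dollars per year. The component values come from the text.

lost_demand = {
    "residential housing": 600,
    "consumption": 500,
    "nonresidential construction": 250,
    "state and local budget cuts": 150,
}

required = sum(lost_demand.values())  # $1.5 trillion per year
actual = 300                          # approximate annual stimulus, 2009-2010

print(f"demand shortfall: ${required} billion per year")
print(f"stimulus covered about {actual / required:.0%} of the gap")
```

On these figures the stimulus replaced only about a fifth of the lost annual demand, which is the core of the argument that it was far too small.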

Christina Romer, one of President Barack Obama’s economic advisers, argued that to counteract the contraction of the economy the federal government should undertake a $1.2 trillion fiscal stimulus. Instead, Obama proposed a package half that size. On February 13, 2009, Congress passed the American Recovery and Reinvestment Act (ARRA), which provided only $787 billion, spread over several years through September 30, 2011.

In addition to being too small, the stimulus was limited in its effects by the economic contraction imposed by state governments whose constitutions required them to balance their budgets, even in a near-depression, by slashing expenditures, firing employees, and raising taxes. Two economists, Mark Zandi and Alan Blinder, concluded that the ARRA had reduced unemployment by only 1.5 percentage points and created only 2.7 million jobs.71 Meanwhile, the stingy and fragmentary nature of America’s social safety net of “automatic stabilizers” like unemployment insurance and Social Security meant that the United States suffered greater unemployment and economic contraction than European democracies with more generous systems of social insurance. Unemployment was far more severe for less educated workers. In 2011, unemployment among college-educated Americans was 4.3 percent, while it was 14.3 percent among workers lacking a high school diploma.72

Having passed an inadequate Keynesian stimulus, the Obama administration and the Democratic majority in Congress turned their attention away from the economic crisis to focus on measures of long-term reform: the Patient Protection and Affordable Care Act of 2010 that sought to provide universal health-insurance coverage, the Dodd-Frank Wall Street Reform and Consumer Protection Act passed in July 2010, and a controversial and ultimately failed attempt to create a cap-and-trade system to reduce greenhouse gas emissions that contribute to global warming. The diversion of attention and energy away from the immediate unemployment crisis that accompanied these other projects brought to mind the warning of Keynes to the Roosevelt administration in its first term that “even wise and necessary Reform may, in some respects, impede and complicate Recovery.”

In the November 2010 midterm elections, the Republicans regained the House. The activists of the conservative Tea Party movement were reminiscent of the anti–New Deal Liberty League in their denunciations of the supposed “fascism” and “socialism” of the Obama administration. As it had done since the Reagan years, the Republican Party asserted without evidence that further tax cuts for the rich and corporations were the solution to all problems. On the left, the Occupy Wall Street movement, spreading from downtown Manhattan to cities throughout the nation in the fall of 2011, gave voice to the frustration of many Americans with the financial industry.

Having originated in America, the crisis dragged the entire global economy down. The major industrial countries protected their own banks and industries and responded according to their national traditions and interests. China shoveled more state-controlled credit at its overbuilt export sector and infrastructure. “Euroland,” the European area that shared the euro as a common currency, was crippled by disputes between Germany and debtor nations like Greece.

THE PREDATORS’ BALL

Even before the Great Recession began in the crash of September 2008, the first decade of the twenty-first century was a Japanese-style “lost decade” in the United States. Compared to the 24 percent overall growth of the 1990s, the US economy grew by only 6 percent in the 2000s.73

What little growth there was went to a tiny plutocratic minority. During the Bush years, two-thirds of the income growth in the United States went to the top 1 percent of the US population.74 Over a longer period, 82 percent of all gains in US wealth between 1983 and 2009 went to the richest 5 percent of American households.75 By the early twenty-first century, the Gini coefficient, a measurement of economic inequality, showed that the United States was radically different from other developed nations and resembled other highly unequal nations including Rwanda, Ecuador, and the Philippines.

In 1985, junk-bond king Michael Milken threw a party he called “the Predators’ Ball.” By the second decade of the twenty-first century, it was clear that the party was over.