In the present crisis, government is not the solution to our problem; government is the problem.
—Ronald Reagan, 19811
On June 3, 2003, the Treasury Department’s James Gilleran brought a chainsaw to a photo op. While speaking to reporters, he promised to cut up piles of paper representing regulations of the financial sector. Joining him were representatives of four other US regulatory agencies in charge of overseeing finance, armed with less formidable (but still sharp) gardening shears. The message was clear: The Bush administration was tearing down the final pieces of the New Deal regulatory wall.2
With the financial panics and stock manias of previous decades in mind, the architects of the New Deal created a regulated financial system in the United States that established a firewall between investment bankers on the one hand and savings banks and savings and loans on the other. For five decades, the United States was free from the bank panics that had plagued the economy before the New Deal. By the 1970s, however, the lessons of the 1920s had been forgotten. Influenced by contributions from the financial industry, the US Congress under Democratic and Republican presidents alike dismantled the system the New Deal had built to stabilize American finance. The result was predictable—larceny and losses on a colossal scale. The savings and loan meltdown in the 1980s was followed by the subprime mortgage crisis of the 2000s, another result of excessive financial deregulation.
The destruction of the New Deal order was not limited to the financial sector. Managerial capitalism gave way to a new financial-market capitalism. Corporate raiders successfully challenged the autonomy of American corporate managers, whose personal interests were increasingly aligned with the short-term interests of investors by means of the stock options that played an ever-larger role in their compensation. At the insistence of Wall Street investors, vertically integrated industrial corporations were broken into pieces. Former brand-name corporations became mere “brands,” slapped onto products assembled in East Asia and other regions that nurtured rather than neglected their manufacturers. Squeezed by foreign competition, companies sought to raise their profit margins by ending the postwar truce with organized labor and smashing unions. By 2000, private-sector union membership in the United States had fallen to the lows of the early 1900s. A flood of unskilled immigration, legal and illegal, put additional downward pressure on the wages of the nonunionized workforce. The United States discovered a category unique among industrial nations—the “working poor,” millions of full-time workers who could not subsist on a minimum wage that the federal government had allowed inflation to turn into a starvation wage.
The dismantling of the big corporations brought with it the dismantling of the always-inadequate employer-based benefit system that complemented government insurance programs like Social Security and Medicare. Some companies could not afford to pay their pensions, while others shifted the risks of retirement investment to employees by replacing defined-benefit with defined-contribution pension plans.
Utility regulations, too, were dismantled. In transportation and energy, deregulation brought back the ills that had prompted regulation in the first place, like chronic industry-wide bankruptcies among the airlines and volatile prices for electricity. The Golden Age of infrastructure spending between the 1930s and the 1960s gave way to an era of declining investment, crumbling bridges and barge-canal locks, and mounting traffic and freight congestion.
The era between the 1970s and the bubble economy that followed the end of the Cold War has yet to find a name. The most appropriate is the Great Dismantling.
THE MANAGERIAL ELITE: FROM ENGINEERS TO FINANCIAL OFFICERS
The prosperity of the US economy following World War II disguised a hidden rot in the culture of American management. While other industrial nations relentlessly developed their capacity to catch up with and challenge America’s industrial leadership, as the United States had done in its race to catch up with industrial Britain in the nineteenth century, the nature of management in America’s major corporations changed for the worse.
As we saw in the last chapter, following the breakdown of the NIRA experiment in business-government collaboration, in the late 1930s, Roosevelt sided with liberals in the Brandeisian antitrust tradition who were hostile to corporate concentration. This led to a flurry of antitrust suits in the late 1930s and again in the late 1940s, interrupted only by the war. In 1950, Congress passed the Celler-Kefauver Act, also known as the Anti-Merger Act. Intended to close loopholes in the Clayton Antitrust Act, the Celler-Kefauver Act made it more difficult for corporations to buy their rivals or suppliers in the same industry. However, it was relatively easy for corporations in different industries to merge.
What followed, in the late 1950s and 1960s, was one of the oddest episodes in the history of the American economy: the conglomerate movement. Just as the outlawing of cartels but not mergers prompted the great merger wave of the 1900s and 1920s, so the Celler-Kefauver Act had a surprising and harmful effect on US industrial organization that was not anticipated by its backers. Forbidden to grow by absorbing firms in their own fields, many American corporations expanded by annexing companies in completely unrelated lines of business. Unlike the merger waves of the 1890s to the 1900s and the 1920s, which concentrated ownership in particular fields, the merger wave of the 1950s and 1960s produced conglomerates made up of units in entirely different lines of business. Between 1950 and 1978, Beatrice Foods made 290 acquisitions, W. R. Grace 186, and International Telephone and Telegraph (IT&T) 163.3 W. R. Grace, which began as a chemical company, went into western wear, banking, lumber and timber, fire extinguishers, Mexican restaurants, chemicals and sports teams, battlefield radar, and, last but not least, Hostess Twinkies snack cakes.4
Not since Supreme Court decisions in the late nineteenth century inadvertently set off the great merger wave, by outlawing industry-wide cartels while tolerating industry-dominating mergers, had legal factors inadvertently altered the very structure of American industry on such a dramatic scale.
The share of all corporate assets owned by the 200 largest manufacturing companies increased from 47 percent in 1947 to 59 percent in 1967. Between 1948 and 1955 only 10 percent of acquired assets resulted from cross-industry conglomeration; by 1972–1979 that figure had risen to 46 percent. Among the 148 corporations that made the list of the top 200 leading corporations in both 1950 and 1975, the mean number of lines of business controlled by the corporation grew from 5.2 to 9.7.5
The architects of the conglomerates unwittingly blazed the trail for later “takeover artists.” By taking over a company with a low price-earnings ratio, a conglomerate could get a one-time boost in its reported earnings. Conglomerate-building corporations financed their takeovers first with cash, then with stock, and then, by the 1970s, increasingly with debt. The debt-to-equity ratio in manufacturing rose from 0.48 to 0.72 between 1965 and 1970.6
The harm done by the conglomerate movement in the wake of Celler-Kefauver was compounded by the change in corporate culture that it produced. Many of the CEOs of the great industrial corporations of the mid-twentieth century, like Alfred Sloan and Charles Wilson, had engineering degrees. In the new conglomerates, corporate leadership increasingly passed to chief financial officers (CFOs) and CEOs with backgrounds in finance who did not know anything about any particular product or industry. “Companies do not make money,” the influential management theorist Peter Drucker insisted. “Companies make shoes.” That might have been the motto of the triumphal managerial capitalism of the middle decades of the twentieth century. But in the second half of the century, more and more managers as well as investors disagreed. It did not necessarily matter what a company made as long as it made money.
Even as Japan, South Korea, Taiwan, Germany, and other industrial countries focused on developing world-class manufacturing, the leaders of many American manufacturing companies neglected the making of superior products in order to pursue short-term gains from mergers and financial manipulation. Beginning in the 1970s, these trend lines would converge and the American economy would pay a heavy price.
THE 1970S: DECADE OF CRISIS
The Golden Age of postwar capitalism ended in the 1970s. The decade was a time of geopolitical and economic disasters that shook the faith of Americans in the New Deal order constructed by Roosevelt and Truman, ratified by Eisenhower, and expanded by Kennedy and Johnson. The United States abandoned its allies in Indochina and withdrew from Vietnam, and the global market experienced oil shocks following the Arab-Israeli War of 1973 and the Iranian Revolution of 1979.
The postwar settlement in the world and at home was clearly falling apart. In the United States and other Western democracies, the post-1945 truce between labor, business, and government was threatened by inflation. The rate of economic growth in every industrial democracy slowed abruptly in the 1970s, reviving only in the late 1990s. At the same time, the demands of organized labor for higher wages added “wage-push inflation” to the causes of inflation. Inflation reached a peak of 18 percent in the last years of the Carter administration.
Assuming that high levels of economic growth would continue forever, Americans in the 1960s were entranced by technological optimism, envisioning futuristic cities and colonies on the moon and Mars in the near future. But America’s temporary supremacy in world trade came to an end with the revival of Japan and Western Europe. From 1973 to 1996, American economic growth slowed dramatically compared to the previous two decades. All the industrial economies suffered a productivity slowdown around the same time, perhaps as a result of the maturity of second-industrial-revolution technologies like electricity and automobiles. Whatever the reason, the result was stagflation—a toxic combination of declining corporate profit margins and spiraling wage-price inflation.
AMERICA’S EMBATTLED MANUFACTURERS
The prosperity of postwar America rested on foundations that grew weaker over time. One was the assumption that free trade was unproblematic. The unchallenged supremacy of the US economy in the world could last only until America’s industrial rivals recovered from the devastation of World War II—in large part because of generous American help.
Recognizing that the policy of import-substitution protectionism was no longer necessary, now that the US was the dominant global economy, Congress passed and President Roosevelt signed the Reciprocal Trade Agreements Act (RTAA) in 1934. While Roosevelt did not share the utopian belief in peace through free trade, he favored an integrated global trading system that would ensure all nations access to markets and raw materials without the need for war or colonialism. During World War II, the United States led the way in creating the World Bank, the International Monetary Fund (IMF), and other institutions of what came to be known as the Bretton Woods system, after the hotel in New Hampshire where many of the negotiations took place. To get the shattered economies of Europe going again, the United States provided Europe with financial aid under the Marshall Plan.
The postwar global trading system, however, never functioned as designed. With the outbreak of the Cold War, the Soviet Union and its satellites formed their own economic bloc. During the Cold War and after it, the United States subordinated its preference for a liberal global trading system to the imperatives of keeping its European and Asian allies within the American-led alliance, even if that meant sacrificing the interests of particular American industries.
The United States made one-way trade concessions in order to secure the cooperation of other countries needed as allies, first in the struggle against the Axis powers and then against the Soviet bloc during the Cold War. Between 1939 and 1943, the United States offered unbalanced trade concessions to Iceland, a number of Latin American countries, and Turkey, in order to lure them away from Nazi Germany and its allies.7
Following World War II, the United States provided $33 billion in nonmilitary aid between 1946 and 1953.8 Because of the limits to foreign aid imposed by American public opinion, American foreign-policy makers suggested what the historian Alfred E. Eckes calls a policy of “trade, not aid.”9 In an unpublished page for his memoirs, President Harry Truman wrote: “American labor now produces so much more than low-priced foreign labor in a given day’s work that our workingmen need no longer feel, as they were justified in feeling in the past, the competition of foreign forces.”10 In December 1946, the State Department instructed its officials to help the countries in which they were stationed to export to the United States: “In general, a Foreign Service officer should give the same attention to serving United States importers as he would give to United States exporters.”11 A commission on trade headed by Daniel W. Bell, the former budget director of the Roosevelt administration, proposed to increase exports of manufactured goods even if that led to unemployment for an estimated sixty thousand to ninety thousand American workers: “In cases where choice must be made between injury to the national interest and hardship to an industry, the industry [should] be helped to make adjustments by means other than excluding imports—such as through extension of unemployment insurance, assistance in retraining workers, diversification of production, and conversion to other lines.”12 President Eisenhower argued that measures “which tend to drive away an ally as dependable as Great Britain . . . do much more harm in the long run to our security than would be done by permitting a US industry to suffer from British competition.”13
Not content to sacrifice American industries to help Cold War allies prosper, the United States in the 1960s sought to use access to the American market to drive development in the postcolonial world. In 1963, President Kennedy called on the United States and its allies to open “our markets to the developing countries of Africa, Asia and Latin America.”14 Kennedy warned an AFL-CIO convention in Dallas that protection of US industry risked “driving potential trading partners into the arms of the Soviets.”15 In March 1964, a Johnson administration task force on foreign economic policy called for a “war on poverty—worldwide. . . . The whole country would be the gainer if, over time, we could shift resources away from textiles, shoes and other unsophisticated manufactures into more advanced items where we have a comparative advantage . . . [such as] capital, scientific and technological research, skilled and educated labor.”16
One of the few government officials who dissented from the prevailing orthodoxy was George Humphrey, Eisenhower’s secretary of the Treasury. A veteran of the mining industry, Humphrey declared at a 1954 cabinet meeting, according to another participant, that “we were protectionists by history and had been living under a greatly lowered schedule of tariffs in a false sense of security because the world was not in competition. That has changed now and the great wave of competition from plants we had built for other nations was going to bring vast unemployment to our country.”17
By the 1970s, America’s policy of tolerating foreign mercantilism was harming American industry. US manufacturers like Detroit’s Big Three automakers were being battered by imports from Europe and especially from Japan, which had devised a superior just-in-time (JIT) system of assembly-line production. America’s embattled automakers claimed correctly that they were excluded from the Japanese automobile market by various subtle forms of Japanese protectionism. But Japan’s discrimination against foreign imports was not the only reason for its success. Another reason that the US automakers suffered in competition with Japanese firms was a lack of scale. Even though they were smaller than the big American carmakers, the leading Japanese car companies made cars in plants that were much larger. In the mid-1980s, the average plant for GM and Ford produced 182,000 cars per year, while the average plant of one of the four largest Japanese car companies produced around 460,000 cars per year.18
American industry had contributed to its own problems. While the Japanese improved production processes like JIT manufacturing, and while the Germans continued their tradition of fine craftsmanship, American manufacturers rolled out shoddy (and, in the case of cars, dangerous and polluting) products and relied on advertising to inspire American consumers to buy the latest models. High oil prices following the OPEC embargo of 1973 damaged the US automobile industry by creating an American market for fuel-efficient Japanese compact cars. The bitterly adversarial tradition of US labor-management relations made it hard for the United States to compete against Japanese companies, with their paternalistic labor relations, and against German companies, in which codetermination by managers and workers was enshrined in law. An even more significant cost of increased foreign rivalry was the inability of businesses to pass higher labor costs on to consumers, now that profits were constrained by international competition.
The major industrial countries—the United States, the European Economic Community (EEC), and Japan, as well as Canada, Australia, Switzerland, Sweden, Norway, Austria, and Finland—grew at a rate of 4.9 percent a year from 1950 to 1970, compared to 2.6 percent a year from 1870 to 1913 and only 1.9 percent a year between 1913 and 1950.19 In part because they were more advanced, the United States (3.5 percent) and Britain (2.8 percent) grew much more slowly than France and Finland (5 percent) and the former Axis countries Italy (5.6 percent), Germany (6.3 percent), and Japan (9.8 percent).20 In the 1980s, this inspired the rueful quip, “The Cold War is over, and Germany and Japan won.”
EAST ASIAN MERCANTILISM: THE JAPANESE MIRACLE
Japan was the most important beneficiary of the US policy of tolerating mercantilism among America’s Cold War protectorates. Without asymmetric trade between the United States and Japan, there would have been no Japanese economic miracle between the 1960s and the 1980s. The average Japanese growth rate of 10 percent in the 1960s dropped to 5 percent in the 1970s and 4 percent in the 1980s, before plunging to 1.5 percent in the “lost decade” of the 1990s.
During US-Japanese trade negotiations in 1955, a Japanese negotiator remarked that “if the theory of international trade were pursued to its ultimate conclusions, the United States would specialize in the production of automobiles and Japan in the production of tuna; . . . such a division of labor does not take place . . . because each government encourages and protects those industries which it believes important for reasons of national policy.”21 Following those negotiations, the United States granted $37.2 million in trade concessions and received only $6.4 million from Japan. While Japan, following classic mercantilist strategy, reduced its duties on raw materials and foodstuffs, the United States reduced its duties on Japanese manufactured imports like electrical products, apparel, and glassware.22 In addition to making its own trade concessions to Japan, the United States offered access to the US market for other countries if they opened their own markets to Japanese exports.23
Following the end of the US occupation in 1952, Japan carried out a long-term industrial policy, using methods such as direct subsidies, preferential tax policies and off-budget finance, credit channeled to targeted industries, controls on foreign exchange and imports, encouragement of domestic cartels, and informal bureaucratic guidance.24 In agriculture and strategic manufacturing, Japan used tariffs, quotas, and informal barriers to limit imports. Money was funneled from Japanese consumers to Japanese corporations by means of markups over the world price for many items. In the 1990s, it was estimated that the Japanese paid more than 600 percent of the world price for radios and televisions.25 Credit allocation was another key element of Japanese mercantilist industrial policy. The Postal Savings Bank offered consumers poor returns on their savings while steering their deposits to Japanese manufacturing and infrastructure companies.
The Japanese government in 1960 declared that computing was a strategic industry. The Ministry of International Trade and Industry (MITI) ordained that the role the Pentagon played in the United States in procuring and promoting a computer industry would be played by Japan’s Nippon Telegraph and Telephone (NTT), a government monopoly. The Japanese government pressured International Business Machines (IBM) into licensing all its technology to Japanese producers, and compelled IBM’s Japanese subsidiary to agree to export 60 percent of its production and to submit new models to the government for approval. In addition, the Japanese government imposed a 25 percent tariff on imported computers, which included IBM computers produced inside Japan.26
As a result of these mercantilist policies, Japan’s merchandise trade surpluses with the United States and other countries ballooned. Between 1980 and 1989, Japan accounted for 38 percent of the US current account deficit.27 In 1975, the United States absorbed 12 percent of the world’s manufactured exports; by 1987 that had grown to 22 percent. Japan’s economy was half the size of the US economy, and yet Japan admitted to its protected national market only 2 percent of the world’s manufactured exports in 1975 and merely 4 percent in 1987.28
One Japanese CEO explained: “We don’t export because it’s profitable. We export because it is national policy.”29
EAST ASIAN MERCANTILISM: THE LITTLE TIGERS
The so-called Little Tigers—South Korea, Taiwan, and Singapore—employed variations of mercantilism as part of their own export-oriented strategies of national economic development. Emulating Japan, they exploited their importance to the United States in its Cold War competition with the Soviet Union and China in order to gain access to American consumer markets and American technology, while protecting their domestic markets and carrying out state-guided industrial policies.
The Korean “miracle” of the 1960s through the 1980s was an example of modern mercantilism in action. Like Japan, South Korea relied on one-way access to US consumer markets and on technology transfer by US corporations. The US Agency for International Development (USAID) provided financial and technical assistance for transferring US technology to the Chungju Fertilizer Company, created in 1960. When the first South Korean oil refinery was established in 1967 by the Korea Oil Corporation (KOCO), Dow Chemical, as part of a joint venture, agreed to share its technology, train Korean engineers, and use them to replace American engineers as soon as possible. All the American engineers had been replaced by 1976, and in the 1980s Dow sold off its interests in South Korea.30
Like Japan, South Korea quickly moved up the value chain from garments to high-value-added industries like electronics and automobiles. South Korea became even more mercantilist following the military coup that brought General Park Chung-hee to power in 1961. In the 1970s, South Korea reversed financial liberalization, so that its government-controlled banking sector could make “policy loans” to targeted strategic industries. The military government controlled foreign exchange, to the point of mandating the death penalty for foreign-exchange violations.31
In addition to using the state-controlled financial sector to steer credit toward strategic industries, the South Korean government also used state-owned enterprises (SOEs) such as the state-owned steelmaker POSCO. In South Korea, the equivalents of Japan’s keiretsu were the chaebol, or business families: several dozen large groups of companies. Before the 1997 Asian financial crisis, government-chaebol collaboration was central to South Korean industrial policy.
Taiwan, another former Japanese colony, also emulated Japanese mercantilism. Taiwan used tariffs, nontariff barriers, and channeled credit to promote targeted manufacturing sectors. Like other Asian mercantilist countries, Taiwan relied on a state-controlled credit system and lax intellectual-property protection.
As city-states, Singapore and Hong Kong were necessarily more open, lacking hinterlands and large internal markets of their own. They pursued different but equally mercantilist policies of export-driven economic development.
NIXONIAN REALPOLITIK: THE ROAD NOT TAKEN
Richard Nixon is often thought of as a precursor of Ronald Reagan, but that honor belongs to Jimmy Carter, whose administration first adopted economic neoliberalism and began the large-scale dismantling of the New Deal system. Nixon was the last New Deal president. The policies of his administration can be understood as an attempt to resolve the contradiction between the New Deal order at home and the global economy by sacrificing America’s post-1945 hegemonic grand strategy in favor of a more overtly nationalist US foreign military and economic policy.
Nixon inherited a global system in which the bills were coming due for the obligations undertaken by his postwar predecessors. In his inaugural address, Kennedy had glibly declared: “Let every nation know, whether it wishes us well or ill, that we shall pay any price, bear any burden, meet any hardship, support any friend, oppose any foe, in order to assure the survival and the success of liberty.”32 Nixon, presiding over the disengagement of the United States from the involvement in Vietnam that Kennedy had begun and Johnson had escalated, declared that in the future America’s allies would be expected to rely chiefly on themselves for their defense—the Nixon Doctrine. Nixon and his national security adviser, Henry Kissinger, argued that the bipolar world had been replaced by a multipolar world, in which the United States should pursue realist policies to maximize its own strategic interests.
Nixon’s foreign economic policy was a form of nationalistic Realpolitik. He found a like-minded ally in Treasury secretary John Connally, who once confided to his deputy Pete Peterson, “The foreigners are out to screw us.”33 Unable to reconcile America’s role as global economic hegemon with its national interests, Nixon and Connally abandoned that role in favor of American self-help.
In 1971, the United States ran its first trade deficit of the twentieth century, spending more on imports than it earned from exports. Nixon retaliated against Japanese mercantilism by imposing a 10 percent surcharge on imports, an act remembered in Japan as the “Nixon shock.” Lacking enough gold to pay foreign claims, the Nixon administration suspended the convertibility of the dollar to gold, at first temporarily and then permanently in 1973. Freed from the gold standard, the United States devalued the dollar.
Like Franklin Roosevelt, who dismissed the gold standard as one of a number of “fetishes of international bankers,” Nixon was less interested in the global system than in American national interests.
Had Nixonian nationalism prevailed, the New Deal order might have been saved by a tailoring of America’s military and trade policies to domestic objectives. But beginning with Jimmy Carter and Ronald Reagan, the United States took a different path. Instead of modifying New Deal regulations, neoliberal Democrats and conservative Republicans alike dismantled them and replaced them in many cases with no regulations at all.
THE COUNTERREVOLUTION AGAINST NEW DEAL LIBERALISM
In the 1970s, the public’s loss of faith in authority included a growing hostility to big business. The Harris polling agency found that the percentage of Americans who expressed “a great deal of confidence” in the heads of large corporations dropped from 55 percent in 1966 to 15 percent in 1975. In a July 1975 Gallup poll of public confidence in institutions, business came in last at 34 percent, below organized labor at 38 percent, and Congress at 40 percent.34
For its part, American business in the 1970s glared back at American society and did not like what it saw. “The have-nots are gaining steadily more political power to distribute the wealth downward,” one executive complained at a meeting in a series hosted by the Conference Board in 1974 and 1975. “The masses have turned to a larger government.”35 Another participant complained: “Our enemies want cradle-to-grave security for everyone.”36 One business executive lashed out at democracy itself: “One man, one vote will result in the failure of democracy as we know it.”37
Conservative American business elites had long demonized American liberalism by comparing it to German National Socialism or Soviet Communism and, when those canards seemed excessive, accusing American liberalism of being French was sure to win applause. The postwar stagnation of Britain provided the American Right with another nightmare. “We are following England to disaster, trying to beat them where they are going,” one businessman complained at a Conference Board meeting. Another warned: “England is our future over the next decade. First, they got the dole, then we got our relief. First, they had socialized medicine, then we got Medicaid and Medicare. First, they had nationalized industry, now our institutions are in danger of being taken over by the government.”38
American companies increasingly offshored production to countries where wages were low, workers were nonunionized and sometimes repressed, and where governments offered subsidies and other incentives for foreign direct investment. By 1980, more than 80 percent of semiconductors were produced by US multinationals in foreign countries including Singapore, South Korea, Taiwan, and Mexico.39
The business and financial sectors also poured funding into the conservative intellectual movement and conservative media outlets. From the 1940s to the 1960s, a conservative and increasingly mathematical and unworldly version of Keynesianism, described by the British economist Joan Robinson as “bastard Keynesianism,” came to dominate US economics departments. Mathematical economists in the tradition of Paul Samuelson, author of the leading textbook in the field, pushed out institutional economists in the tradition of John Kenneth Galbraith, who were more interested in studying the real world than in devising pseudoscientific models. The increasingly arcane interests of the academic Keynesians left them disarmed against a counteroffensive by free-market conservatives, who spread their ideas in the mass media as well as in academic seminars.
The leader of the free-market Chicago School was Milton Friedman, whose Nobel Prize for economics in 1976 marked the rightward shift of economic thinking. With Free to Choose, a bestselling book that became a television series, Friedman and his wife, Rose, popularized libertarian economic ideas that had been marginalized in the United States and other industrial countries for a generation after the Depression. Conservative and libertarian think tanks, funded by rich donors and business interests, and the business press, most notably the Wall Street Journal, spread the message: economic problems are almost always the fault of the government, not the market. President Ronald Reagan gave voice to this dogma in his first inaugural address, when he declared that “government is not the solution to our problem; government is the problem.”40
Reagan and other conservatives adopted what became known as “supply-side economics,” based on the erroneous belief that, by increasing the rewards to effort, tax cuts would generate more than enough revenue through growth to compensate for the cuts. Supply-siders were fond of pointing to the Kennedy-Johnson administration’s use of a tax cut in 1964 to stimulate the economy. Galbraith presciently told the Joint Economic Committee of Congress in 1965 of his misgivings about the tax cut of 1964: “I was never as enthusiastic as many of my fellow economists over the tax reduction of last year. The case for it as an isolated action was undoubtedly good. But there was danger that conservatives, once introduced to the delights of tax reduction, would like it too much. Tax reduction would then become a substitute for increased outlays on urgent social needs. We would have a new and reactionary form of Keynesianism with which to contend.”41
While besieged by a newly confident and aggressive free-market Right, New Deal liberalism was attacked from the Left by Marxists and environmentalists. Marxist scholars denounced “corporate liberalism,” declaring that New Deal liberals like Roosevelt and Johnson were pawns of the capitalist class and rewriting history to make industrial regulation a conspiracy by the industries themselves.
Even more influential was the increasingly powerful environmentalist movement. Franklin Roosevelt, like his cousin Theodore, belonged to the conservationist tradition, which favored the “wise use” of natural resources for purposes of economic growth and recreation. Liberal conservationism was eclipsed in the 1970s and 1980s by radical environmentalism, which drew on nineteenth-century romanticism. For radical environmentalists, the dams and highways that symbolized the New Deal were unnatural abominations that desecrated nature. Influenced by the British writer E. F. Schumacher, author of Small Is Beautiful, environmentalists on the Left condemned automobiles, electrical grids, suburbs, mass consumption, and industrialized agriculture and industry, and fantasized about a postindustrial utopia of locally grown food and “soft energy” sources like solar power and wind energy.
The Limits to Growth, a report published by the Club of Rome in 1972, convinced many Americans of the existence of an imminent crisis of resource depletion. When resource exhaustion turned out not to be a problem, the apocalyptic imagination of the environmentalist movement invoked other threats—peak oil and global warming—to justify its critique of industrial society and consumerism. In 1982, Hugh G. Parris, manager of power for the TVA, gave a speech entitled “The New TVA: Managing in a No-Growth Environment.” The TVA began to emphasize energy conservation as well as energy production. In a move that would have appalled Roosevelt and Lilienthal, the TVA offered more than a thousand poor households in East Tennessee and East Georgia a premodern technology: wood stoves.42
By the 1960s and 1970s, New Deal policies were being criticized from across the political spectrum. The stage was set for the Great Dismantling.
JIMMY CARTER: THE FIRST NEOLIBERAL
The Great Dismantling began during the administration of Jimmy Carter, not Ronald Reagan. Richard Nixon, a veteran of the Office of Price Administration who expanded government and fought inflation by imposing price controls, was a Modern Republican like Dwight Eisenhower who did not challenge the basic assumptions of the New Deal. If Nixon was the last New Dealer, Carter was the first neoliberal in the White House.
In 1980, the Carter administration showed a belated interest in strategic national economic policy as an alternative to the free-market ideology that became known as neoliberalism. The Treasury Department undertook a study of the Reconstruction Finance Corporation, while the State Department commissioned studies of industrial policy in Japan, Germany, France, and Sweden. In August 1980, Carter announced an economic revitalization plan, complete with a national development bank, an Economic Revitalization Board, targeted policies for particular sectors, and tripartite committees for major industries.43 Bold on the surface, Carter’s plan represented the most comprehensive vision of American developmental capitalism since the New Deal. But the plan was hastily drawn up to appease liberal and labor critics of the administration and it was forgotten after Carter was defeated by Reagan in 1980.
Many otherwise liberal Democrats viewed deregulation as a way to combat runaway inflation. From around 1 percent in 1960, inflation rose to 12 to 14 percent in 1979–1980. To this day, nobody is quite sure why. One factor may have been the refusal of the Johnson and Nixon administrations to raise taxes to pay for the Vietnam War and expanded domestic spending.
The oil shocks of the 1970s provided an additional source of trauma as well as of inflation. Until the 1970s, the Texas Railroad Commission, by regulating the output of Texas oil wells, had been able to control the world price of oil. The embargo imposed by the Arab members of OPEC on the United States and other allies of Israel following the Arab-Israeli War of 1973 demonstrated that control of the global oil price had passed to other oil-producing countries. Saudi Arabia, America’s ally since the 1940s, nationalized the holdings of American oil companies. Increasingly the United States seemed to be, in the words of Richard Nixon, “a pitiful, helpless giant.” Another oil shock followed in 1979 with the Iranian revolution.
Inflation became endemic as part of the wage-price spiral, as unionized workers anticipating further inflation won wage increases from employers, who passed them on to consumers via higher prices, with accommodating central banks like the Federal Reserve under Arthur Burns printing money. Nixon had failed to stop inflation with wage-and-price controls and Gerald Ford had been ridiculed for asking Americans to wear buttons with the motto “WIN (Whip Inflation Now).”
By the time that Jimmy Carter was inaugurated in 1977, many economists and policymakers hoped that free-market competition in infrastructure industries such as transportation and energy could lower the cost of goods and reduce inflation. The Carter administration treated deregulation as part of its inflation-fighting efforts. The April 1977 anti-inflation program of the administration combined a two-year extension of Nixon’s Council on Wage and Price Stability, which oversaw wage-and-price controls, with a call for deregulation of communications, trucking, and other industries that had been regulated since the New Deal.44
THE GREAT DISMANTLING
The result, in the years from Carter to Clinton, was the rapid dismantling of most of the system governing the US economy erected during Roosevelt’s presidency. The Carter administration supported the deregulation of airlines (1978), rail (1980), and trucking (1980); the bus industry was deregulated in 1982. Telecommunications was partly deregulated in 1996; in 1999 came the Gramm-Leach-Bliley Act, repealing the separation of commercial from investment banking established by the Glass-Steagall Act of 1933.
Deregulation decimated labor unions in airlines, trucking, and telecommunications—a result that was welcomed by many of the proponents of deregulation, who blamed inflation on unions whose wage costs were passed along by businesses in the form of higher prices. In 1979, 8.6 percent of the private workforce was found in the highly regulated, highly unionized transportation and utility sectors.45 As a percentage of the private-sector workforce, unionized labor fell from 20 percent in 1980 to 10 percent in 1990 to less than 7 percent in 2010.
In some industries, a case could be made for deregulation. The regulated utility system of the New Deal era sometimes slowed the diffusion of new technologies and preserved archaic distinctions among industries. The organization of the telephone industry dominated by AT&T made sense as long as wire-based telephony created natural regional and continental monopolies. But the case for monopoly was undermined by technological advance in the area of radio communications. Like so many technological innovations, this emerged from the US military. During World War II, the US Army Signal Corps, seeking an alternative to wire or cable telephony, developed microwave transmission. This technology diffused to the private sector and, in the 1960s, Microwave Communications Inc., which became MCI, sought to compete with AT&T for long-distance service. In 1974, the Justice Department filed an antitrust suit against AT&T. The suit accused AT&T, among other things, of limiting competition by cross-subsidizing its state units. But state regulators had long insisted on low local-service rates, which could only be delivered by a policy of cross-subsidies. In other words, the federal government punished Ma Bell for doing what state governments forced her to do.46 Under a consent decree in 1982, the company divested itself of its local operating companies—the Baby Bells—and retained only its long-distance, manufacturing, and research components. In succeeding years, cell phones rapidly replaced land lines and continuing technological innovation blurred the boundaries among telephone, cable, satellite radio, and Internet providers.
In other cases, such as airlines and electricity, deregulation turned healthy industries back into the “sick industries” that regulation had been designed to cure. The Civil Aeronautics Act of 1938 had turned the US airline industry into a regulated utility under the Civil Aeronautics Board (CAB). Like other regulated industries, the airline industry was exempt from antitrust laws so that airlines could set rates together. Among the enlightened regulations of the CAB were prohibitions on charging more for short flights than long ones and rate floors to prevent “fare wars.” But by the 1970s, the example of independent local airlines like Southwest in Texas persuaded many analysts that airlines could not only increase routes but also combat inflation by lowering fares. The Carter administration supported the Airline Deregulation Act of 1978, which eliminated federal controls over entry, routes, schedules, financing, and fares. In 1984, the CAB itself was abolished.
The result of airline deregulation has been a chronically sick industry. Other than Southwest, few new entrants to the market have survived the Darwinian struggle. No airline went bankrupt in the era of regulation; since deregulation, more than 160 airlines have gone out of business, including Pan Am, TWA, Braniff, and Eastern. Only one of 58 airline companies created between 1978 and 1990 survived past 2000.47
As a result of mergers and alliances between particular cities and particular airlines that turned airports into “fortress hubs,” competition diminished. By 2000, there were a dozen “dominated” hubs, where one airline had more than 50 percent of local passengers or two airlines had more than 60 percent.48 Evidence of predatory monopoly behavior was abundant. The General Accounting Office (GAO) found that fares were higher at dominated hubs.49
In 1981, Paul Stephen Dempsey accurately predicted three phases in the future of the US airline industry following deregulation: “In the first, price and service competition are increased, carriers become innovative and imaginative in the types of price and service combinations they offer, and consumers thereby enjoy lower priced transportation.” In the second stage, “the intensive competition they are forced to endure under deregulation will force many carriers to float ‘belly up’ in bankruptcy.”
The final outcome, Dempsey predicted, would be worse than the preregulation system: “Stage three of deregulation will constitute the ultimate transportation system with which the nation is left. A monopolistic or oligopolistic market structure will result in high prices, poor service, and little innovation or efficiency. . . . Small communities will receive poorer service and/or higher rates than they enjoyed under regulation. . . . In the end, the industry structure created by the free market may be much less desirable than that which was established by federal economic regulation.”50
Another deregulatory disaster occurred in the electrical utility business. Because electrical utilities are so widely considered to be natural geographic monopolies, free-market libertarians saw them as the ultimate challenge—if electricity can be deregulated, anything can. Libertarians proposed that power providers other than utilities should be allowed to sell electricity on utility lines.
Allowing wholesale prices to be set by the market while limiting consumer-price increases created opportunities for market manipulation that were quickly exploited by Enron, a Houston-based energy company. A modern-day Samuel Insull, Ken Lay, the CEO of Enron, was convicted of fraud but died before he could be sentenced, after his manipulative energy empire crumbled, exposing a shocking variety of criminal practices. According to one government report, Enron used market-manipulation scams with secret internal names like Black Widow, Get Shorty, Big Foot, Ricochet, and Death Star. The Death Star scam involved tricking state utilities into believing that there was congestion, forcing the state governments to pay “congestion fees” to Enron. The result was a series of “rolling blackouts” in California in 2000 and 2001.51 In testimony to the US Senate in 2002, the chair of the California Power Authority, S. David Freeman, explained: “There is one fundamental lesson we must learn from this experience: electricity is really different from everything else. . . . And a market approach for electricity is inherently gameable.”52
FINANCE AND THE GREAT DISMANTLING
Dismantling New Deal–era regulations had even more dramatic effects in the area of finance. As we have seen, the New Deal regulatory system continued the Jeffersonian tradition of protecting small “country banks,” but it had the beneficial, if incidental, side effect of averting “ruinous competition” among banks and other financial institutions that might lead to reckless practices capable of endangering the economy. Thanks to Glass-Steagall and federal and state restrictions on interstate and intrastate branch banking, the US banking system was dominated by small local and regional banks. Between 1945 and 1970, the number of commercial banks dropped only slightly from 14,126 to 13,690; as late as 1991, there were nearly 12,000 banks.53 From the late 1930s until deregulation began in the 1980s, the failure rate of banks and other financial institutions was extraordinarily low.
In the 1970s, money-market funds began to compete for customers with banks. Between 1977 and 1989, the share of American households with savings accounts shrank from 77 percent to 44 percent, while the bank share of household financial assets dropped from 30 percent in 1989 to only 17 percent in 2004.54 In response, commercial banks, alarmed by falling profits, lobbied Congress to repeal restrictions on their activities. In a short period of time, the walls between commercial banking, investment banking, thrifts and savings and loans, and insurance companies were torn down. In 1978, the Supreme Court in Marquette National Bank v. First of Omaha Service Corp. ruled that a nationally chartered bank could charge out-of-state borrowers the interest rates permitted by its home state, effectively nullifying state usury laws. Rapidly, limits on interest rates disappeared nationwide.
In 1980, the Depository Institutions Deregulation and Monetary Control Act began the process of elimination of Regulation Q (established by the Glass-Steagall Act; see chapter 13) and state-imposed usury limits. The Garn–St. Germain Depository Institutions Act of 1982 authorized money-market accounts. The Competitive Equality Banking Act of 1987, by limiting the ability of nonbanks to expand, gave them an incentive to become bank holding companies. The Financial Institutions Reform, Recovery and Enforcement Act of 1989 restructured the regulation of savings and loans in the wake of their collapse. In 1994, the Riegle-Neal Act repealed the prohibition on interstate banking of the 1927 McFadden Act. Following the technically illegal merger of Citicorp and Travelers Insurance in 1998, the repeal of the Glass-Steagall ban on the combination of investment banks and commercial banks took place in 1999.
The result was a wave of mergers among banks and other financial institutions. Between 1990 and 2007, the number of FDIC-insured commercial banks dropped from more than twelve thousand to around seventy-five hundred. In 2007, the ten largest banks had 51 percent of banking industry assets and 40 percent of US domestic deposits.55
THE VOLCKER RECESSION
In 1943, the Polish economist Michal Kalecki predicted that attempts by governments to promote full employment would lead to inflation and provoke a backlash by business and investors: “The assumption that a government will maintain full employment in a capitalist economy if it knows how to do it is fallacious. . . . The workers would get out of hand and the ‘captains of industry’ would be anxious to ‘teach them a lesson.’ . . . A powerful bloc is likely to be formed between big business and rentier interests, and they would probably find more than one economist to declare that the situation was manifestly unsound.”56
Sir Alan Budd, chief economic adviser to British prime minister Margaret Thatcher, wrote: “The Thatcher government never believed for a moment that [monetarism] was the correct way to bring down inflation. They did, however, see that this would be a very good way to raise unemployment. And raising unemployment was an extremely desirable way of reducing the strength of the working classes. . . . What was engineered—in Marxist terms—was a crisis of capitalism which re-created the reserve army of labour, and has allowed the capitalists to make high profits ever since.”57
Three months after Thatcher became prime minister of the United Kingdom, Carter in 1979 appointed Paul Volcker, a former executive of Chase Manhattan Bank, as chairman of the Federal Reserve; reappointed by Reagan, Volcker served until 1987. As Carter’s domestic policy adviser Stuart E. Eizenstat explained, “Volcker was selected because he was the candidate of Wall Street. This was their price, in effect. What was known about him? That he was able and bright and it was also known that he was conservative. What wasn’t known was that he was going to impose some very dramatic changes.”58
Volcker’s Fed raised short-term interest rates to 19 percent, roughly 7 percentage points above the rate of inflation. The result was two sharp recessions in three years. To the surprise of most policymakers, foreign money poured into the United States to take advantage of the high interest rates. The dollar rose 40 percent against other currencies in the four years after 1981. The sharp increase in the value of the dollar against other currencies priced American exporters out of many foreign markets.
America’s manufacturing base was devastated. By late 1982, unemployment reached 10.8 percent. Factories were idle and neighborhoods and cities turned into wastelands. Volcker did more than anyone else to turn America’s manufacturing region into the derelict Rust Belt. The chairman of Atlantic Richfield Oil Company, Robert O. Anderson, a Republican himself, complained about Volcker and Reagan that “they’ve done more to dismantle American industry than any other group in history. And yet they go around saying everything is great. It’s like the Wizard of Oz.”59
By 1984, inflation was no more than 4 percent and remained at low levels thereafter. Was Volcker’s artificial recession really necessary to defeat inflation? Subsequent history suggests that a period of low inflation was about to begin anyway. As a factor in wage-price inflation, the bargaining power of workers was already diminishing rapidly, thanks to dwindling union membership. The contraction of the manufacturing sector as a result of productivity growth and offshoring, and the growth of employment in the nonunion service sector, were already under way. Another structural factor might have moderated inflation without an artificial near-depression by the mid-1980s. The oil shocks of the 1970s were followed by an oil glut in the 1980s. The real price of crude fell from its high in 1980 to low levels for a generation. If the structural underpinnings of long-term low inflation were already in place before Paul Volcker took office, the damage that the Volcker recession inflicted on millions of Americans and critical US industries might have been a tragic and avoidable mistake.
THE BIRTH OF THE BUBBLE ECONOMY
Although they had no intention of doing so, Volcker, Reagan, and other policymakers of the 1980s accidentally laid the foundation for the US bubble economy of 1980–2008. What the financier George Soros has called a “super-bubble” created an illusion of prosperity, built on debt that became unsustainable and led to the collapse of 2008. In the wake of the Volcker recession, policymakers and economists in the United States and other countries learned the wrong lesson. While governments vigilantly guarded against the slightest threat of consumer-price inflation, they pursued policies of low interest rates and credit-market deregulation that spawned the inflation of assets like stocks and real estate. The Volcker recession inspired shocked investors to shift from inflation-proof assets like gold and land to financial assets like stocks and bonds. The bubble economy was born.
The troubles of American manufacturing were compounded by a strong dollar that hurt American exporters even as it helped East Asian mercantilist nations and American consumers. Hoping that a higher yen would reduce Japan’s chronic trade surplus, Reagan’s Treasury secretary, James Baker—a Texan realist in the tradition of Nixon’s Treasury chief, John Connally—in 1985 negotiated a deal with the finance ministers and central-bank heads of the Group of Five or G5 (France, Germany, Japan, the United Kingdom, and the United States). Called the Plaza Accord after the New York hotel where the group met, the agreement devalued the dollar relative to the yen, whose value the Japanese had suppressed to help their exports. The policy failed to balance US-Japanese trade, however, because the Japanese government continued to pursue an export-oriented industrial policy by other methods. One was to lower domestic interest rates. This encouraged Japan’s export industries to invest in even more overcapacity. By making speculation cheap, the low-interest-rate policy also encouraged speculation that led to bubbles in real estate and stocks. The funds from Japan’s enormous trade surpluses permitted easy credit creation by Japanese banks, which led to a real-estate bubble in Japan. Prices rose far out of relation to fundamental values. At one point, the grounds of the Imperial Palace in Tokyo were said to be worth more than all the real estate in California. The collapse of the Japanese real-estate and stock-market bubbles produced decades of slow growth in Japan, beginning in the 1990s.
The policy of devaluing the dollar came to an end in 1987 with the Louvre Accord. From that point until the crisis of 2008, the United States followed a strong-dollar policy, to the benefit of Wall Street and to the detriment of productive export industries in the United States.
To the surprise of supply-siders, fiscal conservatives, and Keynesians alike, the Reagan deficits were easy to finance because of the inflow of foreign money, much of it from Japan. Dollars spent on Japanese imports were recycled in Japan to finance America’s public and private deficits.
Unwilling to raise taxes much higher or to engage in painful cuts in military and domestic spending, the Reagan administration learned to live with twin fiscal and trade deficits. According to R. Taggart Murphy, “The real causes of the imbalances were twofold: first, the U.S. federal deficit, which the Reagan administration had structurally embedded into the U.S. body politic, and secondly, the Japanese ‘developmental state’ system of national leverage, centralized credit allocation, and credit risk socialization . . . which required exports to close the economic circle.”60
The paradoxes involved in the interaction of free-market America and mercantilist Japan were remarked on by observers. The financial analyst David Hale wrote: “Future historians will probably note with more than ironic delight that at the end of the 1980s it was graduates of the Tokyo University Law School presiding over the Finance Ministry of the industrial world’s least deregulated economy who helped to rescue the Reagan administration and the international economic system from currency misalignments, trade imbalances, and financial crises produced by the fiscal and monetary policies of economics graduates of the University of Chicago.”61
The United States was now the world’s largest debtor and Japan was the world’s largest creditor. The Japanese central bank bought huge quantities of US government bonds to keep the yen artificially low, thereby subsidizing Japanese exports while hurting American exports. The same mercantilist technique would be adopted on a much larger scale by China a few decades later, with disastrous results for the economy of the United States and the world.
THE END OF THE NEW DEAL
The New Deal era came to an end in 1976, not 1980. The Age of Reagan should be called the Age of Carter.
Carter, not Reagan, pioneered the role of the fiscally conservative governor who runs against the mess in Washington, promising to shrink the bureaucracy and balance the budget. Early in his administration, Carter was praised by some on the Right for his economic conservatism. Reagan even wrote a newspaper column entitled “Give Carter a Chance.” The most conservative Democrat in the White House since Grover Cleveland, Carter fought most of his battles with Democratic liberals, not Republican conservatives.
Today’s Democrats would like to forget that supply-side economics was embraced by many members of their own party during the Carter years, while it was resisted by many old-fashioned fiscal conservatives in the GOP. As the economist Bruce Bartlett points out in a history of supply-side economics, “By 1980, the JEC [Joint Economic Committee of Congress] was a full-blown advocate of supply-side economics, despite having a majority of liberal Democrats, such as Senators Edward Kennedy (D-MA) and George McGovern (D-SD). Its annual report that year was entitled, ‘Plugging in the Supply Side.’ ”62
In defense spending, as in supply-side economics, Reagan continued what his predecessor in the White House had begun. The reversal in the post-Vietnam decline of American military spending began under Carter, following the shock of the Iranian revolution and the Soviet invasion of Afghanistan. Carter called for raising defense spending from a starting point of 4.7 percent of GDP to 5.2 percent of GDP in his final budget for fiscal year 1981. The Carter administration called for defense spending to rise even further by 1987 to 5.7 percent of GDP—only a little below the 6.2 percent at which it peaked in 1986.63
In hindsight, the neoliberal cure was far worse than the New Deal liberal disease. The maturity of the New Deal’s system of regulated, managerial capitalism coincided with the post–World War II boom and the greatest expansion of the middle class in American history. Consumer advocates, however, blamed it for stifling diversity, libertarians and conservatives claimed it choked off economic progress, and political scientists denounced it for spawning “interest-group liberalism.”
To the applause of liberal Democrats and conservative Republicans alike, the New Deal system of regulation was dismantled in one sector of the economy after another in the late 1970s and 1980s. The result was neither the flourishing diversity hoped for by liberal consumer activists nor the solid, sustainable economic growth promised by free-market ideologues. Instead, the result was the collapse of unions, the decline of private R&D, three decades of wage stagnation, and an economy driven by financialization, speculation, and rising debt rather than by productive industry and rising wages.