CHAPTER 3

MAKING THE WORST OF IT

Bad Policy, Denial, and Decline

It is not often that two billion new workers and trillions of dollars of excess cash appear on the world stage. In fact, nothing like this has ever happened before. So you can kind of understand it if policy makers in places like Washington, London, Frankfurt, and Tokyo don’t exactly know how to handle the situation.

Still, what’s remarkable is the degree to which these policy makers made all the wrong choices, starting with the United States, where regressive ideologies and bad public policy ensured that the global explosion of labor and capital would snowball into a disaster that decimated America’s industrial base, deeply indebted the country, fueled a record rise in inequality, and led to the financial crisis. This disaster is far from over.

So exactly how did the United States screw up?

The short answer is that America’s leaders gave up on managed capitalism at exactly the moment when we needed it most.

A longer answer is about the overreach of laissez-faire ideas, the perversion of conservative ideology, the ongoing embrace of supply-side economics, and the compounding damage that occurred when centrist Democrats fatefully misread the global economy.

I know that’s a long list of culprits. So let’s unpack the failed U.S. economic and policy response to oversupply in more detail.

THE GREAT REACTION

The rise of the emerging nations in the 1990s came at a fateful moment. By 1997, the United States had been engaged for over fifteen years in a “great reaction”—an economic experiment that rejected decades of post-1932 managed capitalism in favor of unrestrained liberty, laissez-faire government, and a previously rather eccentric backwater economic notion called supply-side theory.

Emphasizing consumption and easy credit as the principal means of achieving growth, this economic philosophy proved attractive to a population that had grown tired of activist government. For a baby boom generation that had seen opportunities diminish amid the runaway inflation and deindustrialization of the 1970s, what could be more seductive than a rejection of the scrimping, saving, and collective sacrifice of its “greatest generation” forebears in favor of more spending, more stuff, and, well, “me”?

Most stunningly, winner-take-all capitalism—rejected by the public for decades following the excesses of the Gilded Age—was rehabilitated in the 1980s and sold to an aspirational middle class as a virtuous paradigm that promised everyone not merely a good job, a home for their family, and education for their children but also a shot at Lifestyles of the Rich and Famous. Tellingly, The Waltons, Hollywood’s paean to grit and community during the Great Depression, went off the air just after Reagan took power, and Dynasty, a TV drama about a wealthy oil family, began an eight-year run days before he took the oath of office.

So-called conservatives led the laissez-faire revolution, from Reaganomics to the Contract with America. But the economic and political strategies they embraced were decidedly not conservative in the prudential sense of the word. Not even close. Broad claims were made about the virtues of small government and low taxes, and about how efficient markets and rational consumers could solve any number of societal problems—claims that Republicans of every stripe embraced. Yet there was no historical evidence for many of these claims, including the basic logic chain of supply-side economics: that lower taxation would spur greater investment and consumption, create a greater supply of goods and services, produce a larger economy, and yield larger tax revenue overall. Whereas American conservatives had once preached caution and incrementalism in the economic realm, they suddenly turned into radical gamblers, swinging for the fences with untested ideas and putting the nation’s fiscal health at risk in the process.

Supply-side theory had, before the late 1970s, been a fantasy of a small minority of economists who remained on the far fringes of the mainstream during the early postwar era. Many of them found academic homes at the University of Chicago and other midwestern universities in the Great Lakes region (hence the terms “Chicago school” and “freshwater school” to describe these economists). Over the years, they blended a cocktail of ideas into a plausible vision of economics. They argued that more freedom—freedom from taxation and regulation—would produce a greater supply of goods, services, jobs and, of course, wealth.

Freshwater economic ideas arrived at a propitious moment. They fit perfectly with a distrust of government and a new embrace of individualism in American society. Also, the essential optimism, even utopianism, of these ideas seemed quintessentially American, like the heartland from which they sprang. The bright future they offered felt like the perfect antidote to the nightmare of the Vietnam conflict, the economic shocks of the 1970s, and the malaise of the Carter years. The planets were aligned to turn consumption and lifestyle aspirations loose, financed with tax cuts and borrowing rather than toil.

So, by the early 1980s, the pattern of the next quarter century would be set. The great engine of industrial American capitalism began to recede into memory, replaced by a new “service” economy that specialized in instant gratification. (More on that below.) Also, thanks to the rise of shareholder capitalism and other trends, quarterly profits became all-important, shifting the focus of corporations away from long-term innovation and toward whatever steps it took to make a buck by the next quarterly filing.

Consumption as a percentage of GDP climbed steadily from the 62-to-63 percent range, where it had hovered from the end of World War II through the early 1980s, to roughly 70 percent by the 1990s. Most of this additional consumption would be financed not by greater productivity but by credit. And overall outstanding debt in the United States would explode from roughly 150 percent of GDP in the postwar years to over 350 percent by the peak of the Great Credit Bubble in 2009.1

Meanwhile, America’s political leadership, particularly Republicans, learned an alarming (as it turned out) lesson during the 1980s: that “deficits don’t matter,” as Vice President Dick Cheney would later say. Politicians could win office by pandering to the public’s desire for low taxes and lots of spending, and leave the bill to future generations. Once upon a time, Republicans were the main opponents of this free-lunch approach; in the 1980s, they became the chief proponents of something for nothing—all under the guise of championing growth and freedom. The old conservative ethos of pay-as-you-go was dead and buried by the time Reagan was reelected in a landslide in 1984. And in subsequent years, as the GOP moved even more to the right, the most positive aspects of conservatism—fiscal prudence and a reflex toward balanced restraint—all but disappeared, leaving the right to morph into the most economically reckless ideological movement in American history. What many Republicans sought to put across was not really conservatism—although they wore the moniker with gusto—but that magic combination of prosperity and populism that assures political success.

When Ronald Reagan came to power in 1981, he and other advocates of economic liberalization made some good arguments. Yes, there was too much regulation in some parts of the economy. And yes, the top tax bracket of 70 percent was, in fact, way too high. Certainly there was a strong case to be made that more free-market dynamism was needed to foster faster growth.

As Daniel Yergin and Joseph Stanislaw wrote in their book, The Commanding Heights, the Reagan Revolution was part of a larger global pushback against too much government intervention in the economy.2 Ultimately this drive for liberalization would fundamentally remake societies around the world, with capitalist practices eventually pushing aside a heavy-handed statism, particularly in the Soviet Union, India, and China, as we have already discussed.

Ironically, though, the very same movement that gave rise to new competitors in the developing world—and ushered in the age of oversupply—would end up going too far in the United States. While China and other competitors never abandoned a strategic role for government in choreographing economic growth, such thinking became anathema in a Republican Party that controlled much of the U.S. government from 1981 to 2009, an all-important period of change.

By rejecting managed capitalism as major competitors gained power, the United States economically disarmed itself just as the age of globalization began. The hard realities of geoeconomic power were ignored by a Republican Party that embraced the beliefs of earlier times: that economic liberty and less government involvement in commerce would inevitably result in long-term, steady prosperity. Their economic message—balanced and smaller government budgets, lower taxes, less national debt, reliance on the common sense of individuals to save and invest, and the reduction or elimination of socioeconomic engineering and entitlements—sounded very attractive and, in fact, it is attractive. In some parallel universe there may be an America that lives plainly, that consumes only what it can profitably produce, that requires fewer services and less support from government, and that lives within its means and sacrifices for the common wealth of family and nation.

That nation, however, is not the America of the past three decades. It may or may not be an America that ever actually existed. But it is not the consumerist, overly leveraged country of the present day.

The evasions and denials of this fact, by people across a broad range of political stripes, help account for several problems I discuss later on, particularly massive deficit spending. Yet the bigger problem with the new so-called conservative economic vision is that the right embraced supply-side thinking just as the world was entering the age of oversupply.

SUPPLY-SIDE DREAMS

Starting in the early 1980s, a focus on tax cuts stood at the core of supply-side arguments embraced by a generation of Republican leaders. Supply-siders argue that economic growth and prosperity are driven by increasing the supply of goods and services, which in turn requires making investment in that additional supply economically attractive to those with capital and other resources.

Supply-side adherents hold that consumption (demand) merely derives from an increase in supply. By offering incentives to those capable of increasing the supply of goods and services, and by limiting disincentives such as taxes, policy makers could ensure that the benefits would trickle down to all participants in an economy. If goods and services are produced at a lower cost, people will be able to buy more of them, increasing demand, spurring more hiring, and growing the economy. All of which, the logic goes, will result in more tax revenue even though rates are lower, an alluring outcome also known as tax revenue optimization.

Now, as it turns out, tax revenue optimization theory was developed by none other than John Maynard Keynes, though without all the supply-side trappings. Keynes postulated that there was a point on the curve of potential tax rates at which government revenues would be maximized, by virtue of the relative incentives and disincentives created by those rates. The supply-side economist Arthur Laffer, as political mythology has it, drew this curve on a napkin in the cocktail lounge of the Washington Hotel in 1974 for Dick Cheney and Donald Rumsfeld, then senior aides to President Gerald Ford, along with the writer Jude Wanniski, to illustrate the need to avoid the tax increases being proposed by the Ford administration.

The “Laffer curve” (so christened by Wanniski) would dominate conservative economic thinking for thirty years. It rested on the notion that tax revenues could actually be made to increase if only tax rates were lowered. This was a slight bastardization of the curve’s original postulation by Keynes, and even of the reason Laffer drew it on the famous napkin, which was to show that increasing tax rates did not necessarily increase government revenues and could actually decrease them. The converse, some economists and many Republican politicians believed, had to be true as well. Didn’t it?
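To see what the curve does and does not say, a rough formalization helps (the notation here is a stylized sketch of my own, not anything that appeared on the napkin):

% A stylized Laffer curve: revenue as a function of the tax rate t,
% where B(t) is the taxable base, assumed to shrink as rates rise.
\[
R(t) = t \cdot B(t), \qquad B'(t) < 0, \qquad R(0) = 0, \quad R(1) \approx 0
\]
% Revenue peaks at some interior rate t*, where
\[
R'(t^*) = B(t^*) + t^* B'(t^*) = 0 .
\]

Cutting rates raises overall revenue only if the prevailing rate already sits above that peak rate; the curve itself says nothing about where actual U.S. rates sat relative to it, which is exactly the question the supply-siders waved away.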

Actually, it was all a bit of wishful thinking fueled by strong populist appeal. Nearly every element of the supply-side worldview was popular with a good chunk of the American electorate: lower tax rates for all, with substantial cuts for those at the top tier to trickle down to those below; smaller government with fewer restrictions on individual liberties; higher consumption based on the simple premise of there being more things to consume; and, best of all, the promise of unlimited economic growth.

Who wouldn’t sign up for that plan? And sign up America did—in fact, swallowed it hook, line, and sinker.

One problem with the above formula, though, is that if overall tax revenues failed to rise sufficiently amid the new golden era of growth that the Laffer curve predicted lower rates would usher in, a fiscal disaster could ensue unless there were big cuts in government spending. A second problem is that the scenario required rational consumers to maintain respectable levels of net saving relative to investment.

Those were two big “ifs.” And there was no historical precedent, or other form of proof, to suggest that these conditions would ever be met. And indeed, both hopes turned out to be pipe dreams. During none of the periods in which supply-side theory dominated economic policy making in the United States did things ever go as the theory imagined. Most notably, lower tax rates never sparked enough growth to generate more revenues overall—not by a long shot. And during the two decades that Republicans held the White House between 1981 and 2009, there was basically no political will to bring government spending into line with the reduced government revenues orchestrated by Reagan and George W. Bush. The Republican Party, once the guardian of fiscal responsibility, became something else altogether as its adherents learned that they could get elected by cutting taxes, or promising to cut taxes, and then leaving spending cuts for another day.

When the budget was actually in surplus during a few of the Clinton years, it was accomplished mostly through higher government revenues together with a truly booming economy driven by huge technology-induced productivity increases. The Laffer curve never really worked, at least fiscally.

Nor did supply-side economics work for ordinary Americans, many of whom saw their earnings stagnate as the lion’s share of winnings went to the top 10 percent of Americans. But flat earnings didn’t stop the cost of living from going up, which explains why personal savings went into a nosedive. The personal savings rate, which hovered between 8 percent and 10 percent in the United States in the decades prior to 1980, has declined steadily since the supply-siders began to weave their fabric of hyper-investment and consumption, falling to near zero by the mid-2000s.3

Meanwhile, personal debt rose. With wages lagging behind living costs amid record levels of inequality, consumers turned to credit cards, home equity lines, and other forms of credit to make up the difference. And thanks to deregulation of the banking industry, along with growing supplies of cheap money, it became easier and easier for Americans to borrow as much as they wanted. So even as supply-side economics was failing, new ways to keep the public pacified and living standards high emerged.

This model, of course, would not be sustainable in the long run.

Deregulation was also an important part of this picture. Laissez-faire theorists touted less government oversight of the economy as an essential step (along with tax cuts) to achieve limitless growth. And like tax cuts, this wasn’t a very tough sell to Americans, with their historic distrust of government and the post-sixties ethos that elevated individual choice over the authority of institutions. In any case, it wasn’t the public that needed to be sold on deregulation; it was elected leaders and policy makers in Washington. And many gladly went along with deregulation, including deregulation of finance, both because they believed it was good for the economy and because major campaign donors made sure it was good for their careers. For example, between the early 1980s and the financial crisis, Wall Street spent literally billions of dollars on lobbying and campaign donations to shape the regulatory rules that governed its behavior.4

As a practical matter, fewer rules on Wall Street and the banking industry made it easier for ordinary Americans to tap into the world’s growing sea of cheap money—and thus remain insulated from the consequences of stagnant wages and rising inequality.5 Sure, America’s economic system had stopped working for the bottom half of the income ladder by the 1980s, but deregulation of the credit card industry meant that just about anyone could get a wallet full of plastic that let them buy nearly anything they wanted. And deregulation of the mortgage industry made it easier to buy a house and then turn around and borrow against it. The opiate of cheap credit worked in tandem with low costs for consumer products to persuade the masses that everything was more or less fine.

Yet deregulation would come to have major costs, as we now know. Banks and the shadow banking industry were unleashed, and they took huge risks with other people’s money—money that was suddenly more abundant than ever before.

In retrospect, it’s hard to imagine a more ill-timed confluence of historic trends. Washington, enthralled with laissez-faire ideas, removed key checks on the financial industry at the same time that cheap money was flooding the world as never before and strapped Americans were increasingly looking to credit to maintain their standard of living.

This was a recipe for a train wreck, and that’s exactly what ensued.

Emerging nations—starting with Japan in the 1960s, then the Asian Tiger states, and later China and others—were both the enablers and beneficiaries of this self-destructive behavior. They sold American consumers freighter after freighter of low-priced goods to satiate our endless consumption aspirations, papering over stagnant incomes and rising inequality, and then loaned us their export surpluses so we could continue living beyond our means, oblivious to what was really happening.

But the story gets worse because, along the way, the United States gave up key parts of its ability to generate its own national wealth.

THE GREAT INDUSTRIAL DEVOLUTION

The fall of America’s mighty industrial sector over the past thirty years is often described as an unavoidable natural evolution. But the truth is that U.S. industry didn’t have to fall so far or so hard. What really happened is that America’s leaders—either entranced by laissez-faire ideologues or cowed by them—ensured the worst possible outcomes at precisely the moment that hundreds of millions of new workers joined the global economy.

We have all heard the standard explanation for the decline of U.S. manufacturing: Rust Belt industries simply couldn’t compete against low-wage nations. In a classic example of Schumpeterian evolutionary economics, the story goes, weaker companies were destroyed and new ones arose.

In fact, the actual history is more complicated. Yes, the inability to compete against low-wage nations has been an important driver of deindustrialization. But it’s also true that U.S. leaders failed to take coordinated steps that could have bolstered America’s manufacturing sector, steps such as rejuvenating industrial technology, improving and better-utilizing the nation’s human capital via retraining, and nurturing better business practices overall.

Were such ideas floating around a few decades ago, when U.S. competitiveness first began to decline? You bet. Proposals for an American “industrial policy” started emerging in the 1970s and some were officially embraced by President Jimmy Carter. The Reagan administration rejected such initiatives, but a growing circle of economists and politicians championed industrial policy through the early 1980s. Among them was the senator and presidential candidate Gary Hart.

Yet Hart was dismissed as a charlatan when he talked up “new ideas” to make America more competitive, and early advocates of industrial policy such as Robert Reich and Lester Thurow were painted as quasi-socialists.6 Many in Washington rejected on ideological and political grounds the concept of developing a broadly defined national economic policy or, more properly described, a national competitiveness policy.

For generations, such policy was unnecessary, given the United States’ strength and resources. It was something that developing countries needed to do in order to compete with the developed world. Yet even as the situation changed, and industrial powers like Japan and Germany showed how effective a strong national economic policy could be in fostering growth, many U.S. leaders resisted such policy on the grounds that, ultimately, only markets could allocate wealth and resources efficiently. It was deeply unfortunate timing that the so-called conservative movement arose at precisely the moment when the United States needed to use government more proactively to compete in the global economy.

Of course, the United States has for decades had a de facto national economic policy in the hodgepodge of subsidies and tax credits offered to various industries either directly or through low-cost government insurance or guarantees. The United States has long heavily subsidized the housing sector through the mortgage interest deduction, the health-care sector through the employee insurance deduction, Wall Street through the 401(k) pension deduction, and a few other sectors, such as energy and agriculture. So it’s nonsense to imagine that Washington has been dominated by some purist opposition to a government role in the economy.

The problem is that there has been nothing strategic about these subsidies, at least from a geoeconomic perspective. This potpourri of perks has mainly aimed to satisfy political constituencies, not to bolster the country’s comparative advantage relative to its competitors. For decades, we have annually wasted hundreds of billions of dollars in tax expenditures that have had nothing to do with the arguably most important challenge facing the United States—retaining its ability to create wealth and compete globally. Conservatives pooh-poohed industrial policy as the wrongheaded picking of “winners and losers” even as they largely accepted a tax structure that did exactly that—only not in ways that were actually useful.

We have seen entire industries vanish from our shores that could have been retained with tax credits, modifications to business practices, the targeted bolstering of human capital via retraining, and investment in state-of-the-art technology and systems. Businesses lost to competitors go beyond old-school manufacturing, and encompass the most modern and labor-intensive industries—including ones such as animation and computer graphics, which Canada, India, and China have sought to emphasize.

If you’re an advanced nation, the right response to rising global competition is to start making products that offer a higher added value—and consequently higher gross profit margins—from specialized and/or sophisticated manufacturing processes. Japanese advanced electronic components and German machine tools and manufacturing equipment offer excellent examples of this phenomenon, as do parts of the U.S. economy, to a somewhat lesser extent.

This path isn’t without challenges, of course. Demand in the so-called high-value-added market is more limited than demand for general manufactures—so if everyone pursues that portion of the market, margins get squeezed. Also, what counts as high-value-added at one moment (computer chips, for example) has a tendency to become commoditized over time. Finally, this approach doesn’t necessarily produce a lot of jobs. The highest margins of all are obtained by “industrial” companies that design and distribute but outsource all manufacturing to lower-cost labor outside their principal high-wage markets. A “U.S.” consumer product company named Apple comes to mind in this regard: massively profitable as a percentage of sales precisely because it doesn’t actually manufacture its own products.

A second notion often floated among freshwater economists—one that my grammar school buddy Timothy Noah subjected to well-deserved skepticism in his book on income and wealth inequality in the United States, The Great Divergence—is the Ricardian idea (after the early-nineteenth-century economist David Ricardo) that trade among nations, even between low-wage and high-wage nations, benefits both sides.7 The argument is that the availability of low-priced goods to workers in higher-wage countries increases their purchasing power, more than offsetting any pressure on nominal wages. As Noah correctly points out, that aspect of Ricardo’s contribution to classical economics was put out of its misery by Wolfgang Stolper and the inimitable Paul Samuelson in a 1941 paper showing the opposite: with significant enough wage imbalances, the loss of jobs and the consequent downward pressure on wages in high-wage countries would overwhelm any benefit from cheaper goods. The only problem is that until the Great Rejoining, developed-world commerce did not involve such substantial wage imbalances on a high volume of trade, so no one had an opportunity to test Stolper and Samuelson’s proposition in the real world. Again, the Great Rejoining was truly a unique global event.
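For readers who want the mechanics, here is a stylized sketch of the Stolper-Samuelson logic in the textbook two-good, two-factor setting (simplified notation of my own, not the 1941 paper’s):

% Competitive zero-profit conditions tie goods prices to factor prices:
% w is the wage, r the return to capital, and the a's are unit input
% requirements; good C is labor-intensive, good M is capital-intensive.
\[
p_C = a_{LC}\, w + a_{KC}\, r, \qquad p_M = a_{LM}\, w + a_{KM}\, r
\]
% Opening trade with a labor-abundant, low-wage partner lowers the relative
% price of the labor-intensive good. With hats denoting percentage changes,
% the factor-price response is magnified:
\[
\hat{w} \; < \; \hat{p}_C \; < \; \hat{p}_M \; < \; \hat{r}
\]

The wage falls proportionally more than either goods price, so labor’s real income declines in terms of both goods: cheap imports cannot offset the hit to wages once the wage imbalance is large enough.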

Ironically, the Great Industrial Devolution was actually exacerbated by one of the greatest technological leaps in the history of mankind, known generally as the information revolution. The Internet had profound and salutary effects on domestic productivity, yet it also came with three highly problematic consequences: (i) it enabled the emerging markets to have nearly perfect communication with, and access to, consumers in the developed world; (ii) it likewise made it easier for companies in North America, Europe, and Japan to employ the abundant and inexpensive new labor pools in emerging economies; and (iii) it produced enormous efficiencies in the developed world that arguably cost more domestic jobs than the technology created.

Of course, many of the Internet’s most profitable applications were those that greased the wheels of consumption—something that went hand in glove with the still-powerful pull of the supply-side zeitgeist.

Nevertheless, productivity in the developed world accelerated at a rate unparalleled in postwar economic history. In an era of advanced creative destruction, during which businesses and markets sorted through the possible applications of the technology (also known as the tech bubble), productivity masked the advances in emerging markets that were on the cusp of radically destabilizing the balance of global trade.

Productivity is, after all, similar to cholesterol—there is good productivity and bad productivity when it comes to measures of unit labor cost (the cost to produce a unit of goods or services, measured in money value). Technological advances—at least in a theoretical closed universe of national labor—produce generally good productivity. A country can produce for itself more efficiently, with corresponding benefits to living standards, as workers need to work less to meet domestic demand and—again, theoretically—earn higher incomes by devoting the saved time to catering to exogenous demand. This is what we know by a more popular name: progress.

Bad productivity is when workers are paid less (in real terms—they have less purchasing power) for producing a unit of goods or a service. It shows up in statistics as “increased productivity”—but it is the national accounts version of plaque in the arteries.
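One way to put the cholesterol analogy in accounting terms is a minimal sketch of the unit-labor-cost arithmetic (an illustrative decomposition, not an official statistical formula):

% Unit labor cost: compensation required to produce one unit of output,
% i.e., hourly compensation w divided by output per hour Y/L.
\[
\mathrm{ULC} \;=\; \frac{w}{Y/L}
\]
% ULC can fall by two very different routes:
%   "good" productivity: output per hour (Y/L) rises, so the same work
%   yields more output and real wages have room to rise alongside it;
%   "bad" productivity: real hourly compensation (w) falls, so the same
%   statistic improves only because workers are paid less per unit produced.

Both routes show up as “improved” unit labor costs in the statistics; only the first deserves to be called progress.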

The Great Industrial Devolution didn’t have to happen the way it did. The United States and other developed nations could have held on to more of their industrial base (and its ability to generate wealth) by continuing to make and sell stuff. Instead, the wholesale embrace of offshoring by free-market supply-siders ensured the worst possible outcome amid the rise of the emerging nations beginning in the 1970s and 1980s.

As the Reagan era ended, the United States turned a fateful corner and began moving toward a hollowed-out, financialized economy characterized by easy money, soaring debt, false prosperity, and outsized risk taking by bankers and traders. That’s not a good mix.

One would have thought that the near-collapse of several entire sectors of the economy after the excesses of the 1980s would have tempered appetites for risky economic theories and imprudent financial adventurism. But the desire for a renewal of abundant prosperity—more accurately, for the return of the satisfying feeling of nearly unrestrained consumption—overwhelmed mere prudence. And so the mistakes of the 1980s under Reagan proved only a prelude to the greater excesses of the early 2000s, as the trends magnified over time: the global supply of labor and capital expanded, deregulation went further under Bush, and the housing bubble ushered in the biggest credit binge in U.S. history.