CHAPTER TWO
How to Explain a Crisis
The Revenge of the Populists
THEORIES OF CRISIS AND THE ORIGINS OF THE FED
When we set out to explain the recent crisis, we’re revisiting the late nineteenth-century moment when systematic theories of the so-called business cycle—the “long waves” of economic growth and development—first emerged from close study of modern-industrial capitalism, and from political conflict over the meanings of markets. That’s when the big questions became the ones we still ask: What causes economic crises, and how do we respond to them? Are markets self-regulating mechanisms, or do they require our close attention? How can we manage crises without taking control of every contract—without destroying the free play of plural market forces? Or can we? In sum, can we manage crises in a way that avoids state control of markets?
The first “Great Depression,” which plagued the Atlantic economies of the late nineteenth century by deflating prices and driving profits to the point of no return, convinced most interested observers that the distribution of income was the key to the future; insufficient demand for goods now looked much more important than anything that happened on the supply side. Almost overnight, it seemed, the industrial solution to the age-old problem of scarcity had become the modern problem of chronic surplus. The question that followed was, How do we sell what we produce at profitable prices, or, How do we create demand for this oversupply of goods? Most observers accordingly set out to revise “classical” economic theory (the founding fathers were Adam Smith, Jean-Baptiste Say, and David Ricardo), which had all but ignored consumer demand in seeking to discover an objective measure of value in the quantities of labor time required to produce goods.
Their efforts led quickly to the creation of economics as a distinct academic discipline: in the United States, it soon became a powerful voice in the new state universities and the older private colleges that were remaking themselves as secular institutions. The academic urge to record every spasm in the price system led the newly credentialed professionals to found the American Economic Association in 1885, the Journal of Political Economy in 1892, the American Economic Review in 1911, and the National Bureau of Economic Research (NBER) in 1919. The NBER—it’s not an arm of any government—is still the official referee on where we are in the business cycle, reassuring us in the spring of 2010, for example, that we’d been in recovery since the winter of 2009.
The rise of a new imperialism toward the end of the nineteenth century meanwhile motivated economists to explain it in terms of the larger transatlantic slump. As framed by John A. Hobson in England, Paul Leroy-Beaulieu in France, and Charles Conant in the United States, the driving force in this new stage of Western civilization was surplus capital seeking foreign investment rather than surplus goods seeking foreign markets. In their view, oversaving was the most important kind of “overproduction”: it now appeared as the fundamental cause of both economic crisis at home and colonial conquest abroad (V. I. Lenin agreed with them). Here is how Conant, who served four presidential administrations between 1898 and 1915, and who was the single most important theorist of banking reform in the early twentieth century—it’s no stretch to call him the intellectual godfather of the Fed—explained the problem in 1901, while he was reconstructing the Philippines’ monetary system on behalf of the State Department: “The benefit to the old countries in the control of the underdeveloped countries does not lie chiefly in the outlet for consumable goods. It is precisely to escape the necessity for the reduplication of the plants which produce the goods, by finding a field elsewhere for the creation of new plants, that the savings of the capitalistic countries are seeking an outlet beyond their own limits.” In short, surplus capital was a problem because it caused chronic economic crisis; the export of this surplus via the new imperialism was part of the solution.
Systematic business cycle theories emerged, finally, from the movement for reform that led to the Federal Reserve System, the central banking apparatus designed to manage economic crises so that short-term hoarding by panicked banks wouldn’t freeze credit and destroy a system held together by promises to pay. I’ll concentrate on this reform movement because it included and built on the other sources of new theorizing. For instance, academic economists were an integral part of the movement for banking reform, and many nonacademic constituents of the movement, Conant included, argued for a central bank on the grounds that it would enable an American Empire. The movement also deserves attention because it took shape as a response to the Populist challenge of the 1890s.
A hundred years ago, the U.S. Congress established the National Monetary Commission (NMC), then a novel, hybrid mix of elected officials and private-sector experts. Its purposes were to produce studies of modern banking systems and, on that basis, to recommend realistic reform of the American system. Almost all observers at the time understood that “reform” meant the creation of a central bank apparatus—the design of a new headquarters for the system—and that the general purpose of such a bank was to manage the kind of crisis that had plagued the Atlantic economies for a generation. Almost all observers, from the Populists to the big bankers, also understood that this new headquarters would be an agent of public policy, an arm of the state, “under the control of the government,” as the New York Chamber of Commerce put it in 1906, when it proposed a central bank as the key to reform.
The NMC duly produced dozens of volumes that became the empirical basis for the Aldrich Plan, a proposal for a thorough centralization of financial responsibility in the hands of bankers themselves. It quickly died in Congress because most Americans were still suspicious of bankers, none more so than the Southern farmers who, having given up on their Populist dreams, now voted for Democrats. Yet within three years the theory and practice of central banking were institutionalized in the new Federal Reserve System, a landmark reform urged by Woodrow Wilson, the Democrat in the White House.
But the Fed wasn’t Wilson’s doing, and it wasn’t the result of the financial crisis that threatened to break the banks in 1907, either; that was the crisis managed by J. P. Morgan from New York City, not by officials in Washington. The movement that created a central banking system in the United States began long before, in the 1890s, as an inter-regional, cross-class coalition of businessmen, bankers, journalists, academics, and, yes, farmers and workers as well. By the time the NMC convened, that reform movement was defined, indeed constituted, by three crucial assumptions—assumptions that still inform our debates about the current crisis and its comparable predecessors, but not in the ways you’d expect.
To begin with, the movement’s constituents assumed that economic crisis was inevitable, even normal, regardless of how scrupulously and rationally business decisions got made: markets were no longer self-regulating, if ever they had been. Fraud, theft, and earnest chicanery were always prevalent in a modern market society, to be sure, because everybody was looking for the “main chance”; even so, by 1910, just about everyone outside the Populist camp understood that the citation of moral defects couldn’t explain the depth and the scope of economic crisis in the late nineteenth century. So the question was not whether but how to regulate markets.
Just about everyone also understood that rational decisions at the microlevel of the firm could contribute to disaster at the macrolevel of the economy as a whole. For example, a firm’s decision to expand the production of steel rails because yet another railroad was being built could increase supply past the point of effective demand, to where the selling price of the rails no longer covered the costs of their production. The fall of prices and the disappearance of profits would affect more than the single firm, and would meanwhile signal that perhaps too much capital was sunk in railroad building as such. Then what? Then all steel rail producers, their suppliers, and their customers would face bankruptcy, and all on account of a rational decision by one firm.
But “overproduction”—that is, producing a supply of goods that exceeds effective or profitable demand—in one sector is not the same thing as a general economic crisis. The price of steel rails can fall without triggering a larger disaster. Nonetheless the significant theorists of the business cycle in the late nineteenth century—I call them significant because their ideas profoundly shaped the making of the Federal Reserve and our later thinking about crisis management—defined “overproduction” as the key cause of economic crisis. Among these theorists were David Wells, Jeremiah Jenks, A. Piatt Andrew, Wesley Mitchell, David Kinley, and O. M. W. Sprague. Most of them were university-trained economists, but a few, like Conant and Wells, were financial journalists with some experience in banking. They argued that the production of goods beyond effective demand was no longer a local, sectoral problem; it could be measured across the board, in every sector, from agriculture to manufacturing. The second assumption of the movement for banking reform was just this, that “overproduction” as such—not just occasional disproportions here and there—explained more about economic reality and economic crisis than did quantities and kinds of money.
So the argument for the explanatory superiority of overproduction was an argument against the Populists. They had claimed that monetary factors were the real cause of all economic crises; they had insisted that the monopoly powers of the big banks and the large corporations (“the trusts”) distorted market forces that were once decentered, anonymous, competitive, and effective; and they had assumed that legislation to enforce competition by breaking up “the trusts” would restore self-regulating markets. The theorists of crisis whose ideas animated the movement for banking reform claimed instead that monetary factors were important but not decisive causes of crisis: panics and bank failures were typically the results, not the origins, of economic disasters.
These same theorists acknowledged that the centralization of economic authority in the hands of the big banks and the large corporations had profoundly changed market forces, but insisted that this was a good thing, and proposed to bring the banking system into line with the new contours of social power in the larger economy. On the same grounds, they were dubious of trust-busting across the board: not every “combination in restraint of trade,” as the common law named the problem, was an illegal or merely evil conspiracy, and some trusts were “natural monopolies.” Above all, these significant theorists of the business cycle in the late nineteenth and early twentieth centuries believed that self-regulating markets were an impossible dream that disabled new thinking about the uses and limits of markets as such.
The third assumption of the movement to install a central bank was that its principal purpose, managing economic crisis, would be realized by following the advice of Walter Bagehot, the editor of the Economist and the author of Lombard Street (1873)—by “ready lending,” that is, by restoring liquidity to the banking system as a whole, so that a credit freeze would not bring all economic activity to an abrupt and catastrophic end. In this sense, the outer limit of the central bank’s powers in times of crisis was its ability to increase the money supply, by lowering reserve requirements (reducing the lawful ratio of reserves to liabilities), or by cashing out all manner of bank receivables, even the least reputable. At other times, it might deploy what we now think of as fiscal devices—for example, selling government bonds to soak up surplus capital or a savings glut—but in the throes of crisis, the central bank would be conducting a kind of financial triage, deciding between the dead, the dying, and the deserving.
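The arithmetic behind this lever is simple enough to sketch. The following is a stylized illustration, not drawn from the text: all figures are hypothetical, and it abstracts from everything (interbank flows, currency drain, demand for loans) except the bare reserve-ratio constraint.

```python
# Stylized illustration of reserve requirements (hypothetical numbers):
# a required reserve ratio caps the deposit liabilities a fixed stock
# of cash reserves can lawfully support; lowering the ratio raises
# lending capacity without any new cash entering the system.

def max_deposits(reserves: float, reserve_ratio: float) -> float:
    """Maximum deposit liabilities supportable when banks must hold
    `reserve_ratio` of deposits as cash reserves."""
    return reserves / reserve_ratio

reserves = 100.0  # hypothetical system-wide cash reserves, in millions

tight = max_deposits(reserves, 0.25)  # 25% requirement -> 400.0
loose = max_deposits(reserves, 0.20)  # 20% requirement -> 500.0

# Cutting the requirement from 25% to 20% frees 100.0 of new
# lending capacity from the very same reserves.
print(tight, loose, loose - tight)  # 400.0 500.0 100.0
```

The same arithmetic, run in reverse, is why panicked hoarding is so destructive: when banks voluntarily raise their effective reserve ratios, the deposit and credit structure they can support shrinks at a multiple of the cash withdrawn.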
One hundred years later, these three assumptions of the movement to create a central bank are still incorporated in books, manifestoes, legislation, and the operations of the Fed itself. But only the last of them—the need to blunt the effects of the business cycle through ready lending—remains an uncontested premise of all thinking about the sources and management of crisis. The other two assumptions, that crisis is normal and overproduction is its cause, have been turned into questions by economic, intellectual, and political developments since the 1970s—by “stagflation,” monetarism, the Reagan Revolution—so that the recent debates about how to explain the Great Recession have reinstated the Populist voices of opposition once silenced by the movement that gave us the Federal Reserve. The revenge of the People’s Party is not complete. But its fervent opposition to corporate capitalism has changed the way we understand and respond to economic crisis: we hear things differently these days.
EXPLAINING THE RECENT CRISIS: MISTAKES, MONOPOLY, MONEY, MORALITY
To make this difference audible, let me catalog the available explanations of the recent crisis, and notice, meanwhile, how they map onto the debates of a hundred years ago. That way, we can see what was lost in translation and what remains as a fragment of a forgotten tradition. As patriotism is the last refuge of scoundrels, so alliteration is the last refuge of bad writers. Even so, I will now invoke the Four M’s (Mistakes, Monopoly, Money, and Morality) as the content of this tradition. Each of these arguments about the origins of economic crisis contains a degree of truth, but in the end their partial truths lead us away from what we need to understand, and that is the structural, long-term problem of surplus capital. By this I mean the superfluous profits that, when sent in search of the highest return, inevitably find their way into speculative markets, where they eventually fuel bubbles. I call these profits “superfluous” because their reinvestment is unnecessary to increase productivity or output in the industries that generated them in the first place.
I need to emphasize, here at the outset, that the Four M’s have no permanent address. These explanations can be found on the Left and on the Right, and everywhere in between, because they’re new instances of an old attachment to Populist principles.
Mistakes
“The housing bubble and the subsequent crash were the result of extreme incompetence on the part of the country’s top economic policymakers.” That’s Dean Baker talking, in March 2010. He’s the most quoted left-wing economist around, cited even more than Paul Krugman, and in the New York Times to boot. The mere fact that Baker predicted the crash, along with Krugman, Robert Shiller, and Nouriel Roubini—not to mention the bond traders who shorted (bet against) the subprime mortgage market, starting in 2005—should make us take his charge of incompetence seriously.
But how? Which policy makers? The public or the private? Michael Lewis, a liberal, levels the charge against the private sector: How could the analysts at the big banks not have seen that the subprime market was about to implode? Why did they keep buying into that market after 2005, when default rates were already climbing and nobody—not one person—could figure out the mix of “assets” contained in all those “bonds” (collateralized debt obligations, a.k.a. CDOs) built on mortgages? Many others have asked how the same banks could bet against the mortgages they were meanwhile bundling for sale as so-called securities to fund managers. Richard Posner, a conservative, levels the charge of incompetence against the government, claiming that both the Bush and Obama administrations failed the test of economic crisis. In doing so, he joins a huge chorus of critics who insist that the Fed kept interest rates too low after 2001 and that the government’s response to the meltdown of 2008 was indecisive at best. These critics have cast Alan Greenspan, the former chairman of the Federal Reserve, as the villain of the piece. His crime was to have enabled the credit binge that let everybody, especially consumers, borrow too much for too long.
In significant variations on the same theme of incompetence, Justin Fox, John Cassidy, and John Lanchester have asked how truly arcane theories about the efficiency of the “rational market” could have become an intellectual virus that contaminated the brains of highly educated individuals, and then caused them to do great damage to the world economy as well as to their own fortunes and reputations. How could so many smart, well-trained people buy into such stupidity, and make such colossal mistakes as a result? The answers to these questions aren’t obvious; they require a certain distance from the event in question, but also a willingness to see the world from the standpoint of the participants. With that doubled vision, let’s begin by addressing the incompetence charge against public policy makers.
When the crisis of 1907 struck, most economists and many businessmen accused Leslie Shaw, the secretary of the Treasury, of having kept interest rates artificially low by increasing and then moving government deposits in national banks, usually the biggest banks in New York. He defended his actions by comparing the Treasury to a central bank, and suggesting that major reform of the system was unnecessary in view of the secretary’s powers over the money supply (when he moved those government deposits to the National City Bank, for example, this bank was suddenly able to loan far more than it had without them because its cash reserves were increased). Shaw honestly believed that after the huge stock market debacle of 1902, not to mention the extraordinary economic crisis of the 1890s, he had no choice except to keep interest rates low, to encourage price inflation and sustain growth. Alan Greenspan had similar motives in reducing interest rates in the early twenty-first century. He had already presided over the Asian credit disaster of 1997, the collapse of Long Term Capital Management in 1998, and the dot.com boom and bust of the same moment. In the aftermath of these serial crises, he decided that the combination of the Bush administration’s tax cuts and lower interest rates—a “fiscal boost” amplified by easy money—was the only available formula for renewed growth.
Greenspan’s great fear was price deflation of the kind that has stunted the Japanese economy since the early 1990s. His successor at the Fed, Ben Bernanke, feared it even more because, as an expert on the Great Depression, he understood that the liquidation of “distressed assets” after the Crash of 1929—back then these were securities listed on the stock market—was registered in the massive deflation that halved wholesale and retail prices by 1932. These men (and their counterparts at the Treasury, Henry Paulson and then Timothy Geithner) feared deflation more than anything else because they knew it would drive down housing prices, slow residential construction, erode consumer confidence, disrupt consumer borrowing, and reduce consumer demand across the board (pretty much what has happened since 2007). Meanwhile, the market value of the subprime assets undergirding the new credit instruments—those indecipherable CDOs—would have to fall, and the larger edifice of the financial system would then have to shrink as the banks recalculated the “normal” ratio between assets and liabilities. In sum, Greenspan understood that economic growth driven by increasing consumer expenditures—in this instance, increasing consumer debt “secured” by home mortgages—would grind to a halt, and the banking system would be at risk, if he didn’t reinflate the housing bubble. So he did, on the assumption that when it burst, the Fed would stand ready to manage the crisis. His remarks on the dubious rationality of market-driven decisions, then and thereafter, were always uttered with this assumption in mind.
These “top policy makers” also feared deflation because they knew its effects on the world economy could prove disastrous. With deflation would come a dollar with greater purchasing power, to be sure, and thus lower trade and current account deficits, perhaps even a more manageable national debt. But so too would come lower U.S. demand for exports from China, India, and developing nations, and thus the real prospect of “decoupling”—that is, a world economy no longer held together by American demand for commodities, capital, and credit. The centrifugal forces unleashed by globalization would then have free rein, and every protectionist urge would be unbound, here and elsewhere—as when the world economy split into competing “spheres of influence” in the 1930s, and world war became the violent means to reunify it. Equally important, American economic leverage against the rising powers of the East would be accordingly diminished, and with it the customary currency of political leverage in deciding international disputes.
So Greenspan and his successor at the Fed are not to be blamed for the scope or the scale of the crisis. Under the circumstances, which included the available intellectual or theoretical alternatives, they did pretty much what they had to, hoping all the while that the inevitable “market correction” would not be too severe. They weren’t incompetent; they were trying to make the best of a bad situation with the tools at hand. To criticize them for not taking up the alternative policies—by, say, raising interest rates to prick the housing bubble, as the Japanese central bank did in 1990—is something like criticizing the ancient Greeks because they used wheels and understood steam power but didn’t build tractors to plow their fields.
Defending their counterparts in the private sector on the same grounds is more difficult. But here, too, the charge of incompetence diverts us from the real issues—from the underlying structural cause of the economic crisis that began in 2007 and overwhelmed markets by 2008, which, again, is surplus capital. There may well be fraud, stupidity, and corruption at work in this mess, but they are much less important than the systemic forces that brought us to the brink of another Great Depression, as I’ll be arguing shortly. In any event, the blame game doesn’t get us very far in understanding either what happened in 2007 and 2008, or what still plagues us. There’s no theory that can predict exactly what will happen in the future, certainly not in a future determined by unruly and irrational individuals with minds of their own. So why would we criticize anyone for not knowing what will soon happen? In the volatile marketplace of modern capitalism, the future is plural, and crisis is normal, while economic equilibrium is always as tentative as a truce between parliamentary factions in Lebanon, or as uneasy as the outward harmony of a couple on the verge of an ugly divorce. The modern countercyclical practices invented a century ago—more liquidity, ready lending by the central bank—are predicated on the acknowledgment of this simple fact.
Information is never perfect, not even in a planned economy, so prediction is a highly risky business. Indeed one of the benefits of a market economy is that it allows chaotic, unpredictable social forces to inform our behaviors; it teaches us to live with uncertainty, to get by with the available information, knowing all the while that it’s not enough to protect us against the next betrayal, the next disaster. So again, why would we criticize anyone for not knowing what will soon happen?
Besides, James Surowiecki and Richard Posner, among others, have explained that the short-term drive for profit by banks and nonfinancial firms makes the so-called irrational exuberance of an economic bubble look like careful calculation. The decentered decisions that determine supply and demand have no predictable result, even while the bandwagon effects of surging markets produce thundering herds of investors on the same narrow trail. As Charles Prince, the former CEO of Citigroup—a bank that has now received close to $40 billion in aid from the Troubled Asset Relief Program—explained in July 2007, “As long as the music is playing, you’ve got to get up and dance.” In other words, if you don’t ride the bubble, even knowing it’s a bubble, your competitors will get the business and reap the profits. So to criticize bankers for investing in (or shorting) the securitized mortgage market is to criticize them for what they’re empowered and supposed to do: maximize profits, and thus keep their standing with the shareholders.
By the same token, to criticize model builders like John Cochrane or Robert Lucas because their theories of “rational markets” didn’t or couldn’t acknowledge the messy details—the ugly realities—of the real world is to criticize them for what they get paid to do as professors of economics at the University of Chicago: abstract from the particulars of the situation and propose universal “laws of motion.” The tests and the proofs of their laws were typically mathematical (the preferred idiom of economic theory) and, in their own terms, convincing. Their counterparts in the business schools—Fischer Black, Myron Scholes, Robert C. Merton—were even more convincing because they invented algorithms for “option pricing,” which allowed for computerized trading programs that began to pay off in the 1980s and 1990s, first in the stock market and then in the “shadow banking system,” where hedge funds became the headquarters.
We should also note, in defense of the model builders’ indifference to institutional and historical realities, that faith in self-regulating markets has no predictable political valence. A hundred years ago, Populists on the Left and conservatives on the Right agreed that a trust-busting approach to the new industrial corporations would restore competition; as a result, they believed, all producers would again be equally subject to anonymous laws of supply and demand, and economic equilibrium would be the rule rather than the exception. The theory of the “rational market,” as Justin Fox and John Cassidy tell the story of its development in twentieth-century departments of economics, is a distant echo of this anticorporate, antimonopoly program—for it posits effective competition between small producers as the ideal state of a market economy, or it falls back on monetary causes to explain business cycles. These aren’t “mistakes”; they’re assumptions that enable the elegance of theories presented as ever-more-complicated equations.
Monopoly
And that brings us to another explanation of the recent debacle that derives from the original Populist animus against “the trusts”—that is, the monopoly power of the big banks and the big corporations. Matt Taibbi, William Greider, Simon Johnson, Thomas Geoghegan, and many, many others have argued that the scope and reach of behemoths like Goldman Sachs or Citigroup have allowed them to distort market forces—to do insider trades that border on fraud as a matter of course, or to pose as essential institutions that are “too big to fail”—and to corrupt politics by paying legislators, literally, for their help in deregulating the financial industry even after the savings and loan debacle of the late 1980s and early 1990s. If these banks weren’t so big, the argument goes, they wouldn’t have a disproportionate influence on the market or on Congress. Thus their venal decisions wouldn’t have warped the entire economy and caused a crisis that has bankrupted Main Street.
There’s something to be said for the argument—to begin with, why should the fortunes of so many be determined by the interests of so few?—and Taibbi (the Rolling Stone writer who called Goldman Sachs “a great vampire squid”), at least, deserves our thanks for tracing all the deadly strands that tethered our economic futures to the trading floors on Wall Street. But most of what is to be said in favor of the neo-Populist, antimonopoly tradition he represents so eloquently comes down to three dubious propositions. First, small business is self-evidently more efficient, and more productive of new jobs, than big business (tax breaks for enterprise on this scale thus become a bipartisan budget imperative). Second, a smallholder economy is more conducive to political democracy than one dominated by large corporations. Third, regulation in the name of competition among small producers—antitrust writ large, let’s break up the banks!—would give free play to the anonymous market forces we need to promote innovation and keep the bureaucrats at bay.
In order, then, let’s have a critical look at these propositions, which are so deeply embedded in American history and culture that they’ve become political pieties for the Right as well as the Left. Small businesses are not all garage bands with great new sounds; they’re mostly Mom and Pop enterprises that do things badly but cheaply because some, maybe most, of their costs are carried off the books, in uncompensated hours and familial obligations. They’re not more efficient; in short, they’re just cheaper. Larger enterprises are almost always more efficient than small businesses because they can impose a division of labor—because they can divide tasks, make people specialize, and so increase outputs without increasing inputs of capital or labor. We tend to believe that small business is the source of technological innovation (“genius”), and that large enterprises are a constraint on it, mainly because we know, somehow, that bureaucracy stifles imagination. But in fact, almost all such innovation in the twentieth century was the result of research and development sponsored either by large corporations or by the federal government, usually in tandem. Microsoft, for example, now a very large corporation, is inconceivable in the absence of the R & D that came from IBM and the Pentagon, among other large bureaucratic sources, long before Bill Gates left school for the garage. The Internet as such is another example of the same public/private synergy that typically involves large corporations, big money, government grants, and earnest entrepreneurs.
But wait, don’t small businesses, those scrappy little start-ups, create most of the new jobs, somewhere around 60 percent, maybe more? Well, yes, they do. And those jobs disappear within two years—along with the start-ups. That’s the average life span of a small business. What’s more important here is more obvious: apart from the funky bodegas and the cool bars in your neighborhood, small business exists because big business does (and even here, you want to consider the effect of corporate expense accounts on restaurant revenues, or the marvel of the delivery systems that allow you to buy national brands at the corner store). In commercial and residential construction, for example, the subcontractors make their way by doing what the general contractor tells them to, and this overlord is typically a large corporation from out of state, not a local, small business. The little guys who build “on spec”—who put up a house and hope for the best—are a tiny minority of contractors, they hire as few people as they can, they don’t provide benefits, and they’re notoriously dishonest. The little guys in every line of business are the employers who are most likely to be union-busting fanatics because, unlike large corporations, they can’t pass their labor costs on to the final consumer in the prices of their products; so the jobs they create tend toward the minimum wage variety.
But go ahead, take those musty bookstores as your favorite small business threatened by the old chains, like Barnes & Noble, and by the new chains, like Amazon or Alibris. The fact is that those small bookstores can now thrive because they affiliate with the chains—because they serve these large corporations by shipping items directly from their own inventories once the order is placed at the online bookstore. The fact also is that you can find more of what interests you in the online bookstores, because now you’re browsing in an impossibly enormous library, walking down virtual aisles on the scale of Alexandria, and because what piques your interest tells the software what other aisles you might want to explore. And go ahead; complain that all the airfreight and the cardboard and the UPS trucks delivering the readable goods amount to an environmental disaster. You’re wrong. According to Annie Leonard, a most diligent and intelligent environmentalist—her book, The Story of Stuff, is a masterpiece of political wit and grace—online shopping for books is “more efficient and sustainable in terms of energy used, conventional air pollutants generated, waste generated, and greenhouse gas emissions” than the “traditional model,” and that of course is the model that requires you to lug heavy objects from the olfactory hell of the local bookstore.
But what about Silicon Valley, speaking now metaphorically as well as literally, referring to the crowded, dreamlike state where small business start-ups create loads of good jobs for nerdy intellectual types? Well, since 1999 and the
dot.com bust, it’s been a slow-growth sector, and since 2007 it’s been an employment disaster area, mainly because large corporations have cut back on research and development, or have already realized the economies made available by computer-driven technologies, or have outsourced the high-tech jobs. The proverbial garage bands haven’t been looking to hire more players because the recording contracts—the financial affiliations with larger companies—haven’t been happening.
So worshipping at the altar of small business doesn’t make much economic sense. Nor does it make much environmental sense unless the object of our scrutiny is the damage done by corporations that never deal with retail (consumer) realities except as a public relations problem—oil companies, for example, like British Petroleum. But what about the political argument in favor of antitrust and renewed competition? Is it true that a smallholder economy is more conducive to democracy than the available alternatives? It all depends on what political theorists you take for granted, which political movements you associate with progress, and who’s enfranchised by your preferences. The classical theory of republics invented by Aristotle, updated by Machiavelli, and enacted by the American Revolution held that popular government was grounded in a wide distribution of private property. The great threat to republican liberty was, then, the concentration of property in the hands of a few (oligarchy). The Populists of the late nineteenth century were the last mass movement galvanized by this antimonopoly animus—they wanted to abolish “the trusts” because these artificial creatures of the law controlled too much property, made too many decisions about how economic resources would be allocated, and could therefore dictate how political decisions would be reached.
Of course the Populists weren’t the only ones worried about the discrepancy between the concentration of economic power in the boardrooms of the large corporations, on the one hand, and the demands of democracy—government of the people, by the people, for the people—on the other. In 1896, Arthur T. Hadley, the conservative president of Yale University, expressed this concern as follows: “A republican government is organized on the assumption that all men are free and equal. If the political power is . . . equally distributed while the industrial power is concentrated in the hands of a few, it creates dangers of class struggles and class legislation which menace both our political and industrial order.” Eighty years later, Irving Kristol, the founding father of neoconservatism, said almost exactly the same thing: “There is little doubt that the idea of a ‘free market,’ in the era of large corporations, is not quite the original capitalist idea.... [The] concentration of assets and power—power to make economic decisions affecting the lives of tens of thousands of citizens—seem[s] to create a dangerous disharmony between the economic system and the political.”
The nineteenth century looks, on this conservative reading—but then it’s also the Left’s reading of the same history—like a golden age of democracy, when the economic importance and self-reliance of smallholders made them ideal, omnicompetent citizens. But by any measure, the twentieth century, the age of the giant corporation, was a much more democratic moment, when women and minorities were finally enfranchised in the broadest possible sense of that term—gaining the right to vote, to be sure, but also entering the mainstream of the culture. The twentieth century was also the moment of a dispersal of power from the state to society, when the regulation of markets began to be shared between public agencies like the Federal Trade Commission and private organizations like large corporations and trade unions. This, too, was a democratic promise that couldn’t have been made in the nineteenth century; for it taught ordinary people to be aware of their rights, their powers, and their obligations in a new kind of public sphere, where nongovernmental organizations (NGOs), all kinds of associations, became the rule.
Of course most intellectuals in our time complain about the political emptiness of the “public sphere.” But they confuse politics as such with state-centered, policy-relevant, electorally oriented activity—and they do so because they think that Aristotle was right, that our true identities as individuals can be discovered only by participation in debates about policy outcomes, in the strenuous public duties of citizenship so conceived. They’re wrong. The public sphere is not just a political forum where people make arguments and speeches and laws; it includes most of what we have come to know as civil society, the space between the state and the individual, even that most private sector we call “the” family. It’s the place where we imagine and enact new identities, and while we’re at it, where we demonstrate new political possibilities—as, for example, those young black men and women did when they refused to leave Woolworth’s segregated lunch counter in Greensboro, North Carolina, in 1960.
So a smallholder economy may indeed be beautiful, but it’s not the only residence of popular politics. The democratization of American society and culture in the twentieth century coincided with the rise and then the consolidation of corporate capitalism, not the triumph of Populism and the return of the self-made man. Coincidence is not causation, of course, but the fact remains that over the last hundred years, large enterprise has proven itself to be a sponsor of both social mobility and cultural diversity—for example, by building job ladders within corporate bureaucracies that allowed broad access to managerial responsibilities or middle-class incomes, and by taking civil rights and affirmative action seriously. This fact doesn’t preclude regulation and reform of corporate behavior, which has never been uniformly progressive (whose has?); it should, however, make us ask whether breaking up the big banks, thus giving free rein to competitive, anonymous market forces (and silencing the special interests?), is an adequate form of either regulation or reform.
Once upon a time, in 1911, the Standard Oil Trust incorporated in New Jersey was broken up by order of the Supreme Court in a landmark decision that restored the “rule of reason”—that is, the common law distinction between lawful and unlawful combinations in restraint of trade, the distinction built into the Sherman Anti-Trust Act of 1890. Within fifteen years, the various components of the former Trust had become giant interstate corporations in their own right, with new reach, new power, and new names (Mobil Oil of New York, for example, and Esso—S. O., get it?—then Exxon of New Jersey), each of them now impervious to another antitrust suit. A similar process resulted from the court-ordered breakup of American Telephone & Telegraph in 1982, following an antitrust suit first brought against it by the Justice Department in 1974. The so-called Baby Bells created by the settlement quickly evolved into companies with more than regional scope, and then began merging with each other, so that by 2010 only four giant telecommunications corporations were still standing (and by 2011, with AT&T’s bid to acquire T-Mobile, only three, maybe two).
Breaking up the big banks doesn’t come near the real problem, which is that the financial sector has metastasized for a reason unrelated to monopoly power: it became the receptacle, and then the manager, of what Ben Bernanke used to call the “global savings glut,” a polite way of saying surplus capital. As a result, investment banks and hedge funds got bloated, and local commercial banks got greedy. Since the 1980s, the system as a whole has been awash in redundant profits, a.k.a. surplus capital—that’s why there’s been a huge gap between retained earnings and corporate investment for the last decade—so the banks, large and small, have grown by figuring out new interest-bearing places to put this otherwise idle money. And the obvious place to put it was in “retail banking”—in credit cards, for example, in the 1990s, and then in the mortgage market in the early twenty-first century. Even the venture capital that fed the
dot.com bubble of the 1990s was animated by the overpowering urge to place bets on retail choices—in other words, consumer preferences—in this case a huge wager on how demand for the personal computer would change the delivery of goods and services.
By all means let’s monitor and (re)regulate banking practices, along with every other malfeasant corporate behavior. But let’s not assume that small is beautiful and, on this basis, deduce a political program from unexamined propositions that derive, ultimately, from Populist premises. In other words, let’s not assume that our self-evident project is to restore competitive, anonymous market forces by means of aggressive antitrust policies—rather than to subject market forces as such to social purposes like, say, better education for everyone, or gender equity in the workplace, or environmental integrity by means of reform and regulation.
The world economy is too big to fail, and so, for that matter, is its indispensable American component. If the financial system had failed in late 2008, as it might have in the absence of massive government intervention, the world economy would have failed, too, and we would all be the poorer, and worse: we would be repairing something that didn’t need to break down in the first place.
Money
Explanations of the current crisis that gather under the headings of Mistakes and Monopoly are everywhere you look, but for the most part they flourish at the extreme edges of the political universe—both Left and Right—where mainstream economic theory is just background noise. Meanwhile monetary explanations have become the norm among professional economists, at least those with a purchase on the public imagination. This development is more odd than it might seem, because Milton Friedman, the archmonetarist who claimed that the Great Depression was a result of mistakes at the Fed—they raised real interest rates at precisely the wrong time, he insisted—was once a marginal figure, a voice in the wilderness. How did his innocent update of Populist ideas about economic crisis become the gold standard of analysis?
Winning the Nobel Prize (in 1976) helps, of course, but still, how did it happen that when the crisis struck in 2007, all the middlebrow magazines and reputable newspapers were suddenly crowded with famous economists extolling Friedman’s monetary explanations of the business cycle? For example, in a cover story essay for Time magazine, Niall Ferguson, the celebrated historian of finance, just took it for granted that Friedman’s theories had become the mainstream of thinking about both the big event and the unfolding crisis of 2008: “Yet the underlying cause of the Great Depression—as Milton Friedman and Anna Jacobson Schwartz argued in their seminal book A Monetary History of the United States 1867–1960, published in 1963—was not the stock market crash but a ‘great contraction’ of credit due to an epidemic of bank failures.” Ben Bernanke agreed in his apologetic toast to the esteemed authors on the occasion of Friedman’s ninetieth birthday. “I would like to say to Milton and Anna: Regarding the Great Depression: You’re right, we [the Fed] did it. We’re very sorry. But thanks to you, we won’t do it again.”
This monetarist consensus isn’t perfect, of course. It’s telling, however, that even Christina Romer, an economist with Keynesian sympathies and—not incidentally—President Obama’s first choice to head the Council of Economic Advisers, had already joined the chorus by suggesting, back in 1997, that leaving the gold standard, devaluing the dollar, and promoting price inflation were the keys to recovery between 1933 and 1937. In other words, she argued that the fiscal experiment called the New Deal had nothing to do with the fastest growth rates ever recorded in the twentieth century (yes, in those five years): the supply of money explained everything, just as Friedman and Schwartz had argued. It’s also telling that Paul Krugman, a fierce and brilliant critic of Friedman’s theories, recently validated the monetarist line on the 1930s: “Standing aside while banks fall like dominoes isn’t an option. After all, that’s what policymakers did in 1931, and the resulting banking crisis turned a mere recession into the Great Depression.” In sum, it’s telling that everyone, Left to Right, agrees that a “financial fix” was the indispensable first step in addressing the causes of the crisis, even though there is no evidence whatsoever that the banks made any contribution to real growth after 1933, or that they’re contributing anything to recovery today.
In view of the broad agreement on the priority of a “financial fix,” this last statement probably sounds a bit extreme. So let me put it another way. If the proximate cause of the Great Depression was a “great contraction of credit,” a great expansion of credit should explain the recovery of 1933 to 1937, just as it should explain what is called the recovery of 2009–2010. But in the 1930s, the banks folded and never returned to the table. Instead they bought government bonds and parked their assets with the Fed, increasing their loans and discounts a mere 8 percent between 1933 and 1937, from a baseline close to zero—while in the current crisis, the banks are again sitting it out because the value of their “assets” is still in doubt. In the earlier recovery, price inflation was minimal even as industrial output doubled, so currency devaluation accounts for almost nothing there. In the current so-called recovery, both inflation and investment are absent and a 9 percent to 10 percent unemployment rate persists, but the money supply has more than doubled. So it’s hard to see why economists believe the money supply or a “financial fix” explains much of anything, either as a cause or a cure for economic crisis. Why then does the monetarist consensus hold?
To begin with, timing is everything, as the ever-perky Friedman himself remarked when asked to explain his overnight success. He won the Nobel at the peak of “stagflation” in the 1970s, when most economists agreed that some kind of shock treatment was necessary to reduce inflation from an annual rate of close to 20 percent and to improve lagging labor productivity. The monetarists stood ready with their remedy, which amplified the effects of the supply-side revolution that was already changing the economic debates taking place in the Republican primaries. Beyond the exquisite timing of Friedman’s prize, there was, however, another reason for his newfound authority, and that was the inability of anyone to explain the relation between the price of money (interest rates) and the price of labor (wages) or goods, except to say that the latter—the “real” economy—had nothing to do with the former. The silence was nothing new: there is no account of the Great Depression that convincingly explains this relation, and so far the best book on the big event treats the Crash of 1929 as a “random event” with no legible connection to the real economy, as if the burst of the stock market bubble was like the huge meteor that destroyed life on earth as the dinosaurs knew it.
Finally, the monetarist consensus holds because it’s the one place where all sides can meet and agree on a solution to economic crisis—the assumption here being that you have to restore the liquidity of the banks and the confidence of investors by “ready lending,” by increasing the money supply. Once you’ve done that, however, once the financial fix is in place, the so-called real economy is supposed to start working. If it doesn’t, some serious government spending is in order, but if you come to rely on a fiscal solution, the rising national debt—the deficit—will sooner or later force you back to monetary policy. In contemporary parlance, to avoid a “sovereign debt crisis,” you have to hope that your ability to manipulate the money supply will do the trick. What happens, however, when effective interest rates are at zero, as they have been since 2008, and yet crisis persists, or when you can’t induce inflation by devaluing your currency? How, then, does the central bank shape the real economy?
Friedman proposed dropping money from helicopters—I am not making this up—if things got that bad. Of course he was trying to be funny, but his earnest apologist, Ben Bernanke, has, practically speaking, followed his hero’s advice, and in doing so has turned the Fed into the lender of last resort for every imaginable kind of institution, from savings banks to hedge funds, even small businesses with commercial paper to discount (that is, to sell at less than face value to replenish their cash flow). The problem with this broadcast approach is not that it makes the Fed a replica of the Reconstruction Finance Corporation, the government agency that replaced the banking system in the 1930s (by 1934, it was loaning directly to businesses large and small, but also to cities and states with budget problems). No, the problem with this approach is that it treats every symptom in the same way, as if there’s only one cure for whatever ails you. A “financial fix” can’t address the causes of the current economic crisis, just as it couldn’t address the causes of the Great Depression, because the supply of money can change, and has changed, without any appreciable effect on the “real” economy. Then as now, the banks are just the messengers, the guys who take the pictures and stick them in oversized envelopes, not the guys who read the film, tell you what it means, and give you the painkilling prescription. Then as now, the banks come bearing the bad news, not the cure for what ails us.
Morality
The three previous headings intersect because in every case the banks are the central figure in the narrative on offer—they made mistakes, they’re too big and powerful, they need fixing. An explanation of economic crisis in terms of Morality would seem to be of a different kind rather than a different degree. But it secretly functions as the master text of these others; for the fundamental question raised by all of them is, what is to be done, and the answer is always another question: How must we change ourselves and our world to avoid this magnitude of disaster? These have become moral questions, just as they did in the 1930s, and especially as they’ve been framed by journalists looking for the big picture. The recent, best-selling Report on the Financial Crisis, a document sponsored by congressional mandate, similarly frames these same questions, and “Wall Street and the Financial Crisis,” a report issued April 13, 2011, by the Senate’s Permanent Subcommittee on Investigations, goes further, announcing that Goldman Sachs deceived its clients and the public, exactly as Matt Taibbi of Rolling Stone had charged.
The question residing in the explanations of economic crisis I’ve so far covered is, What went wrong? What follows is, How can we make things right? Again, these are moral questions because they presuppose a certain responsibility or culpability on the part of some party to the bargain, if only we knew which one. Was it the fools, the monopolists, or the bankers, or do they share an address? The great irony here is that we want somehow to personify the idiocies of the market economy so as to restore its anonymous, providential force, as if we were trying to deflect the arbitrary effects of fate, hoping to humanize the gods by giving them names. The question never asked is, What if nobody’s to blame?
Morality works on several levels, and for good reasons, as an explanation of economic crisis. The rhetoric is familiar, and effective, maybe because the universe it maps is a place no one has ever been. Here are some examples I’ve culled from books, magazines, talk shows, academic conferences, and conversations, all of them perfect clichés, which are by now so ubiquitous that you’d think there was one big book—some all-purpose moral thesaurus—everybody consults when it’s time to pronounce on the real meaning of economic problems.
“Companies and individuals got ‘overextended,’ they got in over their heads, they started looking for short-term gain rather than long-term profit. They squandered resources rather than harboring them, rather than saving for a rainy day and making provision for an uncertain future.... They borrowed against assets they didn’t have for frivolous or venal purposes, meanwhile lying to the banks and to themselves.”
“At the same time, individuals consumed more goods than they needed—they lived beyond their means—but they never produced anything of value. Indeed their consumption of goods became a constraint on sustainable growth because it replaced saving and crowded out investment.... So the ‘deleveraging’ of debt and the ‘liquidation’ of doubtful assets—paying it down, going from deficit to surplus—are the necessary virtues imposed by periodic crisis. Equilibrium of both the financial and moral kind is the silver lining in the dark clouds of economic catastrophe.”
This rhetoric governs discussion of every major economic crisis since the 1890s. For example, in October 1893, at the beginning of a sharp downturn that would last until 1897, Charles G. Dawes, then a young businessman from Lincoln, Nebraska—he would later be the principal author of the Dawes Plan, which sought to balance payments between the winners and losers of World War I—wrote in his diary as follows: “The panic has had one wholesome effect on business. It has weeded out the rotten concerns.... The habit of economy in domestic, business and national expenditures, which has been inculcated by the hard times, will have a good effect after the acute necessity is over.” In 1932, at the trough of the worst economic crisis of the twentieth century, Bernard Baruch, the most influential investor of the moment, advocated balanced budgets all around—“Cut government spending, cut it as rations are cut in a siege,” he cried—on the grounds that only this retrenchment would restore equilibrium. Meanwhile, Andrew Mellon, the secretary of the Treasury, urged his boss, President Herbert Hoover, to “liquidate labor, liquidate stocks, liquidate real estate,” because this would “purge the rottenness out of the system.” When the system was cleansed by this economic version of fasting, the body politic would be restored to health, and the good life would again be visible, measurable, attainable: “High costs of living and high living will come down. People will work harder, [and] live a more moral life.”
Now you would think that this sort of sermon had gone out of style along with opposition to the New Deal. But you would be wrong. By November 2009 the romance of hard times was a staple of journalism, fed constantly by the exhortations of mainstream economics. And there was no predictable political valence at work here; liberals and conservatives alike were urging everyone to cut back, be virtuous, get real, go green, live sustainably, and above all, stop borrowing. All agreed that the crisis was an opportunity to balance every budget, the kind that fits in your moral calendar as well as the kind that ends with the fiscal year. On the Right, George Will, David Brooks, Tony Blankley, and many others worried about the disturbing economic effects of a decline of household savings after 1990; about an economy galvanized by the reckless hedonism of consumer culture; and about the debilitating moral effects of the entitlements built into the welfare state. These worries culminated in the spring of 2010, when Greece began to look like the end, not the beginning, of Western civilization—when columnists of every persuasion started reading the fate of the United States in the sovereign debt crisis of profligate European states.
On the Left, David Leonhardt, William Galston, Joseph Stiglitz, and many others worried about exactly the same issues, always in the hope of inducing more saving and investment, thus less consumption. So did the environmentalists at Adbusters, the glossy, anticapitalist, culture-jamming magazine that parodies famous advertisements and their corporate sources, and n + 1, a New York literary magazine, not to mention the new “frugalistas” like Lauren Weber, Laura Miller, and Curtis White, who have urged us to shrug off the trappings of consumer culture and live cheaply, in view of our manifest needs rather than the artificial desires created by advertising. The recent bipartisan drive to cut discretionary spending, undo “entitlements,” and reduce the federal deficit is yet another instance of a moral imperative presented as an economic necessity.
The master text of Morality thus organized many strands of thought, like a magnet in the vicinity of those fabled iron filings: it was consistent with the notion that Mistakes were made, that Monopoly was the problem, and that Money was the root of all evil. It allocated blame in a comprehensive way, by suggesting that if only everybody hadn’t become so avaricious, so desirous, so loaded up with the material freight of the world, why, that world would be a better place. This master text identifies certain villains, then, but in the end it also announces that we were all at fault: we know who the bad guys are, but we know, too, that our excess enabled them. As the cartoon cliché goes, we have met the enemy, and it is us.
The recent indictment entered by Robert Samuelson, the insightful Newsweek columnist, is a perfect example of this earnest, prophetic, almost biblical voice. In what follows, notice how the pronouns shift from the third to the first person and back again, so that everybody gets a share of the blame: “People were conditioned by a quarter-century of good economic times to believe that we had moved into a new era of reliable economic growth.... Their heady assumptions fostered a get-rich-quick climate in which wishful thinking, exploitation, and illegality flourished. People took shortcuts and thought they would get away with them.”
Yes, people did think the future would look better than the disaster of the 1930s or the 1970s. But is that any reason to berate them—us—for a lack of foresight? No, of course not, that was a rhetorical question. Morality explains too little by addressing too much. The weak version of the moral explanation is that greed, self-deception, chicanery, and stupidity were the real causes of the disaster; but these aren’t even deviations from the norms of modern life—without them there would be no novels, no movies—so how can we say that they explain anything?
The strong version of the explanatory claim that cites morality is much more compelling. It has two variations. First, several knowledgeable observers, among them Jeff Madrick, have suggested that the entire edifice of trading in derivatives—those securitized bonds known as CDOs—was founded on fraud: the banks, particularly the big ones like Deutsche Bank, Citigroup, Lehman Brothers, and Bear Stearns, knew they were selling toxic assets, and they knew well enough to bet against them, at least late in the game (ca. 2007, when the mortgage market was mysteriously buoyed by demand for more collateralized debt obligations). The ratings agencies like Moody’s went along with this charade by giving triple-A designations to assets that barely deserved a double-B.
But Michael Lewis’s careful research shows that ignorance, not fraud, accounts for almost all of the damage. The handful of fund managers who bet against the mortgage-backed CDO market—according to Lewis, these were seven or eight individuals at most, involving no more than five firms—lost money for years because they were betting on a sharp increase of mortgage default rates that didn’t materialize until 2007. Moreover, for all their diligence and foresight, these lonely Cassandras never figured out what was actually contained in the various levels (or “tranches”) of the mortgages bundled by the banks for sale to traditional pension funds and 401(k)s. Everybody was guessing.
The second variation on morality’s strong claim to explanatory adequacy is a less exact but more important science, because its declared enemy is consumer culture. Almost all knowledgeable observers have suggested that an economy driven by consumption—to the extent that it accounts for 70 percent of GDP—is simply unsustainable. We need, then, to consume less, and save more, but public policy cannot force the change of habits and sensibilities that a less hedonistic culture would cultivate; a change of moral season, perhaps even a new culture war, is then needed to deliver these goods. The defect in this version of the strong claim is that, since 1990, consumer expenditures have indeed increased at the expense of saving and investment, but not because the latter were crowded out by the former, and not because the moral fabric of American life has unraveled. Investment has atrophied even as corporate profits have risen because these profits are, for the most part, redundant revenues with no place to go except into speculative markets where so-called securities like CDOs congregate; household savings have declined and consumer borrowing has increased because wages and salaries have stagnated even as executive compensation has multiplied many times over.
Our problem is not morality; it’s surplus capital—a “global savings glut” that can’t be fixed without redistribution of income from capital to labor, from profits to wages, from savings to consumption, an economic repair that’s urgently needed both locally and globally.
THE MISSING M-WORD
So the Four M’s do better as accusations than as explanations. My accounting of the available explanations for economic crisis would be incomplete, however, if another M-word were left off the list. That word is Marxism. Hasn’t the academic Left offered a more or less Marxist analysis of the current crisis, which avoids accusation by asking the questions that the mainstream can’t? Questions like: What if nobody’s to blame? What if the deep structure of capitalism is the culprit? Then what?
The specter of Marx has, in fact, haunted debates about causes and cures of the current crisis, and not just at the invisible margins of classroom discussion, where the ghosts of theories past gather to scare unwitting students and irritate their parents. When mainstream economists and journalists turn back to Keynes, for example, you know that Marx is already waiting in the wings, because, as any number of qualified observers can tell you, the theory of business cycles to be derived from a study of Keynes—it’s called the Harrod-Domar model—was first sketched in Marx’s so-called reproduction schemes, in Volume 2 of Das Kapital, where the changing relation between capitalists’ investment and workers’ consumption became the cause of economic growth or crisis. And when cutting-edge literary magazines like n + 1 start interviewing or celebrating Marxist icons like David Harvey and Eric Hobsbawm as part of an effort to understand the unfolding crisis, you know that old-fashioned political economy has returned from the grave; you know that, like Freddy Krueger, these world-weary Leftists are talented apparitions with sequels to film.
Younger Marxists have contributed to the debates, of course, most notably and brilliantly Robert Brenner, but they often proceed as if the point is to prove that Marx was right about the trajectory of modern capitalism. In this sense, theory becomes the end rather than the means of understanding the evidence at hand: it’s as if the paradigm itself is the issue. Still, that reflexive bias, that self-searching theoretical agenda, is exactly what we should expect when the dominant paradigm—“normal science”—can’t account for the available evidence.
But here’s the rub: a resolute Marxist analysis of our present predicament is not a marginal event without consequence outside of academia. Instead, it’s already entered the mainstream of thinking about this predicament—or so we might surmise from the ecstatic reviews of Brenner and Harvey in reputable, indeed genteel publications like the Financial Times, the Los Angeles Times, the New York Times, the London Review of Books, and the Atlantic, where both were hailed as prophets. In a political culture famous for its paranoid style, how did the overtly Marxist analysis on offer from Brenner and Harvey attain this intellectual dignity and policy-relevant status? In short, both assume precisely what mainstream economists do—that private investment out of profits in manufacturing fuels growth as such by increasing the (fixed) capital stock per worker and improving labor productivity as a result.
Brenner thus explains the secular trend toward stagnation in the post-1973 period as the result of a falling rate of profit in manufacturing, which in turn reduced investment and productivity, ultimately forcing nonfinancial firms to find higher returns in speculative markets outside of goods production. This process is what Harvey calls “investment in asset values,” as against reinvestment of profits in the industry that produced them, and what Giovanni Arrighi called the “financialization” of assets as such in The Long Twentieth Century (1994), a book that serves as the backstory of most Marxist theorizing about contemporary issues.
The argument is plausible, to be sure, but it can’t address two obvious questions. First, why does the shriveled sector of manufacturing qualify as the benchmark of a postindustrial economy? What historical criteria give it this odd privilege? Competition from what were once less-developed countries has certainly squeezed domestic manufacturers; but the industrial labor force in the United States hasn’t grown since 1905, and the profits from this beleaguered sector have long represented a small and declining share of nonagricultural revenues. Second, if profits are in such short supply, how do we explain the “global savings glut”—the surplus capital that fueled the hostile takeovers and merger movements of the 1980s, the
dot-com craze of the 1990s, and then the housing bubble of the early twenty-first century? And here’s a harder fact: the huge discrepancy between investment and retained earnings that characterized both the 1920s and the period since 1983 had no measurable effect on labor productivity; in fact, productivity increases in these decades were phenomenal, just as they were in the immediate postwar moment, ca. 1946–1955, when the capital stock per production worker declined and yet output of goods and services increased rapidly in line with productivity. In any event, the rate of profit is too gross a number to tell us anything about capital invested in this sector or that, and, at least as presented, it ignores the most salient feature of the post-1973 period—the systematic redistribution of national income away from labor, toward capital, away from wages, toward profits.
Harvey draws more directly on Arrighi in explaining the current crisis, or rather the “laws of motion” that govern and limit the accumulation of capital: he has a great deal more to say about the history of capitalism than the causes of the contemporary debacle. Here surplus capital looks to be the culprit, but it is entirely “fictitious,” something specious created by the new rules of debt leverage permitted by the deregulation of banking in the late 1990s. Crisis happens, it seems, when the “space-time configurations” required by orderly accumulation fracture, and the physical, brick-and-mortar limits of capital’s mobility suddenly intrude on the instantaneous exchanges allowed by computer programs and financial derivatives. But Harvey hedges his bets when it comes to natural limits on capital accumulation—“the category ‘nature’ is so broad and complicated that it can encompass virtually everything that materially exists”—and, as a result, he never has to explain how the current crisis exemplifies a radical disjuncture of normal “space-time configurations.”
So we have a rich, painstaking Marxist analysis of recent U.S. economic development, but it serves only as deep background on the story of our time. It doesn’t really attempt to explain the contemporary catastrophe except as yet another ugly instance of what capitalism makes inevitable. Even so, I must agree with Benjamin Kunkel of n + 1 when he declares that “it is only from a Marxian standpoint that the recent credit bubble can be understood.” How else do we grasp the long waves of economic development that made for this disaster? Who else are we secretly citing when we turn, in desperation for answers, to Keynes? Still, the question is, Which Marxian standpoint? “There is always more than one of them,” as Jacques Derrida insists in his most compelling work of mourning, Specters of Marx (1993). To my knowledge, there’s no Marxist account of the Great Depression, or of our own Great Recession—except for my own ecumenical rendition, presented in Chapter 3—that specifies a legible relation between the mysteries of the financial sector and the fortunes of the “real” economy, then and now.
But there is a Keynesian account of this relation that will inform my occasional borrowings from Marx, and will also determine the interest rate on these short-term loans. It’s not in The General Theory of Employment, Interest, and Money (1936), the great divide in twentieth-century economic theory. Instead, it’s in A Treatise on Money (1930), the sprawling two-volume study written before, during, and after the Crash. Here Keynes reported on “the great expansion of corporate saving” in the 1920s, by which he meant the remarkably increased volume of retained earnings that was neither reinvested nor returned to shareholders (as dividends). In other words, he meant what I have called redundant profits, surplus capital, and he meant what the Financial Times columnist Martin Wolf has more recently called “the persistent surplus in the financial sector.”
Keynes deftly used the American scene as his leading example: “In the case of the United States these internal resources of Joint Stock Corporations have been accumulating at a time when, owing to changes in methods of doing business, the amount of working capital has been decreasing rather than increasing, whilst expansion in fixed plant has been proceeding at a moderate rate. Thus industry had large liquid reserves which were available to be placed at the disposal of other developments, for example, building and instalment buying, either direct or through the banking system.”
Translation: Industrial corporations were awash in profits that had no remunerative outlet, so they placed them with the banks, as time deposits, or loaned them directly “on call” in the stock market and to consumers at interest. Either way, these superfluous profits inflated whatever bubbles were available, because they weren’t needed to expand output or productivity: the financial sector, mainly but not only the stock market, metastasized because it became the receptacle, and then the manager, of surplus capital generated elsewhere. As Keynes himself observed, a “profit inflation” coincided with a “deficiency of investment.” This insight into the 1920s and its consequence in the form of the Great Depression is, in my view, still useful, and is perhaps the single most useful insight to be drawn from the Marxian tradition that Keynes unintentionally reinvigorated. At any rate I’ll be using it in the next chapter to explain the Great Recession of our time.
My goal in appropriating Marx and Keynes, however, is to make them useful, not to prove them right. More to the point, my goal in noting the limits of extant explanations for the recent crisis is to show how each of them contains a certain truth, not to prove them wrong. Here I’m trying to follow the advice G. L. S. Shackle gave us in The Years of High Theory, 1926–1939 (1967), a book that traced the origins and echoes of the Keynesian Revolution. He duly noted that “the innovating theoretician needs a ruthless self-belief” because he or she is tearing up the roots of tradition, razing the old “intellectual dwelling places.” Even so, it’s Shackle’s concluding admonition that has stayed with me: “Yet reconstruction must inevitably use much of the old material. Piety is not only honourable, it is indispensable. Invention is helpless without tradition.”