Chapter 8

The Sign of Autumn

The unraveling of the neoconservative Project for a New American Century has for all practical purposes resulted in the terminal crisis of US hegemony—that is, in its transformation into mere domination.

—Giovanni Arrighi, The Long Twentieth Century[1]

In the modern global economy, there is one statistic that stands above all others as a measure of how things are going. That statistic is GDP growth. GDP, which stands for gross domestic product, is extraordinarily complicated to calculate, whether for a single country, an economic region like North America or eastern Europe, or the world as a whole. What GDP purports to measure, however, is simple: It is the monetary value of everything produced and sold in a given territory in a given year, encompassing both goods (clothing, appliances, groceries, raw materials, televisions) and services (health care, legal work, performances of all kinds, restaurants, educational instruction, and nail salons). When the Ford Motor Company sells a $35,000 car in the United States, U.S. GDP goes up by $35,000. When a barber cuts someone’s hair for $35, GDP increases by $35. And while the absolute level of GDP is important in its own right—in the United States, that level came in at $25.46 trillion in 2022—it is the change in GDP over time, or growth, that really matters. Growth is calculated by measuring total economic output at a given point and then comparing that figure with the level of output that existed a year earlier. When journalists and economic officials announce that growth currently stands at 2 percent, they mean that the total value of what an economy currently makes, buys, and sells is 2 percent greater than what that same economy made, bought, and sold a year earlier. When GDP growth is high enough, an economy is said to be in an expansion, which is generally what people want, so long as the expansion isn’t accompanied by something like undesirably high inflation. If GDP growth is negative over two consecutive financial quarters—that is, the economy is shrinking rather than growing—you have a recession, which everyone is always trying to avoid. If GDP growth is positive but too low to make people happy, an economy is said to be stagnating. Stagnation is almost as bad as recession. In some cases, it’s worse.
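To make the definitions above concrete, here is a minimal sketch of how year-over-year growth and the informal two-consecutive-quarters recession rule can be computed. The GDP figures and the growth_rate helper are made up for illustration and are not drawn from any official data source.

```python
# Hypothetical quarterly GDP levels, in trillions of dollars (illustrative, not official data).
gdp_by_quarter = [25.0, 25.2, 25.1, 24.9, 24.7, 24.8]

def growth_rate(current, previous):
    """Percentage change of `current` relative to `previous`."""
    return (current - previous) / previous * 100

# Year-over-year growth: compare a quarter with the same quarter four quarters earlier.
yoy = growth_rate(gdp_by_quarter[4], gdp_by_quarter[0])
print(f"Year-over-year growth: {yoy:.1f}%")  # -1.2% for these numbers: the economy shrank

# Informal recession rule: two consecutive quarters of negative quarter-on-quarter growth.
qoq = [growth_rate(b, a) for a, b in zip(gdp_by_quarter, gdp_by_quarter[1:])]
in_recession = any(x < 0 and y < 0 for x, y in zip(qoq, qoq[1:]))
print("Two consecutive negative quarters:", in_recession)  # True for these numbers
```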

That growth should be the measure of an economy’s success or failure is a relatively new idea. Today it is such an omnipresent object of concern and discussion that one might be forgiven for assuming that it has always been so, that ancient, feudal, and other premodern economies were also directing their economic efforts squarely at driving up the production and consumption numbers. That assumption would be wrong. Human economic output barely increased at all until a few hundred years ago. What growth increases societies did manage were so minor, and occurred over such a long period of time, that they were completely imperceptible to the people who lived through them. The vast majority of the people who have ever lived experienced their worlds and societies as more or less unchanging, with human technological progress too halting and uncertain to transform much of anything over the span of a single lifetime. There is not much reason to try to measure something if you’re not aware it’s happening to begin with, and so it makes sense that GDP wasn’t even invented as an object of statistical interest until 1934, when the economist Simon Kuznets presented Congress with the first-ever calculation of the national income between 1929 and 1932. Within a quarter century, GDP growth would reign over economic policy the way the transit of Venus reigns over our love lives in astrology, with the ideal or “correct” levels of inflation, unemployment, and other economic metrics all derived at least in part from the one big number. In 1962, Arthur Okun, an economist with President Kennedy’s Council of Economic Advisers, decided that for every 2 percent increase in GDP, unemployment would drop by 1 percent. Economists were so confident by this time in the hard reality of GDP, and so certain in the predictive value of their models, that the idea quickly acquired the impressive name Okun’s law.
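Expressed as a formula, the rule of thumb attributed to Okun in the text looks roughly like this; the symbols and the one-half coefficient below simply restate that claim, while published versions of Okun's law typically measure output growth relative to potential and estimate the coefficient from data:

```latex
\Delta u \;\approx\; -\tfrac{1}{2}\, g
```

where \(\Delta u\) is the change in the unemployment rate in percentage points and \(g\) is the percentage growth of GDP, so that \(g = 2\) implies a drop in unemployment of about 1 point.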

Economic growth went officially unnoticed until the interwar period not because people had failed to pay attention over the previous millennia but because, until capitalism, there was hardly any growth to notice; by the twentieth century, growth really had taken up an unprecedented role in how people made and consumed the things their societies needed to function. Unlike all prior economic systems, capitalism, which began to develop in the fifteenth century but only started to achieve its explosive potential with the Industrial Revolution, is wholly dependent on growth. Its unique innovation was to mandate that the profits secured by any process of economic production—whether the harvest of a successful crop, the construction and sale of new housing, or the invention of a groundbreaking medical treatment—be invested back into the production process, thus enabling that process to expand. The mandate wasn't formal. No laws stipulated that non-expanding business enterprises would be shut down. But by subjecting all economic production to the requirements of the market more thoroughly and rigorously than any prior economic system, capitalism ensured that if firms failed to grow, they would soon find themselves undercut by competitors and driven out of business. Growth is the point of capitalism. Like an engine in a car, it makes the economy go. It is a very powerful engine if conditions are right; capitalist economic growth transformed human existence more comprehensively and at greater speed than any other invention in history. But it comes with a downside as well. To complete the analogy, if the engine breaks down—that is, if growth slows too much—the car can't go anywhere.

This basic fact is a crucial bit of context for those who want to understand why the United States launched the war on terror. In the years following the invasion of Iraq, commentators offered a wide range of explanations for the Bush administration’s decision. As we saw in the last chapter, some people said the invasion was a play for tighter control over the world’s oil supply. Others found an explanation in the realm of Freudian psychology, arguing that Bush wanted to succeed where his father had failed and get rid of Saddam for good. Still others saw it as a kind of national foreign policy reflex. Writing for The New York Times Magazine in September 2003, the Canadian academic and liberal politician Michael Ignatieff wrote that Iraq needed to be placed “in the long history of America’s overseas interventions.” “From the very beginning,” he wrote, “the American republic has never shrunk from foreign wars.”[2] But this explanation rather begged the question. Explaining that America invaded a foreign country because it has always invaded foreign countries doesn’t help anyone understand why America does that. There were also those who offered cynical, borderline-conspiracist, wag-the-dog-style explanations for the war, arguing that Bush was working Americans into a militarist frenzy so as to distract voters from his domestic policy failures. But none of these mainstream commentators ever noted the larger economic and geopolitical situation that framed the whole project: The war on terror was launched thirty years into an alarming and steady slowdown in global economic growth, and this slowdown threatened America’s ability to remain the world’s most powerful country.

For our purposes, the second thing to know about capitalism is that throughout its five-hundred-year history, there has always been one political entity that is more or less in charge of the whole system. I say “political entity” rather than “country” because the early history of capitalism predates the invention of the nation-state, particularly the nation-state in its modern form. Some 150 years before the 1648 Peace of Westphalia began to shape our ideas of statehood and sovereignty, a number of Italian city-states, including Venice and Genoa, began to use long-distance trade and high finance in such a way as to give birth for the first time to the recognizable features of a capitalist system. In his history of global capitalism, The Long Twentieth Century, Giovanni Arrighi explains that between 1450 and 1650, “the Italian city-states in general and Venice in particular” were able to seize “monopolistic control over a crucial link in the chain of commercial exchanges that connected Western Europe to India and China via the world of Islam.”[3] As the Spanish Empire found itself embroiled in a series of escalating (and very expensive) religious wars in the Islamic world as well as intra-European military conflict back home, innovative Italian merchant-bankers, working hand in hand with the respective ruling families of their city-states, skillfully navigated one crisis after another. Their banks lent huge sums of money to the warring Iberians at the same time as their own armies and traders established control over the trade routes that were providing all of western Europe with its wool, silk, and other valuable commodities. With the Spanish dependent on the Genoese to fund their imperial wars, and with much of the rest of the Mediterranean world dependent on Italian trading networks for goods, money and trade both concentrated themselves in the city-states. As the cumulative profits from the twinned practices of banking and trade accumulated, they could be invested back into the further expansion of banking and trade. As a result, the city-states emerged as the first recognizably capitalist powers in history. This was a new model of state power, one that would eventually supplant (at least in part) an older one centered on the direct seizure and control of land, or territory. “Territorialist rulers tend to increase their power by expanding the size of their container,” as Arrighi puts it. “Capitalist rulers, in contrast, tend to increase their power by piling up wealth within a small container and increase the size of the container only if it is justified by the requirements of the accumulation of capital.”[4] This is obviously a generalization: Nations have continued to pursue the expansion of their physical territory through the present day, as Russia is currently attempting in Ukraine. But it is a useful generalization because it centers our focus on what has become the primary driver of geopolitical power over the last few centuries. It is not the country with the largest landmass or the largest population that gets to sit on top of the pyramid; instead, it is the country that can both amass the most capital within its borders and effectively deploy that capital throughout the world economic system. Capital accumulation is the key.

One of Arrighi’s key insights is that capital accumulation happens in cycles rather than in a straight line that slopes upward at a constant rate. The Italians were in charge of the first cycle, and we’ll discuss the other three shortly. Arrighi also breaks down each cycle into two stages. First, there is the material expansion. That is when the leading power’s accumulation produces gains in the real economy—that is, more goods, more trade, and more access to the things and services that people use in their daily lives. When the Italian trade networks resulted in the increased production and use of wool, that was part of the city-states’ material expansion. Eventually, however, an expansion’s material phase runs up against some hard limits. One of these limits is increased competition. After watching the Italians amass enormous commercial and political power via their trade routes, other groups established their own competing trade networks, or else they sent their ships along the routes the Italians had already established in order to siphon business away from the city-states. This competition makes it harder to be confident that you’ll get the returns you want on the capital you invest. If your company invents a new product that people love, your initial profits are going to be high, because people are clamoring to get their hands on the thing that no one else makes, and you can basically charge whatever you want for it, within reason. But once there are ten other companies making their own versions of the same thing, profits go down as companies struggle for pricing or efficiency advantages around the margins. But capital can’t just sit around in a Genoese savings account either, because then its returns drop even further, essentially to zero. Something has to be done with it, because capital has to keep moving to survive. Its movement, or circulation, is what makes it capital. When a venture fund invests $1 million in a start-up, it is deploying capital; when an eccentric great-aunt hides $1 million in $100 bills under her floorboards, it’s just cash.

That’s where the second stage of an expansion begins. Sitting on top of a big pile of money but facing a dearth of sufficiently profitable real-economy opportunities for investment, the city-states gradually abandoned the commerce that had made them into great powers in the first place and started using their capital to finance the real-economy projects of other people instead. Over the course of the fifteenth and sixteenth centuries, the Genoese “switched from commodities to banking” and eventually “withdrew from commerce.”[5] Now their profits would come not from raw materials or other goods but from the interest they could charge on loans made to others (among other financial instruments). Arrighi calls this stage the “financial expansion,” and in agreement with the French historian Fernand Braudel, he calls the switch from material to financial expansion a “sign of autumn,” a clear signal that the city-states’ power had matured, was now on the wane, and would eventually end.[6]

But just because the city-states went into decline didn’t mean that capitalism was declining as well. Capitalism’s longer-term expansion was just beginning. While the Italians spent the final years of their preeminence lapping up the profits to be had from high finance, other nations were making use of that money to establish the beginnings of capitalism’s second cycle of material expansion. By the end of the sixteenth century, it was clear that the United Provinces, known today as the Netherlands, would be in charge of this second cycle. An eighty-year war for independence from the Spanish crown, which ended with the Peace of Westphalia, had sparked the Dutch ascent to the top. After Spain invaded the Provinces to enforce the collection of taxes, the Dutch rebelled and took to the water, using piracy and privateering—the latter term referring to state-sanctioned piracy—to drain money and resources from the Spanish and stock up the coffers back home. In addition, the Dutch were located in an ideal spot from which to control the trade routes that traversed the Baltic Sea, and these would become increasingly important to the European economy as a whole as the Mediterranean trade network managed by the Italians reached the point of exhaustion.[7] By the time the Dutch achieved full recognition as an independent state in the mid-seventeenth century, piracy had made them rich, the Baltic trade network had made them an indispensable center of European commerce, and nearly a century of maritime fighting had made them some of the most skilled sailors on earth at a time when the fortunes of both commerce and warfare were primarily determined on the seas. The Dutch were ready to take charge.

You don’t become the new leader of world capitalism by doing the same things your predecessors did. In addition to their geographic luck, the pirated wealth that made them rich from the outset, and their naval prowess, the Dutch were successful because they innovated. They commissioned the jurist and philosopher Hugo Grotius to write the book Mare liberum, an important early document of international law, which argued that the seas were international territory and thus free to use by any nation that wished to engage in trade. The Dutch also synthesized commercial, financial, and political power in ways the city-states had not. Venice, as one example, had established commercial, political, and military self-sufficiency, but at the cost of limiting itself to regional (as opposed to continental or global) power. Genoa, as a second, had projected its commercial and financial power across Europe and out into the wider world, but it was dependent on the Spanish Empire for military and political influence. The Dutch, by contrast, were the first capitalist power to establish the self-sufficiency of a modern state and project influence across the globe. Ships in the employ of the two major Dutch trading enterprises, the Dutch East India Company and the Dutch West India Company, could be chaperoned wherever they went by the large and technologically sophisticated Dutch navy. That navy could also enforce the administration of the global network of colonies from which traders returned to market with their goods, a network that included large swaths of Southeast Asia, portions of India and South Africa, and a big chunk of the present-day American mid-Atlantic, including New York City, which the Dutch founded as New Amsterdam. Just as important, the Dutch ensured that the money and goods generated by their trading empire, however far-flung their origins might be, eventually circulated back through the Provinces. Amsterdam became not just the political but also the physical center of world commerce. Its famous warehouses could store enough grain to feed all of Holland for a decade without beginning to run out of room, and its stock market—the first in history to remain in permanent session—attracted what Arrighi calls “idle money and credit” from all over Europe.[8] Nowhere else could that money be put to good, profitable use so quickly. This degree of ease in buying and selling company stock was unprecedented, and investors enamored of Amsterdam’s user-friendly financial system poured money into Dutch commerce, enabling even more expansion. As the author of Robinson Crusoe famously put it when not writing novels,

The Dutch must be understood as they really are, the Middle Persons in Trade, the Factors and Brokers of Europe…they buy to sell again, take in to send out, and the greatest Part of their vast Commerce consists in being supply’d from All Parts of the World, that they may supply All the World again.[9]

The Dutch demonstrated to the other European powers the benefits of centralizing within a single political entity all the component parts of long-range commerce, including the financing of large corporate enterprises, the use of military force to protect commercial interests, and the purchase, transport, storage, and sale of goods. Having watched the Dutch successfully implement this centralization, however, other powers began to imitate it, at which point the material phase of the Dutch expansion, like its Italian predecessor, hit its limits. Just as the Italians had, the Dutch sought to overcome these limits by switching their focus from commercial trade to financing the commercial efforts of others. Expensive and escalating conflicts among the European powers, which culminated in a wave of revolutions that swept across the continent in the eighteenth century, meant that there was plenty of demand for all the money the Dutch had to lend out. From the perspective of Dutch financiers, it was a time of exuberance, but there was ultimately no getting around the fact that autumn had arrived for history’s second great capitalist power. Eventually drawn into Europe’s intra-continental wars, the Dutch saw their navy destroyed, their commercial influence diluted by competition, and demand for their financing eroded.

It was the British who destroyed the Dutch navy in the Fourth Anglo-Dutch War of 1780–84, and it was the British who replaced the Dutch at the top of world capitalism, a position they held through the end of the nineteenth century. Building on the innovations pioneered by the Italians and the Dutch—the synthesis of trade, profit seeking, and high finance in the first case, with the additional backing of domestic military strength in the second—the British exploited Dutch weakness to make London the new financial center and then revolutionized how the world produced and transported goods with their technical advances in heavy industry. The revolution started in the field of what are called capital goods—that is, the things people need to build in order to be able to make lots of other things. Railways are capital goods, as are machines that produce textiles and the iron that goes into a steam engine, and the British were at the vanguard of all three. Iron smelting came first, and although England soon had the capacity to produce more iron than could be profitably employed at home, this surplus capacity was a powerful spur to economic growth in two ways. First, iron producers went looking for new ways to use the materials they could now manufacture on a large scale, and they found what they were looking for in railways and steamships. Second, England's liberal trade policies helped the ironworks to expand beyond their domestic market and create demand for capital goods all over the world. Wherever British capital goods gained a foothold, other kinds of commerce could soon follow, from the export of British textiles to new markets to the import of raw materials and other commodities from those new markets. These transactions often took place at gunpoint, if not always literally. Like the Dutch, the British used their formidable navy to clear away obstacles to the spread of their economic empire, and in cases of particularly tenacious resistance, British administrators were installed to keep recalcitrant peoples in line. By the early twentieth century, the British Empire was the largest in world history, administering colonies and territories that accounted for about a quarter of the world's land surface.

By that point, however, autumn had arrived again, and in the usual manner. Beginning around 1870, the global expansion of trade and commerce that had been kicked off by Britain’s colonialism and technical advancements in mechanization began to generate competition. “An increasing number of business enterprises from an increasing number of locations across the UK-centered world-economy,” Arrighi writes, “were getting in one another’s way in the procurement of inputs and in the disposal of outputs, thereby destroying one another’s previous ‘monopolies’—that is, their more or less exclusive control over particular market niches.”[10] As competition drove down profitability, those who held the U.K.’s capital began to shift their focus away from the material expansion of the real economy and toward the allure of high finance, while in the meantime ongoing economic competition slowly but steadily escalated to the generalized conflict of the world wars. The U.K. would retain its position atop the financial world well into the twentieth century, but the material center of global capitalism was already shifting across the Atlantic Ocean. The American century was under way.

It was only when the United States took up its leadership position at the head of global capitalism that world GDP growth truly exploded. Although retroactively ascribing growth figures to economies that did not think in terms of growth is a somewhat dicey exercise, estimates combining data from the World Bank with research from the Organisation for Economic Co-operation and Development (OECD) can help to sketch a general picture. Over the entirety of the first millennium of the Common Era, world GDP increased by barely 15 percent, expanding from $182 billion in the year 1 (in 2011 dollars) to $210 billion in the year 1000, for an annual growth rate of just over one-hundredth of 1 percent (on a per capita basis, there was no growth at all, because population growth kept pace with the millennium's meager increases in economic output). Over the subsequent five hundred years, world GDP more than doubled to $430 billion, but annual growth of under two-tenths of a percent would still have been more or less imperceptible over the course of a human life. The combined efforts of the Italians, Dutch, and British made it possible for world GDP to quadruple between 1500 and 1870, but it was only after 1870, with America's post–Civil War industrialization and arrival on the world stage as an economic force, that growth began to accelerate at the rates we've all become used to over the last century. By 2000, the level of total world GDP stood at more than $63 trillion, eighteen times its level when the twentieth century began.[11]
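As a rough check on the arithmetic, the implied compound annual growth rates can be recovered from the start and end levels quoted above; the calculation below uses only the text's own figures:

```latex
\left(\tfrac{210}{182}\right)^{1/999} - 1 \approx 0.00014 \quad \text{(about 0.014 percent per year, years 1 to 1000)}

\left(\tfrac{430}{210}\right)^{1/500} - 1 \approx 0.0014 \quad \text{(about 0.14 percent per year, years 1000 to 1500)}
```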

To live through the hundred years from 1870 to 1970 was to experience the fastest and most profound transformation of daily life in history. As Robert J. Gordon puts it in his book The Rise and Fall of American Growth,

Manual outdoor jobs were replaced by work in air-conditioned environments, housework was increasingly performed by electric appliances, darkness was replaced by light, and isolation was replaced not just by travel, but also by color television images bringing the world into the living room. Most important, a newborn infant could expect to live not to age forty-five, but to age seventy-two. The economic revolution of 1870 to 1970 was unique in human history, unrepeatable because so many of its achievements could happen only once.[12]

That it was America that got to lead the world through this historically unrepeatable series of transformations cannot be chalked up to any kind of national genius, nor can we credit the Italians, Dutch, and British with “deserving” their periods at the top of the economic hierarchy more than their competitors. It would be more accurate to say that capitalism itself, an economic system that operates according to a number of internal and impersonal laws, went “looking” for a new leader as its old one failed, like a parasite—please excuse the word’s pejorative connotations—that goes looking for a new host after the old one has outlived its usefulness. America was in the right place at the right time. Because it was connected to Europe by ties of ancestry, diplomacy, and trade, the United States was able to share in the fruits of Great Britain’s technological advances without too much delay. And because it was separated from Europe by an ocean as well as the Monroe Doctrine of 1823, the United States was not subject to the same competitive pressures that constrained other nations that might otherwise have looked to supplant the British Empire. Much has been made of the fact that America didn’t pursue a colonial empire in the manner of the European powers, but to really believe this requires a degree of historical amnesia. Americans spent centuries fighting to displace the hundreds of Native societies that were spread across the continent when the first colonists arrived. What distinguishes the United States from the European empires, then, isn’t its supposed lack of interest in colonies but rather its unique willingness to follow the colonization process all the way to the end. This process of “internal” expansion also provided the United States with abundant and cheap land, which both made it possible for the country to accept huge numbers of immigrant workers and helped to serve as a release valve for social pressures and conflicts in the major urban-industrial centers. Combined with the government’s relatively more isolationist policies until the Japanese bombing of Pearl Harbor, these advantages made it possible for American capitalism to grow and develop secure in the knowledge of its territorial inviolability while the rest of the capitalist world tore itself apart in military conflict.

These advantages had essentially fallen into America’s lap, but America made good use of them. Britain’s advances in mechanization had made the dream of mass production possible, but it was Americans who made the dream real. The railway and telegraph facilitated the transport and distribution of goods across an entire continent. Frederick Taylor and Henry Ford invented manufacturing systems that made mass production much more cost effective than operating on a smaller scale. And the advent of advertising agencies, chain stores, and nationally distributed mail order catalogs made it possible to sell goods directly to anyone in the country. Urban and rural Americans alike were being integrated into an economic project that was producing and distributing goods at the fastest pace in history. And the effects of this integration cannot be understood simply in terms of the numbers of shoes or cars or even complete, ready-to-assemble houses (yes, really) that were sold by the Sears, Roebuck catalog during the first half of the century; the American capitalist expansion fundamentally changed people’s relationship to the economy. Many goods and services that had been either produced within the home for domestic use or else exchanged only among members of the same community were folded into the logic of mass production and profit seeking. Instead of sewing clothes at home, families began to purchase their outfits from companies that might be located thousands of miles away. Fireplaces and kerosene lamps, each of which had to be individually maintained and monitored, gave way to networked electricity grids woven through the infrastructure of entire cities. Local musicians lost work to professionals whose performances could now be recorded and heard anywhere in the world. In sum, a whole universe of household tasks that had previously required enormous expenditures of time and muscle power could now be completed using cars, boilers, washing machines, vacuum cleaners, and dozens of other appliances that are categorized today as “consumer durables,” meaning relatively expensive consumer goods that are expected to last for years after purchase. The Italian, Dutch, and British expansions had all effected historic transformations in how commerce worked, how governments and economies interacted, and how the capital goods that made large-scale manufacturing possible were produced. But under none of those three empires did capitalism make itself felt in people’s homes and daily lives the way it did under the Americans.

As the United States rode the consumer durables expansion up the ladder of world power during the first half of the twentieth century, patriotism itself became entwined with the economic miracle the country was in the process of engineering. It would not be completely unfair to describe the “American dream” as fundamentally a vision of life made comfortable by the availability and affordability of various consumer durables, particularly the car and the appliance-filled, owner-occupied, single-family dwelling. As this vision shored up national self-belief at home, it also advertised the American way of life to people around the world. Just because America would not be pursuing a formal empire in the British manner did not mean it would put any less effort into projecting influence abroad. If the American economy was going to grow at its full potential, it would have to integrate as much of the rest of the world as possible into its system of manufacture and trade.

By placing itself at the head of a global economic system rather than trying to function as a self-contained economic powerhouse, the United States would accomplish two things. First, being able to sell manufactured goods to the rest of the world protected the United States in the event that domestic demand alone couldn’t soak up everything America’s factories were turning out, shielding the United States from the kinds of rapid price fluctuations and profitability squeezes that can throw a more isolated economy into crisis. Second, the more America invested and sold internationally, the more it familiarized other countries with the standard of living it was achieving domestically, which in turn made it easier for the rest of the world to accept America’s economic and political leadership. As Arrighi points out in The Long Twentieth Century, this is a crucial step for a capitalist power to take. All dominant powers use coercion and violence to get what they want, but they can’t only use coercion or violence without triggering a general war that will see them supplanted at the top of the pyramid. They must also credibly claim to be acting on behalf of larger, more general interests, such as market liberalization or economic prosperity for all. This claim is “always more or less fraudulent,” Arrighi writes—that is, the dominant power’s self-interest will always come first—but if it is “at least partly true and adds something to the power of the dominant group,” then a state can be said to have achieved hegemony, the biggest geopolitical prize there is.[13]

The United States achieved hegemony with the end of World War II. The old order had been consumed by fire, and America was the only industrial power in a position to put a new one in place. Europe and East Asia were substantially in ruins, and decolonization was swallowing up the remnants of the old empires. But America had sacrificed much less than the war’s other combatants in terms of people and matériel, it had used the war as an opportunity to expand its manufacturing capacity, and the geographic protection afforded by two oceans had left its domestic infrastructure almost totally unscathed by combat. Even before the war’s formal end, the United States set out to remake the shattered economies of Western Europe and East Asia in its own image. The Bretton Woods Conference in 1944 saw representatives from forty-four Allied nations agree to a new monetary regime under which countries would guarantee the stability of exchange rates between their own currencies and the U.S. dollar, which at a stroke turned the dollar into the currency that no one in the world could afford to do without. The United States spent billions of dollars on postwar reconstruction in Europe, guaranteeing demand for American materials and products for decades, and the outbreak of the Korean War gave the United States an excuse to invest heavily in Japan as well. (Within thirty years, Germany and Japan, America’s primary antagonists during the war, would be two of the most dynamic economies in the world.) Finally, the enormous levels of military spending made possible by the beginning of the Cold War allowed the United States to pump even more money into the world economy.

The result of all of this was twenty-five years during which the world economy grew at the fastest rates ever recorded. The French, rounding the time frame up by five years, simply named the period Les Trente Glorieuses. In the 1950s, world real GDP—“real,” as opposed to “nominal,” being the version of GDP that accounts for the distorting effect of inflation on output figures—grew at an average annual rate of 4.5 percent. In the 1960s, that figure jumped to roughly 5.5 percent, which is astounding.[14] A car company that started the decade making some 100 cars a week and then grew at a rate of 5.5 percent each year would be making some 170 cars a week ten years later, a 70 percent increase. Apply that increase to the economic production of everything in the world, and it is not hard to understand why so many people felt themselves to be living in the midst of an economic miracle. Anxieties regarding the Soviet menace made the United States willing to export capital investment as well as technical know-how to Germany and Japan, kicking off a global manufacturing boom that consistently routed healthy profits back to American firms.[15] And at home, the government took halting but ultimately effective steps to maintain social peace by routing enough of the windfall back to the workers who ultimately made it possible. Part of this was accomplished through the GI Bill, which funneled immediate financial benefits to war veterans and all but guaranteed broad access to education, employment, and homeownership. Another part saw the government cultivating stable relationships with the country’s major labor unions, which in turn kept their rank and file in line and avoided the kinds of disruptive labor actions that had defined the period between the two world wars. A third part was accomplished by the civil rights movement, which was to a significant extent an effort to win for Black Americans an acceptable share of the postwar boom’s benefits.[16]
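The car-company example is simple compounding, using only the figures already given in the text:

```latex
100 \times (1.055)^{10} \approx 170.8
```

which is roughly a 70 percent increase over the decade.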

This approach was so successful that it was easy to miss the source of its fragility, which was that the whole thing depended on growth continuing to increase at its spectacular pace. Beginning in the 1970s, however, the American expansion, just like the three major capitalist expansions that preceded it, hit its limits, and growth began to slow down. Manufacturing outside the United States had advanced to the point that international firms no longer served just to satisfy the excess demand for manufactured goods that American firms lacked the capacity to produce. Now those firms began to compete directly with American companies for market share, including in the previously impregnable American domestic market. As low-cost firms began to eat into the profits that American manufacturers were by this point accustomed to, Aaron Benanav writes, “rates of industrial output growth in the U.S. began to decline starting in the late 1960s.”[17] This was the beginning of the deindustrialization process that came to dominate American political debate during election campaigns. The United States responded by taking the dollar off the gold standard and dismantling the Bretton Woods monetary system in 1971. The hope was that eliminating the dollar’s gold peg and allowing its price to fall would make American-manufactured goods more competitive internationally (the more expensive your currency, the more expensive your exports will be to international buyers). That worked to an extent for a little while, but ultimately it generalized the problem rather than solving it. Competition for manufacturing market share increased around the world, and the resulting decline in profits discouraged new investment.

People didn’t know it at the time, but the world had entered what is now a full half century of slowing global growth, and the trend shows no signs of reversing. Average annual GDP growth was just over 4 percent in the 1970s, a decline of nearly 1.5 percentage points from the prior decade. In the 1980s, it fell further, to around 3 percent. In the 1990s, it barely topped 2 percent.[18] The global economic pie has continued to grow since the 1970s, but at a slowing pace that is insufficient to keep the capitalists happy. Reluctant to invest in new business lines or expanded capacity that can’t provide the required rates of return, businesses have instead routed growing shares of their capital into asset markets, whether through share buybacks, increased dividend payouts, or other financial instruments. These investments are certainly profitable, but the only thing they grow is paper wealth. Financialization does nothing to grow the part of the economy that actually makes things for people to use. The economic historian Robert Brenner sums it up well:

The fundamental source of today’s crisis is the steadily declining vitality of the advanced capitalist economies over three decades, business cycle by business cycle, right into the present. The long-term weakening of capital accumulation and of aggregate demand has been rooted in a profound system-wide decline and failure to recover the rate of return on capital, resulting largely—though not only—from a persistent tendency to over-capacity, i.e., oversupply, in global manufacturing industries.[19]

In short, the world has too many factories, not enough demand to buy all the things those factories are capable of producing, and declining growth as a result, which only compounds the problem. Individual companies and countries have still found success in this environment, but only at the expense of others; slow growth turns the global economy into a zero-sum game that produces at least one loser for every winner rather than growing the pie quickly enough for most people to do well. One of the terms economists use for this situation is “secular stagnation.”[*]

The growth slowdown has caused all kinds of problems, some of which I’ll discuss shortly. But the first problem to highlight is the threat that secular stagnation poses to America’s position of global leadership. To be a little crude about it, America became the most powerful country in the world because it was the first country that figured out how to manufacture consumer durables like cars and washing machines on a large scale and how to do so at sufficiently low cost that many people could afford to buy them. Then it not only sold those goods to the rest of the world, but it also provided enough capital investment and technical expertise for other countries to learn how to make those goods themselves. Thus permitted to hitch their wagons to America’s growth engine, countries in Western Europe and East Asia (today they’re referred to as “developed nations”) helped to support the United States as the lead player in global power politics.

But as growth slows and the benefits of voluntary subordination to American interests decline, that support becomes harder to ensure. The world basically has enough cars and washing machines—far too many cars, in fact—and America can’t reignite global growth by producing more of them. The problem is that nobody seems to know what America should produce instead, what combination of goods and services and raw materials can not only satisfy some hitherto unnoticed global economic need but also help America maintain its competitive economic advantages. But just because an economic solution to the problem of secular stagnation hasn’t presented itself doesn’t mean it’s not worth trying political or military solutions instead. At its foundation, that’s what the war on terror is: a potential solution to the problem of slowing global growth and America’s declining power.


The most important economic problem caused by slowing global growth is one that economic statistics don’t do a very good job of capturing. One consequence of increased competition in manufacturing is that the amount of money firms spend on wages for human employees declines relative to the amount of money they spend on technological improvements to the production process. This is because spending x amount of money on upgrading the machines in your factory will produce a greater increase in output and productivity—the latter defined as output per worker—than spending that same amount on hiring more workers. Pursuing cost efficiency above all else in an effort to eke out a higher profit margin, companies can reduce the relative share of their spending that goes to labor costs in two ways. One is to focus on outsourcing and replace the workers you currently employ with workers elsewhere—say, in China—who will do the same job for less money. Another is to hire fewer workers than you used to. Since the 1970s, capitalists have done both. They’ve chased pools of inexpensive labor all over the globe, dividing up the production process into a bunch of little parts and then sending each individual part to wherever it can be completed on the cheap. And they have also shifted away, wherever possible, from employing people full time, relying instead on contract labor, piecework, interns, adjuncts, and so on. The result has been a stunning increase in the number of people globally who cannot survive on the amount of work that is available to them in the formal economy, and who must piece together a constantly changing series of part-time and temporary jobs, none of which include benefits or robust worker protections, in order to get by.

It is this global collapse in stable, full-time employment that economic statistics have failed to capture. In his recent book, Automation and the Future of Work, Aaron Benanav notes that when growth first began to slow in the 1970s, unemployment rates went up in a number of wealthy countries, and “outside of the United States, they remained stubbornly high for decades.”[20] This was a serious problem for government unemployment insurance programs, which had all been designed to handle temporary unemployment increases resulting from the normal ups and downs of the business cycle rather than permanently elevated unemployment caused by the fact that the golden age of manufacturing was over. “To coax the unemployed back to work,” Benanav writes, “governments began to reduce labor market protections and scale back unemployment benefits” (President Clinton’s “welfare reform” initiative is probably the most notorious example of this phenomenon in the United States). With the safety net disappearing, Benanav writes, “few workers remain unemployed for long. No matter how bad labor market conditions become, they still have to try to find work, since they need to earn an income in order to survive.”[21] Shunted out of full-time or union jobs into at-will employment and part-time work, these workers find themselves caught in the teeth of a brutal shift in the labor market, but one that unemployment statistics do not reflect. That’s because unemployment figures primarily count people who are both not currently working at all and actively in search of full-time work. If your work life consists entirely of driving a car for Uber a couple of hours each week, you don’t count as unemployed, no matter how much trouble you’re having paying the bills, and no matter how desperately you’re searching for a salaried job with benefits. And if you become so discouraged by the whole thing that you give up and drop out of the workforce entirely, you don’t count as unemployed, either, even though “unemployed” is exactly what you are. The result is persistent underemployment rather than unemployment, a situation in which the economy’s vanishing core of full-time jobs is replaced by gig work that may provide a means of temporary survival but cannot offer the kinds of security and stability around which it’s possible to build a dignified life.

And that’s in the wealthy countries. In middle- and low-income nations, the consequences of secular stagnation have been catastrophic. In his 2006 book Planet of Slums, the great historian and political writer Mike Davis pointed out that since the 1970s the world’s poor countries have urbanized at a rate that puts Europe’s experiences of nineteenth-century urbanization to shame.[22] But whereas those European experiences of urbanization were part of an industrial revolution that saw growth accelerate dramatically, the more recent urbanization of what used to be called the Third World has largely occurred in places where economic growth is either low, slowing down, or nonexistent. Megacities such as Lagos (home to 20.7 million people in 2002), Cairo (14.2 million), Manila (26.4 million), Mexico City (24.7 million), São Paulo (22.7 million), and Jakarta (28.6 million) have grown so much not because they are hotbeds of economic opportunity but because the best way for people to survive when they cannot find secure, salaried labor is for them to live around as many other people as possible. The growth of these sprawling urban agglomerations accounted for some two-thirds of the world’s human population increase between 1950 and 2006, and the expectation is that after 2020 or so, population growth will happen entirely within cities until peaking at around 10 billion near the midpoint of the twenty-first century. These megacities are not monuments to the possibilities of human achievement, however. An urban archipelago centered on Lagos and running along the Gulf of Guinea from Benin to Accra, for example, now has a population comparable to that of the East Coast of the United States, with 60 million people living on a six-hundred-kilometer strip of land. Back in 2006, Davis predicted that by 2020 this supersize urban agglomeration would “probably” constitute “the biggest single footprint of urban poverty on earth.”[23]

The term for the people who live in this global slum archipelago is “surplus population.” That sounds offensive, but it is not meant in any kind of moral sense. They are surplus with respect to the economic system they inhabit, surplus in the sense that the economy has no need for their labor and is unable to grow at a rate that would provide them with formal employment. Of course, their labor remains necessary in a few respects. For one thing, mega-retailers such as Walmart, Amazon, and Alibaba often rely on byzantine networks of subcontracting and outsourcing to manufacture the cheap goods they sell to middle- and high-income consumers, and researchers have found on multiple occasions that these networks extend into slums and other places that members of the surplus population call home. For another, people still need to eat even if they cannot get hired. Those who are shut out of the formal economy will rely instead on informal work—domestic labor, scavenging, temporary construction work paid in cash, or small-scale buying and selling on the gray or black markets. Davis estimated that the global surplus population numbered around one billion in 2006.[24] He expected that number to grow quickly, and in 2018 the United Nations reported that two billion people—more than 60 percent of the world’s workers, and more than a quarter of the total human population—earned their livelihoods primarily through informal work.

Because these people are not technically unemployed in the eyes of those who tabulate labor statistics, this surge in informal work is not reflected in standard forms of economic data. But it is the single most important economic phenomenon of the past several decades, and it defines the gap separating the world’s rich countries from its poor ones. When you hear the terms “developed countries” and “developed markets,” it is useful to understand them as meaning “places where people have comparatively more access to formal employment.” “Emerging markets” are places where people have less access, or none. The UN found that in Africa, 86 percent of employment is informal. In Asia-Pacific, it’s 68 percent, and in the Americas it’s 40 percent, with the vast majority of informal work concentrated in countries other than the United States. In Arab states, 69 percent of workers are stuck in the informal economy.[25]

Perhaps you can see where I’m going with this. While terrorism has existed for centuries—with John Brown as America’s first celebrity terrorist—many historians argue that the contemporary version first emerged in 1972, when members of the Palestinian group Black September invaded the Olympic Village at the Munich Summer Games, killed two Israeli athletes, and took nine others hostage. That was the first time a terrorist group could rely on a global mass media to publicize their attack to the whole world in real time. That attack inaugurated a new surge of terrorist activity around the world, one that has not yet abated today (though it might have peaked with al-Qaeda’s attacks on the United States). The Munich attack took place just as the long global growth downturn was beginning, and it was a message from the future. Israel had occupied the Palestinian territories five years earlier at the conclusion of the Six-Day War, completely shutting out Palestinians from meaningful participation in the global or even regional economy. The occupation turned Palestinians into members of the global surplus population at a stroke.

It is exactly that kind of terrorism that America mobilized to fight beginning in the late summer of 2001. It is politically or religiously motivated violence. It variously targets citizens of the developed world, symbols of the developed world’s power, or domestic institutions and political officials who are seen as cooperating with the developed world’s economic sacrifice of poor and middle-income countries. Even when its targets are incoherent or its stated rationale conspiratorial, it expresses a version of the emerging world’s anger and hopelessness in the most brutal way available. That it is neither politically constructive nor morally justifiable does not render terrorism incomprehensible, as most American political figures have claimed over the past two decades. It is a demand that all of this be made to stop, and the people making the demand are sometimes willing to die in order to express themselves.

Since 2001, anyone suggesting that contemporary terrorism has anything to do with economic exploitation or inequality has been met with reactions ranging from pointed criticism to howling condemnation. Many of these reactions have simply been attempts by the American right to score political points by denouncing people like Susan Sontag or the Reverend Jeremiah Wright, Barack Obama’s former pastor in Chicago. I don’t think these critics deserve much serious consideration, given the extent to which they rely on crude moral denunciation in making their points. (It’s like arguing that man-made climate change plays a role in the increased severity of flooding and hearing, in reply, “You’re saying the flood victims deserved it!”)

There is a different criticism of the economic explanation for terrorism, however, that deserves more attention. Objecting to the idea that terrorism amounts to a cri de coeur emanating out of the ranks of the global poor, a number of scholars have correctly noted that most terrorist leaders do not actually come from the poorest segments of their societies. Instead, they tend to be middle class and even educated. Osama bin Laden, after all, was not just middle or even upper class but full-on 1 percent rich. The son of a billionaire construction magnate in Saudi Arabia, bin Laden studied at elite universities in the Middle East and the U.K. and eventually inherited between $25 and $30 million from his father. This is a truly exceptional level of wealth, and it is by no means representative of terrorists on average. Nevertheless, it is true that most members of extremist groups were not languishing in poverty when they decided to start learning about homemade explosives.

To describe extremists as middle-class and dismiss the economic issue entirely, however, is to miss the fact that there are major variations in wealth and social status within classes. In 2011, the political scientist Alexander Lee published a paper investigating whether “poverty and lack of education play a role in participation in political violence.” While conceding that many studies had “concluded that these groups are composed of people who are wealthier and better educated than the average member of the societies from which they recruit,” Lee argued that researchers had missed some pretty important trees for the forest. Rather than situating terrorists within the class structures of their societies as a whole, Lee considered where extremists fell economically within the smaller group of people who were active participants in politics. He chose this as a starting point for his research because “terrorists…are not drawn from a random sample of the population but, rather, from those who have acquired information about the political process, are connected to politicized social networks, and are able to devote time and energy to political involvement.” Around the world, the poor are relatively less politicized than the wealthier members of society, whether because they don’t have the time to participate or because they don’t see what the point would be (wealth and income are the best predictors of whether a given American votes or not). Lee found that terrorists tended to come from the bottom rungs of the politicized group.[26] In general, extremists are people who have just enough money and education to understand how the political system works and how they might get involved, but not enough to meaningfully influence its direction or improve their own lives. They live in a world in which low growth and soaring economic inequality are sucking the vitality out of the global economy, and they live in countries in which there are few political freedoms.[27] They may be educated, and they may have more money than the national average, but an education that isn’t followed by opportunities to use it isn’t worth much, as the persistent exodus of highly skilled university graduates from middle- and low-income countries suggests. Terrorists are people who are stuck. What’s worse, they know it.

One of the defining features of terrorism in its contemporary form is the exploitation of mass media to disseminate news of attacks around the world and amplify the extremists’ messages. But mass media has likely fueled terrorism as well. The period that saw the failure of the postwar dream of universal high growth is also the period during which mass media became truly global, allowing the rich West to blanket the earth with images of its consumerism and success. That combination is a perfect recipe for resentment, which the writer Pankaj Mishra has described in his 2017 book, Age of Anger, as one of the defining sentiments of our age. Although resentment-fueled extremism has flourished all over the world in recent decades, the Middle East proved to be particularly susceptible to its lure, not because of any belief inherent to Islam, but because the geopolitical pressures on the region have been especially brutal for the people who live there. Studded with American military bases, menaced by a hostile Israel, predominantly ruled by authoritarians (some of them dynastic) who enjoy the backing of this or that rich Western government, and reliant on oil exports to a degree that stifles productive investment in other kinds of economic growth, many Middle Eastern societies have essentially been frozen in place, leaving their citizens with no way to share in the material prosperity and freedom of political action they see on their television and computer screens every day. In this light, religion doesn’t so much provide the basic motivation for political violence as it constructs an edifice of meaning and redemption around social frustrations that already permeate daily life. The moment when people decide there is no hope for constructive change or progress in this world is also the moment when they will begin to listen to those who can promise to deliver those things in the next world. “Those who perceive themselves as left or pushed behind by a selfish conspiratorial minority,” Mishra writes, “can be susceptible to political seducers from any point on the ideological spectrum.”[28] In the case of al-Qaeda and similar groups, those seducers arrived brandishing a Koran.

Of course, violent fundamentalist groups aren’t the only problem associated with stagnant global growth. Low growth also produces increased numbers of migrants, as people look to escape from their exploited homelands and make a new life in the rich countries that have benefited the most from our inequitable global economic arrangements. It produces political instability as well, because autocratic leaders who cannot offer their citizens jobs or prosperity, even if they want to, find themselves relying more and more on violence and repression to maintain the political order that keeps them in power. And it produces the slums that Mike Davis described so vividly—places in which housing is inadequate, basic utilities are unreliable, socioeconomic betterment is a pipe dream, and local governance is often left to competing groups of violent gangs.

It is in the light of these wider problems of global governance that the economic rationale behind America’s war on terror becomes even more compelling. To say it plainly: In the wake of September 11, the United States found itself at the head of a global economic order that had been founded on a growth surge that was slowly but surely running out of steam. To stand back and watch as China took the top spot for itself was unthinkable, nor was it an option to find a more equitable way to share control over the functioning of the world economy. For one thing, the benefits of being on top were just too great, allowing Americans to develop and sustain levels of consumption that far outstripped those of any other group of people in world history. For another, “sharing” isn’t how capitalism works on the global scale. With those options off the table, the United States faced a whole series of problems stemming from the kinds of authoritarian political rule, endemic urban poverty, and social instability that plagued the countries where growth had failed most spectacularly. The government’s military planners often refer to these countries as weak or failed states, and while the United States believed for some time that the problems facing these states could be kept more or less at arm’s length, September 11 was thought to reveal that belief as illusory. Going forward, America would need to confront these problems more directly, with a decentralized, expansive, and flexible military effort that wouldn’t be bound by any conventional definitions of victory or defeat. That’s where the expanded war on terror came from. It might not be able to solve the problems facing a low-growth world, but it could at least tolerably insulate the United States from their effects. The war is a tool for managing the very surplus populations that the end of American-led economic prosperity helped to create, people whom the United States now finds itself unable and unwilling to help.


America’s military planners didn’t use the term “surplus populations” as they outlined how the country should respond to the rising instability of the late twentieth and early twenty-first centuries, nor did the Bush administration talk about slowing global growth as it made the case for the war on terror. Neither omission should be surprising. Strategists and soldiers aren’t tasked with curing the maladies that afflict the parts of the world in which America has decided it has a vital economic or social interest. Their job is to use military force to get rid of the symptoms, whether those be food riots carried out by poor people or suicide attacks planned by religious young men who resent the fact that their efforts to get educated were never rewarded with a reliable income. And for George W. Bush and Barack Obama (or any president, for that matter), acknowledging that slowing global growth was a characteristic tendency of capitalism as an economic system was out of the question. That idiosyncratic problems might pop up over the course of capitalism’s journey through history—whether recessions, waves of speculative frenzy, or market crashes—was fine. Those could be talked about and addressed. But to suggest that some of these problems might be inherent to the system itself, requiring not just targeted interventions or reforms but potentially replacement by a new system, was impossible.

Despite this code of silence around some of capitalism’s more deeply rooted problems, however, Pentagon strategists were clear-eyed about the kind of world they confronted and open about their desire to manage the surplus populations whose numbers were increasing almost everywhere. Following the 1993 Mogadishu debacle, in which Somali forces shot down three Black Hawk helicopters and killed eighteen American soldiers, Army theoreticians and eggheads working for the RAND Corporation began to churn out all kinds of papers and memoranda on how the military might reorient itself toward prolonged, sporadic, low-intensity urban conflict. Three years after Mogadishu, the house journal of the Army War College, Parameters, published an article on the swiftly emerging era of urban conflict. Major Ralph Peters began with a flourish:

The future of warfare lies in the streets, sewers, high-rise buildings, industrial parks, and the sprawl of houses, shacks, and shelters that form the broken cities of our world. We will fight elsewhere, but not so often, rarely as reluctantly, and never so brutally. Our recent military history is punctuated with city names—Tuzla, Mogadishu, Los Angeles, Beirut, Panama City, Hue, Saigon, Santo Domingo—but these encounters have been but a prologue, with the real drama still to come.[29]

Peters called cities “the post-modern equivalent of jungles and mountains—citadels of the dispossessed and irreconcilable,” and he wrote that “a military unprepared for urban operations across a broad spectrum is unprepared for tomorrow.” One year later, a National Defense Panel review imagined what the world might look like in 2020 and then identified several areas of concern that the U.S. military would need to address if it wanted to be able to cope with this coming world. One was terrorism; “missile proliferation and a host of transnational dangers may play a more prominent role,” the panel warned. Another was “the need to prepare ourselves to conduct operations in urban environments.”[30] Without ever actually using the term, the panel highlighted the world’s growing surplus populations as one of the key threats that would menace American supremacy through the first decades of the twenty-first century. The report cited “social and demographic trends that threaten to outstrip the ability of many countries to adapt,” including “rapid population growth in regions ill-prepared to absorb it, refugee migration and immigration, chronic unemployment and underemployment, and intensified competition for resources, notably energy and water.”[31]

The RAND Corporation warned about the “urbanization of insurgency,”[32] and a theoretical journal run by the U.S. Air Force described the difficulties involved with carrying out combat operations in slums, a kind of battlefield that is “decreasingly knowable because it is increasingly unplanned.”[33] The Bush administration might have tried to convince Americans that the war on terror and especially the invasion of Iraq would go smoothly, conjuring visions in which grateful throngs of the world’s oppressed peoples would toss flowers and garlands at America’s soldiers before embarking on the construction of dynamic market economies. But the military theoreticians who tried to delineate the contours of the coming era were less optimistic, and they occasionally slipped into dystopian gloom. “Consider just a few of the potential trouble spots where US military intervention or assistance could prove necessary in the next century,” Peters wrote.

Mexico, Egypt, the sub-continent with an expansionist India, the Arabian Peninsula, Brazil, or the urbanizing Pacific Rim. Even though each of these states or regions contains tremendous rural, desert, or jungle expanses, the key to each is control of an archipelago of cities. The largest of these cities have populations in excess of 20 million today—more specific figures are generally unavailable as beleaguered governments lose control of their own backyards. Confronted with an armed and hostile population in such an environment, the US Army as presently structured would find it difficult to muster the dismount strength necessary to control a single center as simultaneously dense and sprawling as Mexico City.

…In fully urbanized terrain…warfare becomes profoundly vertical, reaching up into towers of steel and cement, and downward into sewers, subway lines, road tunnels, communications tunnels, and the like….The broken spatial qualities of urban terrain fragments units and compartmentalizes encounters, engagements, and even battles. The leader’s span of control can easily collapse, and it is very, very hard to gain and maintain an accurate picture of the multidimensional “battlefield.”

Peters’s litany of difficulties continued for an additional twenty-five hundred words, without any evident improvement in his mood. “Noncombatants, without the least hostile intent,” he wrote, “can overwhelm the force.” “Atrocity is close-up and commonplace.” “Casualties can soar….[U]rban operations result in broken bones, concussions, [and] traumatic impact deaths.” Soldiers with “untempered immune systems” would also confront “a broad range of septic threats” resulting from “the appalling sanitation in many urban environments.” “Soldiers will die simply because they were looking the wrong way, and even disciplined and morally sound soldiers disinclined to rape can lose focus in the presence of female or other civilians.” “Vertical fighting is utterly exhausting.”[34]

What you have here is a detailed and more or less accurate description, before the fact, of the war on terror that the United States spent the first two decades of the twenty-first century fighting. Like the war itself, Peters’s list of places where the United States might have to send its military spans every continent except for Antarctica, and each location is home to at least one of the mega-slums that we have identified as a signature product of stagnating growth within global capitalism. His description of urban conflict, with America’s troops shooting their way through tower block stairwells and blind alleyways, reads like the script for a Call of Duty: Modern Warfare game. His emphasis on the dangers of concussions anticipates the rise of traumatic brain injury as the signature combat wound of the Iraq War. And his chilling warnings about the difficulties of distinguishing enemy combatants from civilians in urban combat environments—“atrocity is close-up and commonplace”—have been more than borne out by the results of the past twenty years.

There is one other curious aspect of Peters’s essay that I would like to discuss. Throughout the paper, Peters emphasizes the importance of training to prepare the U.S. military for urban conflict. He captures perfectly the juncture at which the Army found itself immediately after September 11. “The modern and post-modern trend in Western militaries has been to increase the proportion of tasks executed by machines while reducing the number of soldiers in our establishments,” he writes. “We seek to build machines that enable us to win while protecting or distancing the human operator from the effects of combat. At present, however, urban combat remains extremely manpower-intensive—and it is a casualty producer.” Fully mechanized war might be feasible in the long term, but for now the front lines would remain the responsibility of real, human soldiers, and those soldiers would need a lot of new equipment to keep them safe. Soldiers would need “new forms of armor,” “eye protection…given the splintering effects of firefights in masonry and wood environments,” and “protective headgear” that could guard against “accidental blows from falls or collapsing structures” just as well as against “enemy fire.” Guns would have to be “compact and robust, with a high rate of fire and very lightweight ammunition.” In addition, soldiers would need the capability for “thermal or post-thermal imaging” to help them see through walls and around corners, perhaps even equipment that could “differentiate between male and female body heat distributions and that will even be able to register hostility and intent from smells and sweat.” This panoply of gadgets would have to be balanced, however, against the need to reduce the amount of weight each soldier carries “dramatically…since fighting in tall buildings requires agility that a soldier unbalanced by a heavy pack cannot attain.” Still, the equipment would inevitably add up, meaning that “soldiers will need more upper body strength.” Again, we see here an aspect of the war on terror fantasized into being in advance of its realization, as Peters conjures up the image of the Special Forces super-soldier that has become such an icon of twenty-first-century conflict.

In the paper’s second-to-last paragraph, however, a different kind of fantasy, or perhaps nightmare, appears. Following a list of bureaucratic reforms that might help to facilitate the military’s transition to twenty-first-century warfare, Peters returns to the issue of training: “None of the sample measures cited above is as important as revolutionizing training for urban combat.” But where to train these soldiers? Current facilities, designed to teach recruits to fight in “villages or small towns,” were clearly inadequate. Peters briefly raises the possibility of “building realistic ‘cities’ in which to train” but concludes that such an endeavor would be “prohibitively expensive.” He has a better solution in mind. “Why build that which already exists?” he asks, and continues,

In many of our own blighted cities, massive housing projects have become uninhabitable and industrial plants unusable. Yet they would be nearly ideal for combat-in-cities training. While we could not engage in live-fire training (even if the locals do), we could experiment and train in virtually every other regard. Development costs would be a fraction of the price of building a “city” from scratch, and city and state governments would likely compete to gain a US Army (and Marine) presence, since it would bring money, jobs, and development—as well as a measure of social discipline.

At the very end of a paper otherwise focusing entirely on the difficulties facing American troops as they try to navigate the dangerous, seething cities of the rest of the world, we are suddenly presented with a shocking vision of the hopeless state of America’s own cities. These are apparently places where “locals” engage in “live-fire training”—presumably in the course of carrying out their violent gang wars—and governments are so desperate for not only “money, jobs, and development” but also a modicum of “social discipline” that they would welcome and even “compete” to host an occupying force of U.S. military trainees.

How did things get so bad at home, anyway? That larger question is beyond Peters’s remit, but his description still offers some clues as to what the answer might be. As one ideal training site, we have the “unusable” “industrial plants” that huddle ominously on the outskirts of America’s urban landscapes. As another, we have “housing projects.” These projects might once have symbolized the optimism of Lyndon Johnson’s Great Society, an undertaking launched when America’s growth engine was roaring, but now they are “uninhabitable,” although Peters must know that many people do, in fact, continue to inhabit them. The factories became unusable because growth slowed and the corporations that built them stopped investing in their operation, and the projects, which once housed a class of people referred to as the working poor, are now little more than decrepit containers in which to store people who are still impoverished but increasingly struggle to find any work at all.

This pessimistic vision may seem like a non sequitur, a somewhat paranoid conception of domestic social decay shoehorned into what is otherwise a coherent (if very contestable) account of what the United States will need to do to maintain its global dominance in the twenty-first century. It is not. Underneath the racism that animates the passage, Peters also gestures toward an awareness that global stagnation is producing deindustrialization, persistent underemployment, and growing surplus populations inside the United States as well. During the first decade of the war on terror, the kinds of training, tactics, and equipment Peters urged the military to develop would be tested and refined throughout Iraq, Afghanistan, Yemen, Pakistan, Somalia, and elsewhere. Following the global financial crisis, however, many Americans would come to realize that economic stagnation and precarity were coming for them as well. In turn, the government realized that this new kind of war fighting also had many useful domestic applications. If the military the United States had built and the tactics it had developed could be sent to foreign countries to manage surplus populations and their attendant problems—from political instability to guerrilla insurgencies to migration—then those same tactics could also be used to manage urban poverty, the discontent of the chronically underemployed, and unwanted migrants at home.


* This phrase reveals a lot about the extent to which capitalists have naturalized an economic system that is, in reality, man-made. In finance-speak, “secular” means anything not pertaining to the business cycle, which is the term for the process by which capitalist economies grow, eventually overheat, suffer some kind of crisis, struggle through a recession, and then begin to grow again. One expects growth to fall in the latter stages of the business cycle, but if growth still falls after the effects of the business cycle have been accounted for, then you have a “secular” decline in growth. What’s revealing about the term is what it doesn’t say. If anything not related to the business cycle is “secular,” then the business cycle itself is divine.