9

REINVENTING SOCIETY

IT MIGHT SEEM THAT the logical response of a nation caught in a predicament of the sort that faces the United States would be to bite the bullet, back away from empire in a deliberate fashion, and use the last bit of income from the tribute economy to pay for the expenses of rebuilding a domestic economy of a more normal kind. At the same time, there are compelling reasons why very few empires in history have had the great good sense to manage their decline in this manner. Imperial China did it in the fifteenth century, scrapping a burgeoning maritime empire in the Indian Ocean,1 and of course Britain did it after 1945, though that was largely because a 500-pound gorilla called the United States was sitting on Britannia’s prostrate body, informing her politely that in future, the global empire would be American, thank you very much; other than that, examples are few and far between.

The logic here is easy to follow. Any attempt to withdraw from imperial commitments will face concerted resistance from those who profit from the status quo, while those who claim to oppose empire are rarely willing to keep supporting a policy of imperial retreat once it turns out, as it inevitably does, that the costs of that policy will include a direct impact on their own incomes or the value of their investments. Thus politicians who back withdrawal from empire can count on being pilloried by their opponents as traitors to their country, and abandoned by erstwhile allies who dislike empire in the abstract but want to retain lifestyles that only an imperial tribute economy can support. Since politicians are, after all, in the business of getting into office and staying there, their enthusiasm for such self-sacrificing policies is understandably limited.

The usual result is a frantic effort to kick the can as far as possible down the road, so that somebody else has to deal with it. Most of what’s going on in Washington, DC, these days can be described very exactly in those terms. Despite popular rhetoric, America’s politicians are not unusually wicked or ignorant; they are, by and large, as ethical as their constituents, and rather better educated — though admittedly neither of these is saying much. What distinguishes them from the statesmen of an earlier era is that they are face to face with an insoluble dilemma that their predecessors in office spent the last few decades trying to ignore.

As the costs of empire rise, the profits of empire dwindle, the national economy circles the drain, the burden of deferred maintenance on the nation’s infrastructure grows, and the impact of the limits to growth on industrial civilization worldwide becomes ever harder to evade, they face the unenviable choice between massive trouble now and even more massive trouble later; being human, they repeatedly choose the latter, and console themselves with the empty hope that something might turn up. It’s a common hope these days. The current frenzy in the media about how shale gas is going to make the United States a net energy exporter gets a good share of its impetus from the same delusional hope — though admittedly the fact that a great many people have invested a great deal of money in companies in the fracking business, and are trying to justify their investments using the same sort of reasoning that boosted the late housing bubble, also has more than a little to do with it.

There’s likely to be plenty more of the same thing in the decades ahead. Social psychologists have written at length about what James Howard Kunstler has usefully termed the psychology of previous investment,2 the process by which people convince themselves to throw good money after bad, or to remain committed to a belief system even though all available evidence demonstrates that it isn’t true and doesn’t work. The critical factor in such cases is the emotional cost of admitting that the decision to buy the stock, adopt the belief system, or make whatever other mistake is at issue, was in fact a mistake. The more painful it is to make that admission, the more forcefully most people will turn away from the necessity to do so, and it’s safe to assume that they’ll embrace the most consummate malarkey if doing so allows them to insist to themselves that the mistake wasn’t a mistake after all.

As America stumbles down from its imperial peak, in other words, the one growth industry this country will have left will consist of efforts to maintain the pretense that America doesn’t have an empire, that the empire isn’t falling, and that the fall doesn’t matter anyway. (Yes, those statements are mutually contradictory. Get used to it; you’ll be hearing plenty of statements in the years to come that are even more incoherent.) As the decline accelerates, anyone who offers Americans a narrative that allows them to pretend they’ll get the shiny new future that our national mythology promises them will be able to count on a large and enthusiastic audience. The narratives being marketed for this purpose need not be convincing; they need not even be sane. So long as they make it possible for Americans to maintain the fiction of a brighter future in the teeth of the facts, they’ll be popular.

The one bit of hope I can offer here is that such efforts at collective make-believe don’t last forever. Sooner or later, the fact of decline will be admitted and, later still, accepted; sooner or later, our collective conversation will shift from how America can maintain perpetual growth to how America can hold onto what it has, then to how America can recover some of what it lost, and from there to figuring out how America — or whatever grab bag of successor societies occupies the territory currently held by the United States — can get by in the harsh new world that grew up around it while nobody was looking. It’s a normal process in an age of decline, and can be traced in the literature of more than one civilization before ours.

It bears remembering, though, that individuals are going through the same process of redefinition all by themselves. This is primarily a matter of expectations, and of the most pragmatic sort of economic expectations at that. Consider a midlevel managerial employee in some corporation or other whose job, like so many other jobs these days, is about to go away forever. Before the rumors start flying, she’s concerned mostly with clawing her way up the corporate ladder and increasing her share of the perks and privileges our society currently grants to its middle classes. Then the rumors of imminent layoffs start flying, and she abruptly has to shift her focus to staying employed. The pink slips come next, bearing bad news, and her focus shifts again, to getting a new job; when that doesn’t happen and the reality of long term joblessness sinks in, a final shift of focus takes place, and she has to deal with a new and challenging world.

This has already happened to a great many people in America. It’s going to happen, over the years ahead, to a great many more — probably, all things considered, to a large majority of people in the American middle class, just as it happened to a large majority of the industrial working class a few decades back. Not everyone, it has to be said, will survive the transition; alcoholism, drug abuse, mental and physical illness, and suicide are among the standard risks run by the downwardly mobile. A fair number of those who do survive will spend the rest of their lives clinging to the vain hope that something will happen and give them back what they lost.

It’s a long, rough road down from empire, and the losses involved are not merely material in nature. Basing one’s identity on the privileges and extravagances made possible by the current US global empire may seem like a silly thing to do, but it’s very common. To lose whatever markers of status are respected in any given social class, whether we’re talking about a private jet and a Long Island mansion, a fashionable purse and a chic condo in an upscale neighborhood, or a pickup and a six-pack, can be tantamount to losing one’s identity if that identity has no more solid foundation — and a great many marketing firms have spent decades trying to ensure that most Americans never think of looking for more solid foundations.

This has sweeping implications for those who are trying to figure out how to live their lives in a world in which the conventional wisdom of the last three hundred years or so has suddenly been turned on its head. The first and, in many ways, the most crucial point is that you are going to have to walk the road down from empire yourself. Nobody else is going to do it for you, and you can’t even assume that anybody else will make it easier for you. What you can do, to make it a little easier than it will otherwise be, is to start walking it before you have to.

That means, to return to a slogan I introduced in another book, using LESS — Less Energy, Stuff, and Stimulation.3 The more energy you need to maintain your everyday lifestyle, the more vulnerable you’ll be to sudden disruptions when the sprawling infrastructure that supplies you with that energy starts running into serious trouble. Today, routine blackouts and brownouts of the electrical grid, and rationing or unpredictable availability of motor fuel, have become everyday facts of life in Third World nations that used to have relatively reliable access to energy. As America’s global empire unravels and the blowback from a century of empire comes home to roost, we can expect the same thing here. Get ready for that in advance, and you won’t face a crisis when it happens.

The same is true of the extravagant material inputs most Americans see as necessities, and of the constant stream of sensory stimulation that most Americans use to numb themselves to the unwelcome aspects of their surroundings and their lives. We will all be doing without those at some point. The sooner we learn how to get by in their absence, the better off we’ll be — and the sooner we get out from under the torrent of media noise we’ve been taught to use to numb ourselves, the sooner we can start assessing the world with a relatively clear head, and the sooner we’ll notice just how far down the arc of America’s descent we’ve already come.

Using LESS isn’t the only thing that’s worth doing in advance, of course. It’s crucial to develop the skills needed to produce goods or provide services for other people, using relatively simple tools powered, if at all possible, by human muscles. As the imperial wealth pump winds down and the United States loses the ability to import cheap goods and cheap labor from abroad, people will still need goods and services, and will pay for them with whatever measure of value is available — even if that amounts to their own unskilled labor. There are plenty of other steps that can be taken to prepare for life in a post-imperial society, and the sooner those steps get taken, the less traumatic the transition to that society is likely to be.

That process is going to require the recovery or reinvention of many of the things this nation chucked into the dumpster with whoops of glee as it took off running in pursuit of its imperial ambitions. The basic skills of democratic process are among the things on that list, and so are the even more basic skills of learning and thinking that undergird the practice of democracy.

All that remains crucial. Still, it so happens that a remarkably large number of the other things that will need to be put back in place are all variations on a common theme. What’s more, it’s a straightforward theme — or, more precisely, it would be straightforward if so many people these days weren’t busy trying to pretend that the concept at its center either doesn’t exist or doesn’t present the specific challenges that have made it so problematic in recent years. I am speaking of the mode of collective participation in the use of resources, extending from the most material to the most abstract, that goes most often these days by the name of “the commons.”

The redoubtable green philosopher Garrett Hardin played a central role decades ago in drawing attention to the phenomenon in question with his essay “The Tragedy of the Commons.”4 It’s a remarkable work, and it’s been rendered even more remarkable by the range of contortions engaged in by thinkers across the economic and political spectrum in their efforts to evade its conclusions. Those maneuvers have been reasonably successful; there was, for example, a flurry of claims in the media a few years back that the late Nobel Prize-winning economist Elinor Ostrom had “disproved” Hardin with her work on the sustainable management of resources.

In point of fact, she did no such thing. Hardin demonstrated in his essay that an unmanaged commons faces the risk of a vicious spiral of mismanagement that ends in the destruction of the commons; Ostrom got her Nobel, and deservedly so, for her detailed and incisive analysis of the kinds of management that prevent Hardin’s tragedy of the commons from taking place.5 A little later, we’ll get to why those kinds of management are exactly what nobody in the mainstream of American public life wants to talk about just now; the first task at hand is to walk through the logic of Hardin’s essay and understand exactly what he was saying and why it matters.

Hardin asks us to imagine a common pasture, of the sort that was standard in medieval villages across Europe. The pasture is owned by the village as a whole; each of the villagers has the right to put his cattle out to graze on the pasture. The village as a whole, however, has no claim on the milk the cows produce; that belongs to the villager who owns any given cow. The pasture is a collective resource, from which individuals are allowed to extract private profit; that’s the basic definition of a commons.

In the Middle Ages, such arrangements worked well across Europe because they were managed by tradition, custom, and the immense pressure wielded by informal consensus in small and tightly knit communities, backed up where necessary by local manorial courts and a body of customary law that gave short shrift to the pursuit of personal advantage at the expense of others. The commons that Hardin asks us to envision, though, has no such protections in place. Imagine, he says, that one villager buys additional cows and puts them out to graze on the common pasture. Any given pasture can only support so many cows before it suffers damage; to use the jargon of the ecologist, it has a fixed carrying capacity for milk cows, and exceeding the carrying capacity will degrade the resource and lower its future carrying capacity. Assume that the new cows raise the total number of cows past what the pasture can support indefinitely, so once the new cows go onto the pasture, the pasture starts to degrade.

Notice how the benefits and costs sort themselves out. The villager with the additional cows receives all the benefit of the additional milk his new cows provide, and he receives it right away. The costs of his action, by contrast, are shared with everyone else in the village, and their impact is delayed, since it takes time for pasture to degrade. Thus, according to conventional economic theory, the villager is doing the right thing. Since the milk he gets is worth more right now than the fraction of the discounted future cost of the degradation of the pasture he will eventually have to carry, he is pursuing his own economic interest in a rational manner.

The other villagers, faced with this situation, have a choice of their own to make. (We’ll assume, again, that they don’t have the option of forcing the villager with the new cows to get rid of them and return the total herd on the pasture to a level it can support indefinitely.) They can do nothing, in which case they bear the costs of the degradation of the pasture but gain nothing in return, or they can buy more cows of their own, in which case they also get more milk, but the pasture degrades even faster. According to conventional economic theory, the latter choice is the right one, since it allows them to maximize their own economic interest in exactly the same way as the first villager. The result of the process, though, is that a pasture that would have kept a certain number of cattle fed indefinitely is turned into a barren area of compacted subsoil that won’t support any cattle at all. The rational pursuit of individual advantage thus results in permanent impoverishment for everybody.
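For readers who like to see the arithmetic spelled out, the little sketch below turns Hardin’s parable into numbers. Everything in it (the pasture’s carrying capacity, the rate at which overgrazing degrades it, the size of the herds, the length of the run) is an arbitrary assumption chosen purely for illustration, not a figure drawn from Hardin’s essay; the point is simply to watch the asymmetry between private gain and shared cost play itself out over time.

```python
# A toy rendering of Hardin's common pasture. Every number here is an
# illustrative assumption, not a figure taken from Hardin's essay.

CAPACITY = 100   # cows the pasture can feed indefinitely (assumed)
DEGRADE = 0.05   # carrying capacity lost per excess cow, per season (assumed)
SEASONS = 20     # length of the run (assumed)

def run(one_villager_expands: bool, everyone_expands: bool):
    """Simulate the pasture; return total milk for the expanding villager,
    total milk for the other nine, and the carrying capacity left at the end."""
    capacity = float(CAPACITY)
    expander, others = 10.0, 90.0        # herds start exactly at carrying capacity
    expander_milk = others_milk = 0.0
    for _ in range(SEASONS):
        if one_villager_expands:
            expander += 1                # private decision: one more cow each season
        if everyone_expands:
            others += 9                  # the other nine villagers each add one too
        cows = expander + others
        # An overgrazed pasture feeds every cow proportionally less.
        milk_per_cow = min(1.0, capacity / cows)
        expander_milk += expander * milk_per_cow
        others_milk += others * milk_per_cow
        # Degradation: the shared, delayed cost of exceeding carrying capacity.
        capacity = max(0.0, capacity - DEGRADE * max(0.0, cows - capacity))
    return round(expander_milk), round(others_milk), round(capacity)

print("everyone shows restraint:", run(False, False))
print("one villager adds cows  :", run(True, False))
print("everyone adds cows      :", run(True, True))
```

Run with those made-up numbers, the villager who keeps adding cows ends up with more milk than restraint would have given him, while his neighbors end up with less; let everyone follow his example, and the pasture’s carrying capacity falls toward zero and everybody’s milk shrinks with it. Change the assumptions however you like; the shape of the outcome stays the same.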

This may seem like common sense. It is common sense, but when Hardin first published “The Tragedy of the Commons” in 1968, it went off like a bomb in the halls of academic economics. Since Adam Smith’s time, one of the most passionately held beliefs of capitalist economics has been that individuals pursuing their own economic interest, without interference from government or anyone else, will reliably produce the best outcome for everybody. Defenders of free market economics still make that claim, as if nobody but the Communists ever brought it into question. That’s why very few people like to talk about Hardin’s work these days; it makes it all but impossible to uphold a certain bit of popular, appealing, but dangerous nonsense.

Does this mean that the rational pursuit of individual advantage always produces negative results for everyone? Not at all. The theorists of capitalism can point to equally cogent examples in which Adam Smith’s invisible hand passes out benefits to everyone, and a case could probably be made that this happens more often than the opposite. The fact remains that the opposite does happen, not merely in theory but also in the real world, and that the consequences of the tragedy of the commons can reach far beyond the limits of a single village.

Hardin himself pointed to the destruction of the world’s oceanic fisheries by overharvesting as an example. If current trends continue, many of my readers can look forward, over the next couple of decades, to tasting the last seafood they will ever eat. A food resource that could have been managed sustainably for millennia to come is being annihilated in our lifetimes, and the logic behind it is precisely that of the tragedy of the commons: participants in the world’s fishing industries, from giant corporations to individual boat owners and their crews, are pursuing their own economic interests, and exterminating one fishery after another in the process.

Another example? The worldwide habit of treating the atmosphere as an aerial sewer into which wastes can be dumped with impunity. Every one of my readers who burns any fossil fuel, for any purpose, benefits directly from being able to vent the waste CO2 directly into the atmosphere, rather than having to cover the costs of disposing of it in some other way. As a result of this rational pursuit of personal economic interest, there’s a very real chance that most of the world’s coastal cities will have to be abandoned to the rising oceans over the next century or so, imposing trillions of dollars of costs on the global economy.

Plenty of other examples of the same kind could be cited. At this point, though, I’d like to shift focus a bit to a different class of phenomena, and point to the Glass-Steagall Act, a piece of federal legislation that was passed by the US Congress in 1933 and repealed in 1999. The Glass-Steagall Act made it illegal for banks to engage in both consumer banking activities such as taking deposits and making loans, and investment banking activities such as issuing securities; banks had to choose one or the other. The firewall between consumer banking and investment banking was put in place because in its absence, in the years leading up to the 1929 crash, most of the banks in the country had gotten in over their heads in dubious financial deals linked to stocks and securities, and the collapse of those schemes played a massive role in bringing the national economy to the brink of total collapse.

By the 1990s, such safeguards seemed unbearably dowdy to a new generation of bankers, and after a great deal of lobbying the provisions of the Glass-Steagall Act were eliminated. Those of my readers who didn’t spend the last decade hiding under a rock know exactly what happened thereafter: banks went right back to the bad habits that got their predecessors into trouble in 1929, profited mightily in the short term, and proceeded to inflict major damage on the global economy when the inevitable crash came in 2008.

That is to say, actions performed by individuals (and those dubious “legal persons” called corporations) in the pursuit of their own private economic advantage garnered profits over the short term for those who engaged in them, but imposed long-term costs on everybody. If this sounds familiar, it should. When individuals or corporations profit from their involvement in an activity that imposes costs on society as a whole, that activity functions as a commons.

If that commons is unmanaged, the tragedy of the commons results. The American banking industry before 1933 and after 1999 functioned, and currently functions, as an unmanaged commons; between those years, it was a managed commons. While it was an unmanaged commons, it suffered from exactly the outcome Hardin’s theory predicts; when it was a managed commons, by contrast, a major cause of banking failure was kept at bay, and the banking sector was more often a source of strength than a source of weakness to the national economy.

It’s not hard to name other examples of what might be called “commons-like phenomena” — that is, activities in which the pursuit of private profit can impose serious costs on society as a whole — in contemporary America. One that bears watching these days is food safety. It is to the immediate financial advantage of businesses in the various industries that produce food for human consumption to cut costs as far as possible, even if this occasionally results in unsafe products that cause sickness and death to people who consume them; the benefits in increased profits are immediate and belong entirely to the business, while the costs of increased morbidity and mortality are borne by society as a whole, provided that the company’s legal team is good enough to keep the inevitable lawsuits at bay. Once again, the asymmetry between benefits and costs produces a calculus that brings unwelcome outcomes.

The American political system, in its pre-imperial and early imperial stages, evolved a distinctive response to these challenges. The Declaration of Independence, one of the wellsprings of American political thought, defines the purpose of government as securing the rights to life, liberty, and the pursuit of happiness. There’s more to that often-quoted phrase than meets the eye. In particular, it doesn’t mean that governments are supposed to provide anybody with life, liberty, or happiness; their job is simply to secure for their citizens certain basic rights, which may be inalienable — that is, they can’t be legally transferred to somebody else, as they could under feudal law — but are far from absolute. What citizens do with those rights is their own business, at least in theory, so long as their exercise of those rights does not interfere too severely with the ability of others to do the same thing. The assumption, then and later, was that citizens would use their rights to seek their own advantage, by means as rational or irrational as they chose, while the national community as a whole would cover the costs of securing those rights against anyone and anything that attempted to erase them.

That is to say, the core purpose of government in the American tradition is the maintenance of the national commons. It exists to manage the various commons and commons-like phenomena that are inseparable from life in a civilized society, and thus has the power to impose such limits on people (and corporate pseudopeople) as will prevent their pursuit of personal advantage from leading to a tragedy of the commons in one way or another. Restricting the capacity of banks to gamble with depositors’ money is one such limit; restricting the freedom of manufacturers to sell unsafe food is another, and so on down the list of reasonable regulations. Beyond those necessary limits, government has no call to intervene. How people choose to live their lives, exercise their liberties, and pursue happiness is up to them, so long as it doesn’t put the survival of any part of the national commons at risk.

As far as I know, you won’t find that definition taught in any of the tiny handful of high schools that still offer civics classes to young Americans about to reach voting age. Still, it’s a neat summary of generations of political thought in pre-imperial and early imperial America. These days, by contrast, it’s rare to find this function of government even hinted at. Rather, the function of government in late imperial America is generally seen as a matter of handing out largesse of various kinds to any group organized or influential enough to elbow its way to a place at the feeding trough. Even those people who insist they are against all government entitlement programs can be counted on to scream like banshees if anything threatens those programs from which they themselves benefit; the famous placard reading “Government Hands Off My Medicare” is an embarrassingly good reflection of the attitude that most American conservatives adopt in practice, however loudly they decry government spending in theory.

A strong case can be made, though, for jettisoning the notion of government as national sugar daddy and returning to the older notion of government as guarantor of the national commons. The central argument in that case is simply that in the wake of empire, the torrents of imperial tribute that made the government largesse of the recent past possible in the first place will go away. As the United States loses the ability to command a quarter of the world’s energy supplies and a third of its natural resources and industrial product, and has to make do with the much smaller share it can expect to produce within its own borders, the feeding trough in Washington, DC — not to mention its junior equivalents in the fifty state capitals, and so on down the pyramid of American government — is going to run short.

In point of fact, it’s already running short. That’s the usually unmentioned factor behind the intractable gridlock in our national politics: there isn’t enough largesse left to give every one of the pressure groups and veto blocs its accustomed share, and the pressure groups and veto blocs are responding to this unavoidable problem by jamming up the machinery of government with ever more frantic efforts to get whatever they can. That situation can only end in crisis, and probably in a crisis big enough to shatter the existing order of things in Washington, DC. After the rubble stops bouncing, the next order of business will be piecing together some less gaudily corrupt way of managing the nation’s affairs.

That process of reconstruction might be furthered substantially if the pre-imperial concept of the role of government were to get a little more air time these days. At some point in the not too distant future, the political system of the United States of America is going to tip over into explosive crisis, and at that time ideas that are simply talking points today have at least a shot at being enacted into public policy. That’s exactly what happened at the beginning of the three previous cycles of anacyclosis traced out earlier in this book. In 1776, 1860, and 1933, ideas that had been on the political fringes not that many years beforehand redefined the entire political dialogue, and in all three cases this was possible because those once-fringe ideas had already been widely circulated and widely discussed. It didn’t matter that most of the people who circulated and discussed them never imagined that they would live to see those ideas put into practice.

There are plenty of ideas about politics and society in circulation on the fringes of today’s American dialogue, to be sure. I’d like to suggest, though, that there’s a point to reviving an older vision of what government can do, and ought to do, in the America of the future. The return to an older American concept of government as the guarantor of the national commons is one of the crucial steps that might just succeed in making a viable future for the post-imperial United States. A viable future, mind you, does not mean one in which any significant number of Americans retain any significant fraction of the material abundance we currently get from the wealth pump of our global empire.

The end of American empire, it deserves repeating, means the end of a system in which the five percent of humanity that live in the United States get to dispose of a quarter of the planet’s energy and a third of its raw materials and industrial product. Even if the fossil fuels that undergird the industrial product weren’t depleting out of existence — and of course they are — the rebalancing of global wealth driven by the decline of one empire and the rise of another will involve massive and often traumatic impacts, especially for those who have been living high on the hog under the current system and will have to get used to a much smaller portion of the world’s wealth in the years immediately ahead. If you live in the United States or its inner circle of allies — Canada, Britain, Australia, Japan, and a few others — this means you.

I want to stress this point, because habits of thought already discussed make it remarkably difficult for most Americans to think about a future that isn’t either all warm fuzzy or all cold prickly. If an imagined future is supposed to be better than the one we’ve got, according to these habits of thought, it has to be better in every imaginable way, and if it’s worse, it has to be worse just as uniformly. Suggest that the United States might go hurtling down the far side of its imperial trajectory and come out of the process as a Third World nation, as I’ve done here, and you can count on blank incomprehension. Suggest further that the nation that comes out the other side of this process might still be able to provide a range of basic social goods to its citizens, and might even recover some of the values it lost a century ago in the course of its headlong rush to empire, and expect self-righteous anger.

Now in fact I do suggest this is possible, and I’ve already sketched out some of the steps that individuals might choose to take to lay the foundations for that project. Still, it’s also worth noting that the same illogic shapes the other end of the spectrum of possible futures. These days, if you pick up a book offering a vision of a better future or a strategy to get there, it’s usually a safe bet that you can read the thing from cover to cover without finding any reference whatsoever to any downsides, drawbacks, or tradeoffs that might be involved in pursuing the vision or enacting the strategy. Since every action in the real world has downsides, drawbacks, and tradeoffs, this is not exactly a minor omission, nor does the blithe insistence on ignoring such little details offer any reason to feel confident that the visions and strategies will actually work as advertised.

Those of my readers who have been following conversations in the American alternative scene for any length of time, for example, will have encountered any number of enthusiastic discussions of relocalization:7 the process, that is, of disconnecting from the vast and extravagant global networks of production, consumption, and control that define so much of industrial society, in order to restore or reinvent local systems that will be more resilient in the face of energy shortages and other disruptions, and provide more security and more autonomy to those who embrace them.

A very good case can be made for this strategy. The extreme centralization of the global economy has become a source of massive vulnerabilities straight across the spectrum from the most abstract realms of high finance right down to the sprawling corporate structures that put food on your table. Shortfalls of every kind, from grain and fuel to financial capital, are becoming a daily reality for many people around the world as soaring energy costs put a galaxy of direct and indirect pressures on brittle and overextended systems. That’s only going to become worse as petroleum reserves and other vital resources continue to deplete. As this process continues, ways of getting access to necessities that are deliberately disconnected from the global economic system, and thus less subject to its vulnerabilities, are going to be well worth having in place.

At the same time, participation in the global economy brings with it vulnerabilities of another kind. For anyone who has to depend for their survival on the functioning of a vast industrial structure which is not answerable to the average citizen, any talk about personal autonomy is little more than a bad joke, and the ability of communities to make their own choices and seek their own futures in such a context is simply wishful thinking. Many people involved in efforts to relocalize have grasped this, and believe that deliberately standing aside from systems controlled by national governments and multinational corporations offers one of the few options for regaining personal and community autonomy in the face of an increasingly troubled future.

There are more points that can be made in favor of relocalization schemes, and you can find them on pro-relocalization websites all over the internet. For our present purposes, though, this fast tour of the upside will do, because each of these arguments comes with its own downside, which you most likely won’t find mentioned anywhere on those same websites.

The downside to the first argument is plain: When you step out of the global economy, you cut yourself off from the imperial wealth pump that provides people in America with the kind of abundance they take for granted. The lifestyles that are available in the absence of that wealth pump are far more restricted, and far more impoverished, than most would-be relocalizers like to think. Peasant cultures around the world are by and large cultures of poverty, and for good reason: by the time you, your family, and the others in your village have provided food on the table, thatch on the roof, a few necessary possessions, and enough of the local equivalent of cash to cover payments to the powers that be, whether those are feudal magnates or the local property tax collector, you’ve just accounted for every minute of labor you can squeeze out of a day.

That’s the rock on which the back-to-the-land movement of the Sixties broke. The life of a full-time peasant farmer scratching a living out of the soil is viable, and it may even be rewarding, but it’s not the kind of life that the pampered youth of the Baby Boom era were willing to put up with for more than a fairly brief interval. It may well be that economic relocalization is still the best available option for dealing with the ongoing unraveling of the industrial economy — in fact, I’d agree that this is the case — but I wonder how many of its proponents have grappled with the fact that what they’re proposing may amount to nothing more than a way to starve with dignity while many others are starving without it.

The downside to the second argument is subtler, but even more revealing. The best way to grasp it is to imagine two relocalization projects, one in Massachusetts and the other in South Carolina. The people in both groups are enthusiastic about the prospect of regaining their autonomy from the faceless institutions of a centralized society, and just as eager to bring back home to their own communities the power to make choices and pursue a better future. Now ask yourself this: what will these two groups do if they get that power? And what will the people in Massachusetts think about what the people in South Carolina will do once they get that power?

I’ve conducted a modest experiment of sorts along these lines, by reminding relocalization fans in blue states what people in red states are likely to do with the renewed local autonomy the people in the blue states want for themselves, and vice versa. Every so often, to be sure, I run across someone — more often on the red side of the line than the blue one — whose response amounts to “let ’em do what they want, so long as they let us do what we want.” Far more often, though, people on either side are horrified to realize that those on the other side of America’s cultural divide would use relocalization to enact their own ideals in their own communities.

More than once, in fact, the response has been a flurry of proposals to hedge relocalization about with restrictions so that it can only be used to support the speaker’s own political and social agendas, with federal bureaucracies hovering over every relocalizing community, ready to pounce on any sign that a community might try to do something that would offend sensibilities in Boston and San Francisco, on the one hand, or the Bible Belt on the other. You might think that it would be obvious that this would be relocalization in name only; you might also think that it would be just as obvious that those same bureaucracies would fall promptly into the hands of the same economic and political interests that have made the current system as much of a mess as it is. Permit me to assure you that in my experience, among a certain segment of the people who like to talk about relocalization, these things are apparently not obvious at all.

Despite all this, however, relocalization is among the best options we have. The fact that it comes with significant downsides, drawbacks, and tradeoffs does not nullify that. Every possible strategy, again, has downsides, drawbacks, and tradeoffs; whatever we choose to do to face the end of America’s age of empire, as individuals, as communities, or as a nation, problems are going to ensue and people are going to get hurt. Trying to find an option that has no downsides simply guarantees that we will do nothing at all — and in that case, equally, problems are going to ensue and people are going to get hurt. That’s how things work in the real world.

Thus I’d like to suggest that a movement toward relocalization is another crucial ingredient of a viable post-imperial America. In point of fact, we’ve got the structures in place to do the thing already; the only thing that’s lacking is a willingness to push back, hard, against certain dubious habits in the US political system that have rendered those structures inoperative.

Back in 1787, when the US constitution was written, the cultural differences between Massachusetts and South Carolina were very nearly as sweeping as they are today. That’s one of the reasons why the constitution as written left most internal matters in the hands of the individual states, and assigned to the federal government only those functions that concerned the national commons as a whole: war, foreign policy, minting money, interstate trade, postal services, and a few other things. The list was expanded in a modest way before the rush to empire, so that public health and civil rights, for example, were brought under federal supervision over the course of the 19th century. Under the theory of government described earlier in this chapter, these were reasonable extensions, since they permitted the federal government to exercise its function of securing the national commons.

Everything else remained in the hands of the states and the people. In fact, the tenth amendment to the US constitution requires that any power not granted to the federal government in so many words be left to the states and the people — a principle which, perhaps not surprisingly, has been roundly ignored by everyone in Washington, DC, for most of a century now. Under the constitution and its first nineteen amendments, in fact, the states were very nearly separate countries which happened to have an army, navy, foreign policy, and postal system in common.

Did that system have problems? You bet. What rights you had and what benefits you could expect as a citizen depended to a huge extent on where you lived — not just which state, but very often which county and which township or city as well. Whole classes of citizens might be deprived of their rights or the protection of the laws by local politicians or the majorities that backed them, and abuses of power were pervasive. All of that sounds dreadful, until you remember that the centralization of power that came with America’s pursuit of empire didn’t abolish any of those things; it simply moved them to a national level. Nowadays, serving the interests of the rich and influential at the expense of the public good is the job of the federal government, rather than the local sheriff, and the denial of civil rights and due process that used to be restricted to specific ethnic and economic subgroups within American society now gets applied much more broadly.

Furthermore, one of the things that’s rendered the US government all but incapable of taking any positive action at all in the face of a widening spiral of crises is precisely the insistence, by people in Massachusetts, South Carolina, and the other forty-eight states as well, that their local views and values ought to be the basis of national policy. The rhetoric that results, in tones variously angry and plaintive, amounts to “Why can’t everyone else be reasonable and do it my way?” — which is not a good basis for the spirit of compromise necessary to the functioning of democracy, though it makes life easy for advocacy groups who want to shake down the citizenry for another round of donations to pay for the never-ending fight.

One of the few things that might succeed in unsticking the gridlock, so that the federal government could get back to doing the job it’s supposed to do, would be to let the people in Massachusetts, South Carolina, and the other forty-eight states pursue the social policies they prefer on a state by state basis. Yes, that would mean that people in South Carolina would do things that outraged the people in Massachusetts, and people in Massachusetts would return the favor. Yes, it would also mean that abuses and injustices would take place. Of course abuses and injustices take place now, in both states and all the others as well, but the ones that would take place in the wake of a transfer of power over social issues back to the states would no doubt be at least a little different from the current ones.

Again, the point of relocalization schemes is not that they will solve every problem. They won’t, and in fact they will certainly cause new problems we don’t have yet. The point of relocalization schemes is that, all things considered, if they’re pursued intelligently, the problems that they will probably solve are arguably at least a little worse than the problems that they will probably cause. Does that sound like faint praise? It’s not; it’s as much as can be expected for any policy this side of Neverland. In the real world, every solution brings problems of its own.

Now in fact, relocalization has at least two other benefits that tip the balance well into positive territory. The larger the area that has to be governed from a single political center, all things considered, the more energy and resources will be absorbed in the process of governing. This is why, before the coming of the industrial age, nations on the scale of the present United States of America rarely existed, and when they did come into being, they generally didn’t last for more than a short time. In an age of declining energy availability and depleting resources, the maintenance costs of today’s sprawling, centralized United States government won’t be affordable for long. Devolving all nonessential functions of the central government to the individual states, as the US Constitution mandates, might just cut costs to the point that some semblance of civil peace and democratic governance can hang on for the long term.

That probably doesn’t seem like much to those whose eyes are fixed on fantasies of a perfect world, and are convinced they can transform it from fantasy to reality as soon as everyone else stops being unreasonable and comes to agree with them. Still, it’s better than most potential outcomes available to us in the real world — and again, we don’t live in Neverland.

Furthermore, the political evolution of the United States over the last century has concentrated so many of the responsibilities of government in Washington, DC, that the entire American system is beginning to crack under the strain. To pursue this point a little further, I’d like to offer two predictions about the future of American government.

The first is that the centralization of government has almost certainly reached its peak, and will be reversing in the decades ahead of us. The second is that, although there will inevitably be downsides to that reversal, it will turn out by and large to be an improvement over the system we have today. These predictions unfold from a common logic; both are consequences of the inevitable failure of overcentralized government.

It’s easy to get caught up in abstractions here, and even easier to fall into circular arguments around the functions of political power that attract most of the attention these days — for example, the power to make war. It’s more useful to start with a function of government slightly less vexed by misunderstandings. The one I have in mind is education.

In the United States, for a couple of centuries now, the provision of free public education for children has been one of the central functions of government. Until fairly recently, in most of the country, it operated in a distinctive way. Under legal frameworks established by each state, local school districts were organized by the local residents, who also voted to tax themselves to pay the costs of building and running schools. Each district was managed by a school board, elected by the local residents, and had extensive authority over the school district’s operations.

In most parts of the country, school districts weren’t subsets of city, township, or county governments, or answerable to them; they were single-purpose independent governments on a very small scale, loosely supervised by the state and much more closely watched by the local voters. On the state level, a superintendent of schools or a state board of education, elected by the state’s voters, had a modest staff to carry out the very limited duties of oversight and enforcement assigned by the state legislature. On the federal level, a bureaucracy not much larger supervised the state boards of education, and conducted the even more limited duties assigned it by Congress.

Two results of that system deserve notice. First of all, since individual school districts were allowed to set standards, choose textbooks, and manage their own affairs, there was a great deal of diversity in American education. While reading, writing, and ’rithmetic formed the hard backbone of the school day, and such other standard subjects as history and geography inevitably got a look-in as well, what else a given school taught was as varied as local decisions could make it. What the local schools put in the curriculum was up to the school board and, ultimately, to the voters, who could always elect a reform slate to the school board if they didn’t like what was on the local curriculum.

Second, the system as a whole gave America a level of public literacy and general education that was second to none in the industrial world, and far surpassed the poor performance of the far more lavishly funded education system the United States has today. Earlier in this book I encouraged readers to compare the Lincoln-Douglas debates of 1858 to the debates in our latest presidential contest, and to remember that most of the people who listened attentively to Lincoln and Douglas had what then counted as an eighth-grade education. The comparison has plenty to say about the degeneration of political thinking in modern America, but it has even more to say about the extent to which the decline in public education has left voters unprepared to get past the soundbite level of thinking.

Those of my readers who want an even more cogent example should leaf through a high school textbook from before the Second World War. You’ll find that the levels of reading comprehension, reasoning ability, and mathematical skill expected as a matter of course from ninth-graders in 1930 are hard to find among American college graduates today. If you have kids of high school age, spend half an hour comparing the old textbook with the one your children are using today. You might even consider taking the time to work through a few of the assignments in the old textbook yourself.

Plenty of factors have had a role in the dumbing-down process that gave us our current failed system of education, to be sure, but the centralization of control over the nation’s educational system in a few federal bureaucracies played a crucial role. To see how this works, again, a specific example is useful. Let’s imagine a child in an elementary school in Lincoln, Nebraska, who is learning how to read. Ask yourself this: of all the people concerned with her education, which ones are able to help that individual child tackle the daunting task of figuring out how to transform squiggles of ink into words in her mind?

The list is fairly small, and her teacher and her parents belong at the top of it. Below them are a few others: a teacher’s aide if her classroom has one, an older sibling, and a friend who has already managed to learn the trick. Everyone else involved is limited to helping these people do their job. Their support can make that job somewhat easier — for example, by making sure that the child has books, by seeing to it that the classroom is safe and clean, and so on — but they can’t teach reading. Each supporting role has supporting roles of its own; thus the district’s purchasing staff, who keep the school stocked with textbooks, depend on textbook publishers and distributors, and so on. Still, the further you go from the child trying to figure out that C-A-T means “cat,” the less effect any action has on her learning process.

Now let’s zoom back twelve hundred miles or so to Washington, DC, and the federal Department of Education. It’s a smallish federal bureaucracy, which means that in 2011, the latest year for which statistics have been published as of this writing, it spent around $71 billion. Like many other federal bureaucracies, its existence is illegal. I mean that quite literally; the US Constitution assigns the federal government a limited range of functions, and those powers “necessary and proper” to exercise them. By no stretch of the imagination can managing the nation’s public schools be squeezed into those limits. Only the Supreme Court’s supine response to federal power grabs throughout the twentieth century allows the department to exist at all.

So we have a technically illegal bureaucracy running through $71 billion of the taxpayers’ money in a year, which is arguably not a good start. The question I want to raise, though, is this: what can the staff of the Department of Education do that will have any positive impact on that child in the classroom in Lincoln, Nebraska? They can’t teach the child themselves; they can’t fill any of the supporting roles that make it possible for the child to be taught. They’re more than a thousand miles away, enacting policies that apply to every child in every classroom, irrespective of local conditions, individual needs, or any of the other factors that make teaching a child to read different from stamping out identical zinc bushings.

There are a few — a very few — things that can usefully be done for education at the national level. One of them is to make sure that the child in Lincoln is not denied equal access to education because of her gender, her skin color, or the like. Another is to provide the sort of overall supervision to state boards of education that state boards of education traditionally provided to local school boards. There are a few other things that belong on the same list. All of them can be described as measures to maintain the commons.

Public education is a commons. The costs are borne by the community as a whole, while the benefits go to individuals: the children who get educated, the parents who don’t have to carry all the costs of their children’s education, the employers who don’t have to carry all the costs of training employees, and so on. Like any other commons, this one is vulnerable to exploitation when it’s not managed intelligently, and like most commons in today’s America, this one has taken plenty of abuse lately, with the usual consequences. What makes this situation interesting, in the sense of the apocryphal Chinese proverb, is that the way the commons of public education is being managed has become the principal force wrecking the commons.

The problem here is precisely that of centralization. The research for which economist Elinor Ostrom won her Nobel Prize showed that, by and large, effective management of a commons is a grassroots affair; those who will be most directly affected by the way the commons is managed are also its best managers.6 The more distance between the managers and the commons they manage, the more likely failure becomes, because two factors essential to successful management simply aren’t there. The first is immediate access to information about how management policies are working, or not working, so that those policies can be changed if they go wrong; the second is a personal stake in the outcome, so that the managers have the motivation to recognize when a mistake has been made, rather than allowing the psychology of previous investment to seduce them into pursuing a failed policy right into the ground.

Those two factors don’t function in an overcentralized system. Politicians and bureaucrats don’t get to see the consequences of their failed decisions up close, and they don’t have any motivation to admit that they were wrong and pursue new policies — quite the contrary, in fact. Consider, for example, the impact of the No Child Left Behind Act, pushed through Congress by bipartisan majorities and signed with much hoopla by George W. Bush in 2002. In the name of accountability — a term that in practice means “finding someone to punish” — the Act requires mandatory standardized testing at specific grade levels, and requires every year’s scores to be higher than the previous year’s, in every school in the nation. Teachers and schools that fail to accomplish this face draconian penalties.

My readers may be interested to know that in 2014, by law, every child in America must perform at or above grade level. It’s reminiscent of the imaginary town of Lake Wobegon — “where all the children are above average” — except that this is no joke. What’s left of America’s public education system is being shredded by the efforts of teachers and administrators to save their jobs in a collapsing economy, by teaching to the tests and gaming the system, under the pressure of increasingly unrealistic mandates from Washington. Standardized test scores have risen slightly; meaningful measures of literacy, numeracy, and other real-world skills have continued to move raggedly downward. You can bet that the only response anybody in Washington is going to be willing to discuss is yet another round of federal mandates, most likely even more punitive and less effective than the current set.

Though I’ve used education as an example, nearly every part of American life is pervaded by the same failed logic of over-centralization. Consider the Obama administration’s giddy pursuit of national security via drone attacks. As currently operated, Predator drones are the ne plus ultra in centralized warfare. The president himself must authorize each drone attack. The drone is piloted via satellite link from a base in Nevada, and you can apparently sit in the situation room in the White House and watch the whole thing live. Hundreds of people have been blown to kingdom come by these attacks so far, in the name of a war on terror that Obama’s party used to denounce.

Now of course that habit only makes sense if you’re willing to define young children and wedding party attendees as terrorists, which seems a little extreme to me. Leaving that aside, though, the key question is whether the program is working. Since none of the areas under attack are any less full of anti-American insurgents than they ever were, and the jihadi movement has been able to expand its war dramatically over the last few years into North Africa, the answer is pretty clearly no. However technically superlative the drones themselves are, the information that guides them comes via the notoriously static-filled channels of intelligence collection and analysis, and the decision to use them takes place in the even less certain realms of tactics and strategy. Nor is it exactly bright, if you want to dissuade people from seeking out Americans and killing them, to go around vaporizing people nearly at random in parts of the world where avenging the murder of a family member is a sacred duty.

In both cases, and others like them, we have alternatives, but they all require the recognition that the best response to a failed policy isn’t a double helping of the same. That recognition is nowhere in our collective conversation at the moment. It would be useful if more of us were to make an effort to put it there, but there’s another factor in play. One of the few good things about really bad policies is that they’re self-limiting; sooner or later, a system that insists on embracing them is going to crash and burn. Once the smoke clears, it’s not too hard for the people standing around the crater to recognize that something has gone very wrong. In that period of clarity, it’s possible to make a great many changes, especially if there are clear alternatives available and people advocating for them.

In the great crises that ended each of America’s three previous rounds of anacyclosis — in 1776, in 1861, and in 1933 — a great many possibilities that had been unattainable due to the gridlocked politics of the previous generation suddenly came within reach. In those past crises, the United States was an expanding nation, geographically, economically, and in its ability to project power in the world; the crisis immediately ahead bids fair to arrive in the early stages of the ensuing contraction. That difference has important effects on the nature of the changes before us.

Centralized control is costly — in money, in energy, in every other kind of resource. Decentralized systems are much cheaper. In the days when the United States was mostly an agrarian society, and the extravagant abundance made possible by a global empire and reckless depletion of natural resources had not yet arrived, the profoundly localized educational system I sketched out earlier was popular because it was affordable. Even a poor community could count on being able to scrape together the political will and the money to establish a school district, even if it was only a one-room schoolhouse with one teacher taking twenty-odd children a day through grades one through eight. That the level of education that routinely came out of such one-room schoolhouses was measurably better than that provided by today’s multimillion-dollar school budgets is just one more irony in the fire.

On the downside of America’s trajectory, as we descend from empire toward whatever society we can manage to afford within the stringent limits of a troubled biosphere and a planet stripped of most of its nonrenewable resources, local systems of the one-room schoolhouse variety are much more likely to be an option than centralized systems of the sort we have today. That shift toward the affordably local will have many more consequences, and the next chapter will address some of the most important of those.