BLIND SPOTS
The Failure of Economic Leadership
So what happens to a country when key industries collapse, when incomes stagnate, when ordinary people get by through borrowing, and when its top competitors become its leading creditors?
Nothing good.
In the United States, the most prominent vestige of three decades of debt-financed expansion is the high level of polarization in wealth and income. As I wrote, with Hockett and Roubini in “The Way Forward,” the credit bubble of the 2000s exacerbated
a trend toward wider income disparities in the United States that already had been steadily growing since the early 1980s. . . . But the problem went beyond bifurcated wages. It also involved a major change in the shares of income received by labor and capital. Because many workers were no longer sharing the fruits of the economy’s impressive productivity gains, capital was able to claim a much larger share of the returns, further widening wealth and income inequality which by 2008 had reached levels not seen since the fateful year of 1928.1
During the Gilded Age of the 1890s, and again after World War I, the incomes of the wealthiest American families grew so disproportionately large that those in the middle of the socioeconomic ladder saw their real incomes erode despite the growth of the overall economy.
This time around, as inequality soared, starting in the 1980s, ordinary Americans were able to sustain their living standards by borrowing trillions of dollars to finance homes, consumer goods, education, and medical expenses.
This was a twenty-first-century version of the feudal lord financing the annual planting and harvesting by his marginally enfranchised serfs. In a feudal system, though, the serfs who couldn’t pay their debts would remain permanently indentured—along with their offspring—until the debts were repaid. Not so in a modern free society populated by citizens who are not serfs. When the incomes of Americans proved unable to support the debt they had incurred, or when they saw their collateral for those debts fall in value below the amounts owed, they defaulted. And they continue to default on their debts as of this writing, albeit at a slower pace.
At the same time, corporate profits rose sharply after the Great Recession, which had eliminated the weakest competitors and left the survivors to thrive. As CNNMoney commented on 2011 corporate profits: “The Fortune 500 generated a total of $824.5 billion in earnings last year, up 16.4% over 2010. That beats the previous record of $785 billion, set in 2006 during a roaring economy.”2
Why did this happen, especially amid near-zero aggregate GDP growth in the developed world? The answer is simple. Amid a glut of global labor, employers have neither the incentive nor the need to raise the incomes of labor.
To see how dramatically workers have been losing out, consider wages as a percentage of GDP over the past sixty years. Back in the heyday of industrial capitalism, from the 1940s through the early 1970s, the share of national product going to workers every year was routinely over 50 percent of GDP. Labor unions were powerful and even top corporate leaders espoused the belief that workers should get a fair share of the prosperity they helped create, and that a modern economy worked best when labor, capital, and government worked in concert to produce steadily growing and shared prosperity. The highest-paid executive in America in 1950 was General Motors’s Charlie Wilson, who made $663,000—or no more than 40 times what average line workers in the company made.
The early postwar period is often referred to as the Great Compression. The extreme inequality of the Gilded Age and the 1920s dissipated and all boats really did rise equally as productivity increased and the economy expanded. Indeed, between 1945 and 1973, the wages of U.S. workers rose in almost perfect tandem with their productivity. American workers got what they deserved, and U.S. executives and shareholders did plenty well.
All this started to change in the 1970s. Wages as a share of GDP first dipped below 50 percent around 1974, and never again would labor command half of all the country’s output. If you look at a graph charting the rise of both productivity and wages since World War II, you’ll see that the two lines begin to diverge in the early 1970s and the gap just keeps getting wider and wider in the following decades.3 Workers were keeping less and less of the wealth they were producing. Owners of capital were keeping more and more.
The age of labor was over. The second age of capital had begun. By 2011, wages as a share of GDP had fallen to 43 percent. That same year, corporate profits claimed their largest share of GDP since before the Great Depression.
Labor simply is no longer a scarce or valuable resource worth “paying up” for (with limited exceptions in the high-value-added sectors). Toss in some technological developments that “saved” even more labor and you get the picture.
With unit prices held static, the less that is paid to labor, the higher the return to capital. The greater the polarization of wealth, the smaller the share of the population participating in corporate (capital) ownership and the returns thereon. And the greater the polarization of wealth and income, the greater the need to supplement income with borrowing. The virtuous circle of investment and economic growth devolved into a vicious circle of debt, disinvestment, and slump.
One would expect wealth and income polarization to be a typical attribute of emerging nations. Whether it is the barons of nascent, relatively free-market democracies or the oligarchs of authoritarian “banana republics,” we forgive the rush to wealth and the shortsightedness present in nations just beginning to flex their economic muscle. Yet a key feature of advanced nations (until the present age, it seems) had been a widespread belief that economic strength was ensured by an economically—and, of course, politically—broadly enfranchised population. After all, an industrial economy’s workers are also its consumers, as Henry Ford famously recognized in 1914, when he began paying his workers the then unheard-of wage of five dollars a day. “It is not the employer who pays the wages,” Ford once said. “Employers only handle the money. It is the customer who pays the wages.”
What we are seeing today is the emergence of a global plutocracy, not only in the get-rich-quick emerging nations but in the advanced nations as well, where a Second Gilded Age has been under way.
This plutocracy’s members tend to be only marginally concerned about the countries in which they actually live, seeing themselves instead (because they run corporations that operate globally) as citizens of the world. They have not merely enjoyed wealth and privilege, but have—perhaps to rationalize their station in a meritocracy in which luck also plays a significant role, or perhaps to justify it—formed an aristocracy of ideas, of self-generated and self-sustaining ideologies, and have promoted a “philanthrocapitalism” that has rendered the noblesse oblige of the original Gilded Age positively quaint by comparison.
In her 2012 book, Plutocrats: The Rise of the New Global Super-Rich and the Fall of Everyone Else, the journalist Chrystia Freeland writes that “the ambition of the philanthro-capitalists doesn’t stop at transforming how charity works. They want to change how the state operates, too. These are men who have built their business by achieving the maximum impact with the minimum effort—either as financiers using leverage or technologists using scale. They think of their charitable dollar the same way. . . . The plutocrat-as-politician is becoming an important member of the world’s governing elite . . . [and] can use his own money to bankroll his campaign directly, and also to build a network of civic support through the less explicitly political donations of his personal foundation. Some farsighted plutocrats try to use their money not merely to buy public office for themselves but to redirect the reigning ideology of a nation, a region, or even the world.”4
In fact, the thinking advanced by some American, British, and even continental European members of this vaunted group would make the industrialists—even, perhaps, the royalty of yore—blush. And what they are saying is often mimicked by members of the political classes, especially some of those who entered politics after having made their own fortunes.
It’s not easy to describe the plutocratic worldview precisely, even though I have heard it recited many times in different ways in the business and political circles I travel in. Politicians and the media alike tend to give this view a wide berth, but here are a few key implicit points you often hear from some of today’s super-wealthy (and perhaps more so from their sycophants):
■ That they are smarter than other people, and especially politicians, bureaucrats, school superintendents, and old-style nonprofit leaders. Why? Because they have made their fortune reinventing whole sectors, such as technology and finance.
■ That because they are so smart, and so good at solving problems and reinventing things, they deserve their fortunes, however crazily outsized these may be compared to what ordinary people earn. More than that, they deserve to be running large swaths of U.S. policy, such as education—and maybe even the executive branch, where their business smarts can be used to better manage the economy, as Mitt Romney repeatedly suggested.
■ That if we want more people like them around—the people who actually create the jobs and create revolutionary new products and services—we need to incentivize risk taking and wealth creation, not squelch it. We need to stop pushing antiquated public policies that hold the “job creators” back, whether it’s by taxes on capital gains, restrictions on educated immigrants, or any number of silly red-tape rules that govern the business sector, including the blizzard of regulations in the Dodd-Frank Wall Street Reform and Consumer Protection Act.
■ That the main reason Americans are holding them back—with taxes and regulation and their biases against independently wealthy candidates or billionaires dominating education policy—is that so many citizens, 47 percent according to Romney, are “takers” who benefit from keeping the wealthy down and who rely on a welfare state largely financed by the wealthy.
Sounds pretty awful laid out that way, and very self-serving, but is it that far off the mark? Of course, not all of those with substantial wealth embrace these views. Some believe they got rich because of plain good fortune and no small amount of sweat. Many are happy to pay higher taxes and believe strongly in the social safety net. And many do think that strong government watchdogs are needed to keep business honest and the financial system stable.
Still, it’s hard to avoid the feeling that a good chunk of the developed world’s upper class really does have a serious blind spot in these matters, and really does believe that it deserves, if anything, even more influence in society than it presently has.
In December 2012, I had a chance to spend a moment with the technology visionary, entrepreneur, and venture capitalist Marc Andreessen. Andreessen, you may recall, co-created Mosaic, the first widely adopted Web browser, and went on to cofound Netscape and to invest in and nurture many other technology companies that have become household names.
With a net worth estimated at over half a billion dollars in 2012, and a wife who is the daughter of Silicon Valley’s wealthiest real estate owner, Andreessen sits comfortably within the top 0.01 percent of U.S. households—about 11,500 families—perhaps even near the top 0.001 percent. Andreessen, who was born and raised in the Midwest and now resides in Silicon Valley, supported the presidential bid of Barack Obama in 2008 and then switched his allegiance to back Mitt Romney in 2012.
Andreessen spoke to a room of tech-fascinated young and middle-aged businesspeople, reporters, and economic policy veterans, all very interested in what he had to say—much of which was quite visionary, almost stunningly so. It was clear why Andreessen is so well regarded and why his timing leads to extraordinary success—and he’s a pretty nice fellow, too. He and, especially, his wife are also massively philanthropic and, apparently, socially concerned, something I truly admire and respect. But then, as he was discussing all the wonders that technology would bring—for instance, the elimination of retail stores and the potential for a crash in the value of commercial real estate (presumably retail and office), the disintermediation of traditional industries, and the “onshoring” of production as robotics eliminated the emerging markets’ labor advantage—someone asked him what that technology would do to jobs, which are already less than plentiful.
Without jobs, the questioner asked, how is the middle class going to keep paying for all the great new products and services that get cooked up by the tech sector?
I then witnessed an interesting transformation. Here was a clearly brilliant man, not to the manor born in any respect, faced with a very relevant—and troubling—question. But what poured out of Andreessen in reply was not a visionary answer, but a fairly pointed diatribe that began with the words, “The middle class is a myth, it no longer exists.”
Andreessen’s view can be summarized in three points, all of which I was stunned to hear, coming from a smart futurist like him:
■ The American middle class was an anomaly resulting from the World War II decimation of Europe and Japan, which left the United States the only producing nation for the world and gave U.S. labor an enormous advantage that, in Andreessen’s view, lasted some three decades, until Japanese competition forced out inefficient industries that overrewarded “high school graduate” labor.
■ The U.S. labor force is unskilled and undereducated and there are thousands of jobs going unfilled for tech-savvy engineers, designers, and programmers. If only Americans understood this and were trained with the right skills.
■ The federal government needs to “get out of the way” and let innovators innovate so they can provide the necessary jobs and supply plentiful and ever-cheaper goods.
It’s hard to know where to start in responding to these views, which are not uncommon among America’s super-wealthy. For starters, during the best years the United States ever had—from 1945 through 1975—government did anything but “get out of the way.” To the contrary, government was closely involved in building the key foundations for the creation of national wealth. By building the public university systems and heavily subsidizing private universities, government helped create the largest pool of human capital the world had ever seen. By funding technological innovation, both directly through spending on science and indirectly through military projects, government laid the groundwork for innumerable innovations—including the Internet that Andreessen used to make his fortune. And by funding the Interstate Highway System and an advanced national air traffic control system, government helped foster greater mobility for goods, services, and ideas. Tight regulation of the financial sector also ensured that the economy ran pretty smoothly, in contrast with more recent times.
As for all those unskilled or mistrained workers, they’re an example of what happens when we take a laissez-faire approach to something as important as human-capital allocation. In countries like Denmark, government plays a key role in ensuring that there are enough trained workers for key growth sectors by investing enormous sums in redeploying and retraining labor as the economy changes. The United States doesn’t do that, instead leaving human-capital decisions up to thousands of individual educational institutions. Sometimes it seems that the University of Phoenix—the behemoth for-profit college with 500,000 students—plays a bigger role in shaping human capital than the federal government. More about that later, too.
One could also note that in the United States and Europe, increasing the number of trained tech workers would barely make a dent in the oversupply of domestic labor, to say nothing of global excess labor. And there’s a curious thing about the tech industry demanding immigration-law changes in the United States and claiming that there is an insufficient number of trained high-tech workers. As Ross Eisenbrey of the Economic Policy Institute wrote in a February 2013 New York Times op-ed:
If anything, we have too many high-tech workers [in the U.S.]: more than nine million people have degrees in a science, technology, engineering or math field, but only about three million have a job in one. That’s largely because pay levels don’t reward their skills. Salaries in computer- and math-related fields for workers with a college degree rose only 4.5 percent between 2000 and 2011. If these skills are so valuable and in such short supply, salaries should at least keep pace with the tech companies’ profits, which have exploded. And while unemployment for high-tech workers may seem low—currently 3.7 percent—that’s more than twice as high as it was before the recession. If there is no shortage of high-tech workers, why would companies be pushing for more? Simple: workers under the H-1B [guest worker visa] program aren’t like domestic workers—because they have to be sponsored by an employer, they are more or less indentured, tied to their job and whatever wage the employer decides to give them.5
Finally, as one who is inundated with hardware, software, and free or nearly free apps that seem to be able to do everything except call my mother and tell her how things are going in my life, I am seeing no lack of innovation in the tech sector and certainly no evidence that big government is stifling the Marc Andreessens of the world.
But Andreessen’s Q&A response was not the real story I took from that evening. After the formal presentation I took him aside for a few moments and posed a different set of questions. What, I asked, did he think would happen to wages and prices in a world already oversupplied with labor and manufacturing capacity when additional technological efficiencies eliminated even many service employees—in the sector that now provides 70 percent of all employment? From where would demand derive? Furthermore, if Andreessen was correct about the coming lack of demand for commercial real estate, for example, what would become of the wealth lodged in those assets as they deflated in value? And finally, with a decline in top-line pricing power—even for tech companies with best-of-class products—as wages (the ability to pay) declined in nominal terms, what did he think would happen to the value of companies in which he was investing his and his clients’ money? (Apple’s fourth-quarter 2012 financial results, released after my encounter with Andreessen and at the time of this writing, brought this trend into full focus—with gross margins depressed as the company’s groundbreaking and superlative products now go head to head with rivals that are just as good and cheaper.)
Needless to say, Andreessen is less a macroeconomic visionary than a tech revolutionary. A very smart man who, like many others in his field, can see trends clearly and tap into developments the average intellectual is generally unaware of, Andreessen had no answer—nor did I expect him to. He is an engineer, not an economist.
Clearly, this is not the time to have our best and brightest in the developed world abandon aspects of their intellect to convenient or self-serving points of view. It is time to ask if they have actually thought through what they are saying. The tech arena has always had its versions of the confidence fairy and exceptionalism described in earlier pages—it’s called “the next big thing.” Yet given that so many of the big things of the past fifteen years have disrupted or disintermediated more jobs-intensive alternatives in the service sector, one might ask, Who is going to buy the next big thing? And how are they going to be able to pay for it?
Might it be reasonable for tech companies, even those with enormously competitive products and services, to test their pricing assumptions—not just relative to competitors but relative to what their customers will likely be able to afford to pay in nominal terms as global wages and prices adjust? Sure, Apple sells plenty of iPhones and iPads in developing markets—and will sell more—but what exactly is going to happen if consumers in the largest consuming nations keep making less and less money as capital keeps capturing the bulk of productivity gains? Our present-day technology industrialists need to recall the admonitions of Henry Ford.
Too many business leaders simply aren’t asking such big macroeconomic questions. And too many businesses are subscribing to ideologies that either ignore—or offer the wrong answers to—these questions.
I’ll say more about all this shortly. But first let me discuss another consequence of today’s stagnant and uneven economy that also is too often ignored by the business and political leadership of the developed world: the unraveling of the social fabric.
In all parts of the world, developed or otherwise, economic enfranchisement is a key to maintaining stable, democratic governance. I would go so far as to assert that ultimately it is the best guarantor of private-property rights and the rule of law. An otherwise free society is asking for trouble, big trouble, if a large segment of its population can’t find good-enough jobs or enjoy decent living standards. This is especially true if widespread hardship exists alongside growing affluence at the top of the economic ladder.
What kind of trouble am I talking about? Historically, three outcomes have tended to flow from mass economic disenfranchisement. First, private property, general wealth, and income may become vulnerable to appropriation via taxation, government seizure, or populist revolt. Second, the rule of law and democratic governance may be compromised or suspended, either by revolutionaries seeking “justice” or by oligarchs seeking to protect their private ownership. And third, growing crime exacts its own economic penalty in the guise of theft, corruption, or loss of physical security or life itself.
If you think none of these things could happen here in the United States, or in Europe, think again. In fact, some of this is already happening. The upsurge of populism in the United States between 2009 and 2011 stands as a major hint as to how economic instability can translate to political instability.
The Tea Party, which emerged largely in reaction to the Wall Street bailouts and the massive fiscal and monetary stimulus actions of the federal government, reshaped American politics in very short order. Among other things, the Tea Party’s emergence served to further endanger all moderate Republicans—respected centrists like Richard Lugar, who suffered a conservative backlash in 2012, were knocked out in primaries. In turn, a more extremist GOP has pushed the United States to the brink of economic disaster on several occasions—first with the debt ceiling battle of 2011 and then with the fiscal cliff standoff in late 2012. Also, it’s fair to say that if another financial crisis hit today, an emergency measure like TARP (the Troubled Asset Relief Program) would not pass the Tea Party–dominated House.
The rise of the Tea Party shows how economic and political instability can interact and feed on each other. Economic disruption tends to empower fringe leaders, and these leaders in turn can take steps that amplify such disruption and create further chaos.
Occupy Wall Street proved far less powerful than the Tea Party, but showcased the potential of another kind of populism—one aimed at taking down rich people. A backlash against wealth is inevitable—and given popular support in the United States for tax increases on the wealthy, is really already under way—if most people are suffering, year after year, even as those at the top see big income gains. That’s been the situation in the United States for several years now, and the only wonder is that Occupy Wall Street took so long to emerge. I wouldn’t be at all surprised if, given today’s wealth polarization, another, perhaps stronger, anti-rich movement emerges in coming years.
Of course, social and political instability is far more widespread in Europe. Austerity has created massive pain in Spain, Ireland, Italy, Portugal, and Greece, much of it inflicted on people who had nothing to do with the reckless borrowing, both private and public, that brought these countries to their knees. Violent street protests have become common in Europe, and far-right extremist parties, such as the Golden Dawn in Greece, have emerged to command large new followings. Meanwhile, the economic pain throughout Europe has reduced birthrates and further slowed population growth, trends that pose an obvious threat to economic growth.
Some Europeans are simply giving up on life altogether. As CNBC reported in fall 2012: “A growing number of global and European health bodies are warning that the introduction and intensification of austerity measures has led to a sharp rise in mental health problems with suicide rates, alcohol abuse and requests for anti-depressants increasing as people struggle with the psychological cost of living through a European-wide recession.”6
In the UK, the suicide rate in 2011 was 15 percent higher than in 2007, before everything fell apart.7 Greece, which had one of the lowest suicide rates in Europe back when times were good, has been grappling with a skyrocketing suicide rate since 2010 as the country struggles with an economic calamity on par with America’s Great Depression.8 Several people have shot themselves to death in public squares in downtown Athens. Japan has a cultural history of suicide among those who have been shamed or believe themselves to have become a burden to others. Suicide in Japan has now become epidemic, with the number of deaths per capita rising by more than 35 percent from 1995 to 2009 as the “lost decades” took their toll. (The rate has receded somewhat only in the past few years.)9
If you think the United States has been spared untimely deaths, you might consider that suicide rates there, among those thirty-five to sixty-four years of age, rose by 30 percent from 1999 to 2010, and among men in their fifties, by nearly 50 percent during the same period.10 As another example of distress, consider how many disturbed people have lately been engaging in murderous rampages:
During the six painful years following the peak of the credit and asset bubble in 2006, from 2007 through 2012, the average number of people killed or injured per year in incidents of mass murder in the United States was nearly three times the comparable average for the quarter century from 1982 through 2006.11 Bad enough, but consider this: during the happy-go-lucky bubble period, from 2000 through 2006—a period of perhaps the lowest financial stress for the broad population of the United States (despite significant geopolitical challenges)—we saw the lowest average number of people killed or injured per year in incidents of mass murder of any such period within the thirty-one years from 1982 through 2012: a mere 41 percent of the average over those thirty-one years.
The deadliest of all thirty-one of those years was the final one—2012, which alone saw deaths and injuries from such incidents at more than five times the average annual rate of the thirty prior years, punctuated by the December murders of twenty of the youngest students in an elementary school in Newtown, Connecticut, together with six of their teachers and administrators.
Now, mass-murder statistics may not make a wholly convincing argument for present conditions rending the social fabric of the United States, but the foregoing comparison of the bubble period to the post-crash period is at least worthy of comment.
Ultimately, economic disenfranchisement needs to be examined from multiple perspectives, and while some indicators—such as suicide—are dramatic and very visible, others—such as the erosion of the rule of law or appropriation of income and wealth—are far more gradual.
The bottom line is that extreme things happen to the social fabric of societies with bad economies and high inequality.
Perhaps the biggest cost of the long downturn is the decimation of a generation of young people who have been unfortunate enough to come of age during these trying economic times. Young people were already struggling economically before the financial crisis, facing high education costs, a challenging job market, and daunting home prices. The policy analyst Tamara Draut explored these problems in her prescient 2006 book, Strapped: Why America’s 20- and 30-Somethings Can’t Get Ahead.12
Of course, now things are worse—a lot worse. And not just in the United States, but across the developed world.
The United States has seen a very significant drop in the labor force participation rate since it peaked at the beginning of the millennium. Beginning in 2007 that decline became a free fall, with millions of people exiting the labor force. Much of the decline was attributable to frustrated job seekers giving up on finding employment, but the conventional wisdom has held that the overall aging of the American population was what pushed participation rates lower. This, at first blush, seems to make sense. But when one looks under the hood of U.S. employment statistics, a different picture emerges. As it turns out, older Americans are returning to the labor force, not leaving it, probably out of a need to keep working in the face of eroded wealth. Instead, it is younger Americans whose labor force participation rates have declined.
As my friend and fellow member of the New America Foundation’s World Economic Roundtable, Steve Blitz, chief economist of ITG Investment Research, wrote in December 2012: “By some measures, the labor force participation rate of 16- to 24-year-olds has dropped from near 70% to around 55%. For those between 25 and 34, the rate is down to 80% from close to 90% a decade ago. . . . The absence of so many young people from the labor force will have profound consequences for the economy. If these trends persist, future spending patterns will likely be very different than the one we have generally experienced during the past 50 years or so.”
More than that, though, these figures show how in the United States, as in much of Europe, the pain of economic distress has fallen squarely on the young. In the more labor-protective markets of the European Union, not only is demand for labor slack, but there is a fear of hiring younger people as full-time employees because of the difficulty in laying them off in a downturn. As Steven Erlanger wrote in The New York Times in December 2012:
Throughout the European Union, unemployment among those aged 15 to 24 is soaring—22 percent in France, 51 percent in Spain, 36 percent in Italy. But those are only percentages among those looking for work. There is another category: those who are “not in employment, education or training,” or NEETs, as the Organization for Economic Cooperation and Development calls them. And according to a study by the European Union’s research agency, Eurofound, there are as many as 14 million out-of-work and disengaged young Europeans, costing member states an estimated €153 billion, or about $200 billion, a year in welfare benefits and lost production—1.2 percent of the bloc’s gross domestic product. . . . As dispiriting, especially for the floating generation, is that 42 percent of those young people who are working are in temporary employment, up from just over one-third a decade ago, the Eurofound study said. Some 30 percent, or 5.8 million young adults, were employed part time—an increase of nearly 9 percentage points since 2001.13
The economic distress being wrought upon the young is not confined to the existing shortage of jobs; it has far more insidious and longer-term economic and social consequences.
Millions upon millions of our well-educated young are not developing the skill sets and getting the experience they need to be competitive in their domestic labor markets, much less helping their national economies compete on a global basis. As Blitz notes above, the spending patterns of the present generations of sixteen- to thirty-four-year-olds are likely to be very different from those of prior generations, with the former having little in the way of resources to consume during what are typically the peak consumption years of their thirties and forties. They are also less likely to form families and to invest in major capital items such as homes.
Of course, present pressures on our younger generation go even beyond the unemployed and underemployed. I had a conversation in mid-2012 with a woman in her late twenties—well employed and very well educated—about deflation and shopping. I asked whether she had noticed, in her own Internet-based retail behavior, that she seldom had difficulty buying goods online at marked-down prices if she merely waited for merchants to clear their inventories or simply sought a lower price via shopping aggregators such as Google Shopping. Or, for that matter, whether she engaged in “store-switching” (either online or in bricks-and-mortar retail stores) in order to obtain a lower price for the same or similar goods, a practice that economists have noted contributes to hidden deflation, as it is not accurately reflected in the calculation of many consumer price indices.
What I got back was a surprising, good-natured rebuke: “What do you mean by ‘shopping’? My friends and I don’t shop much. After rent, food, the payments on our student loans, and an occasional drink or meal with each other, there’s nothing left to shop with.”
Apparently, for all my concerns about the unemployment of our young adult population, and their prospects for the future, I had missed seeing that many of those gainfully employed today entered the workforce heavily indebted and are not following older patterns of consumption.
One last thing about young people: inherited money is unlikely, as some researchers once suggested it would, to bail them out down the line. Inheritance is generally a positive social force, in spite of the conventional wisdom that the passing down of wealth serves to make the rich even richer and further increases inequality. To the contrary, the math of inheritance is that it tends to deconcentrate wealth (unless one has only a single heir and bequeaths all of one’s estate to him or her): split an estate among several heirs, and each ends up with only a fraction of the original fortune. In fact, intergenerational transfer has far more of an impact on the less wealthy (as a percentage of net worth and of lifetime income) than it does on the affluent. That’s why many hoped that the trillions of dollars—by some estimates, as much as $41 trillion—expected to pass to younger Americans in coming decades could help solve some pressing economic problems, such as the failure of many households to build much (or any) retirement wealth. Young people may be bruised by today’s economy, the logic went, but they’ll be okay when their parents kick off and pass down even modest estates.
Alas, though, this great avalanche of money from the old to the young is unlikely to descend, according to a 2011 study by Edward N. Wolff of New York University and Maury Gittleman of the U.S. Bureau of Labor Statistics, “Inheritances and the Distribution of Wealth, or, Whatever Happened to the Great Inheritance Boom?”14
Wolff and Gittleman show evidence of a decline in the frequency of bequests and suggest that this trend presages far smaller inheritances in the future than previous scholarship had suggested. The reasons are complex:
Life spans rose over this period [from the 1980s to the present]. Since elderly people were living longer, the number of bequests per year declined. . . . As people live longer, their medical expenses might rise as they age and, as a result, less money is transferred to children at time of death. [Also] the share of estates dedicated to charitable contributions might be rising over time. This trend may be particularly characteristic of the rich.
Changes in the economy are altering the social fabric of the United States and other advanced countries in profoundly troubling ways. We have lived through five years of depressed household formation rates and the phenomenon of “boomerang kids” who move back in with their parents because they are unemployed or making too little to afford lodgings, even with roommates. In our urban enclaves, we are seeing a dramatic rise in what experts now refer to as “transitional age youth”—grown, educated young adults who are literally on the street because not only are they unemployed but their families have also been hard hit and are unable to take them back in.15 Household formation rates recovered in 2011 and 2012, but millions of households that normally would have formed from 2007 through 2011 simply never did.
Even the face of panhandling has been altered. Few of us walking around American and European cities can avoid noticing that those sitting with signs and begging for money are not the old, crusty down-and-outs of yore. They are our sons and daughters. Occasionally, I take note of what they are reading while they panhandle; they are not uneducated, either.
The trends discussed in this chapter make for an alarming mix: wealth and income polarization; crime, social stress, and dislocation; and, above all, a demographic mess, in which one generation that expected to retire now can’t (and will live a long time, depleting its wealth), while another generation wishes to work, form families, and thrive, but finds there is no room for it.
And these, of course, are the secondary symptoms of our economic times—the primary ones being a lack of growth, underemployment of available labor (and, therefore, the weak position of labor relative to capital), and the global imbalances themselves. We are confronted, therefore, with the challenge of understanding how to address and remedy those primary symptoms and—in so doing—cure the secondary impacts.