CHAPTER II

RESURRECTING SCROOGE: RHETORIC AND POLICY IN A CULTURE OF CRUELTY

In 1843, Charles Dickens visited Cornwall, in Southwest England. There he encountered children laboring in the tin mines that were the centerpiece of local economic production. The deplorable conditions he witnessed, along with a recently released Parliamentary report that exposed the nationwide scandal of child labor, led him to begin work on a political pamphlet, the title of which was to be “An Appeal to the People of England, on Behalf of the Poor Man’s Child.” Dickens hoped that such a treatise might stir the conscience of the British and move the nation to end such practices as had by then become all too common.1 While at work on the pamphlet, however, Dickens ultimately concluded that his point might best be made within the boundaries of a fictional story. Upon making the switch from political screed to novella, Dickens wrote to one of the Parliamentary commissioners who had issued the child labor report, exclaiming that the story he had in mind would “come down with twenty times the force—twenty thousand times the force—I could exert by following out my first idea.”2

And so began the process by which Dickens’s A Christmas Carol would come to be: as a call to charity and compassion in a nation turned hard and cold by the vicissitudes of Victorian working conditions and Poor Laws, which were intended to wring out every last drop of labor from those at the bottom of England’s class structure while greatly enriching those at the top of it. Beginning in October, Dickens wrote at a frenzied pace, finishing the book in a mere six weeks. He self-published its first run in time for the Christmas holidays, shortly after which it became a literary sensation. To this day, it has never been out of print.

If you are familiar with it, you will doubtless recall one of the story’s early scenes, in which two men enter the business of Ebenezer Scrooge and his former (and recently deceased) partner, Jacob Marley, hoping to procure alms for the poor at Christmas time. It is worth excerpting Dickens here, at some length.

“At this festive season of the year, Mr. Scrooge,” said the gentleman, taking up a pen, “it is more than usually desirable that we should make some slight provision for the Poor and Destitute, who suffer greatly at the present time. Many thousands are in want of common necessaries; hundreds of thousands are in want of common comforts, sir.”

“Are there no prisons?” asked Scrooge.

“Plenty of prisons,” said the gentleman, laying down the pen again.

“And the Union workhouses?” demanded Scrooge. “Are they still in operation?”

“They are. Still,” returned the gentleman, “I wish I could say they were not.”

“The Treadmill and the Poor Law are in full vigour, then?” said Scrooge.

“Both very busy, sir.”

“Oh! I was afraid, from what you said at first, that something had occurred to stop them in their useful course,” said Scrooge. “I’m very glad to hear it.”

“Under the impression that they scarcely furnish Christian cheer of mind or body to the multitude,” returned the gentleman, “a few of us are endeavouring to raise a fund to buy the Poor some meat and drink and means of warmth. We choose this time, because it is a time, of all others, when Want is keenly felt, and Abundance rejoices. What shall I put you down for?”

“Nothing!” Scrooge replied.

“You wish to be anonymous?”

“I wish to be left alone,” said Scrooge. “Since you ask me what I wish, gentlemen, that is my answer. I don’t make merry myself at Christmas and I can’t afford to make idle people merry. I help to support the establishments I have mentioned — they cost enough; and those who are badly off must go there.”

“Many can’t go there; and many would rather die.”

“If they would rather die,” said Scrooge, “they had better do it, and decrease the surplus population.”3

For Scrooge, the answer to the problems of the poor and destitute was simple: ship them off to the workhouses established during that time, where the wretched of England could labor away their unpaid debts; imprison them if they would not work; but by no means extend to such human refuse compassion or charity of any kind. To Scrooge, the poor had it coming. In his estimation, their economic failings merely reflected their far greater moral ones; beggars were beggars for want of industriousness, or acumen, or drive and determination. They were, in the parlance of the modern era, “takers,” not “makers,” and as such should be left to their own devices. And if they died, well then, such passing would merely reduce the “surplus population” of persons greedily thieving all that oxygen from their betters, and especially from men such as Ebenezer Scrooge.

Of course, as Dickens unfolds the story, Scrooge learns the true importance not merely of Christmas but of compassion and kindness more generally. He is visited by Marley’s ghost, who seeks to warn him of the moral error of his ways—ways that Marley himself had all too gladly practiced while alive—and then by three additional ghosts (of Christmas past, present and future) who provide him with visions that cause him to rethink his miserly and caustic manner, and to understand not only the plight and struggle of others but even the sources of his own cold and bitter heart. He is transformed.

Which brings us to the present, 172 years after Dickens. For if Scrooge were merely a fictional character, like so many others typically overdrawn and caricatured, we could perhaps leave him within the pages of his book, only to be dusted off during the holiday season along with other characters of Dickens’s creation, like Tiny Tim. But sadly, the relevance of Scrooge goes beyond the confines of A Christmas Carol. Just as Dickens saw Scrooge, in part, as a representation of his own cruel father, and more broadly of the era’s contemptible attitudes toward the poor held by so many among the affluent, so too must we interpret his more comprehensive meaning for a new era. Unfortunately, many of the calumnies heaped upon the Victorian poor and working class are not unknown in our time. In many ways they are making something of a comeback. And while Dickens himself was clear that Scrooge was the heel, the villain and the bad guy, it appears that in modern America there are some who have missed that small detail, and are essentially seeking to resurrect Scrooge as some great moral philosopher. Even worse, there are many who have institutionalized “Scroogism” as a predatory financial system that both disadvantages the poor and needy and aims to eliminate any real safety net to assist them when the money runs out.

Whereas Dickens intended for readers to be appalled by the cruel and callous soliloquies of Scrooge (and rest assured, they were), we can hear many of the same kinds of things being said in the United States today, which, although updated for modern times, signal a contempt for the poor no less certain than that which animated Dickens’s famous character. And the judgmentalism on display regarding the have-nots goes hand in hand with a valorization of the wealthy, with which Scrooge would have been all too familiar. It is the new “Scroogism” and its historical antecedents to which I now turn.

Past as Prologue: The Origins of Class and Cruelty in America

In some ways, it might actually be too forgiving of America to suggest that Scrooge has been resurrected. After all, to resurrect someone it is necessary for the object of reanimation to have first died. Yet, if anything, Scroogism has been the norm for most of American history, interrupted by occasional bouts of compassionate reform, but never fully discarded. If we think of various historical moments—from the social gospel movement of the early 1900s to the New Deal of the 1930s and the Great Society of the 1960s—as the social policy equivalents of religious reformations, let us be clear that most of the nation’s history has been marked by the social policy equivalent of the Inquisition.

Indeed, blaming the poor for their condition has been a long-standing tradition. As Georgetown professor of law Peter Edelman notes:

Beginning with the Bible and continuing through the Elizabethan poor laws, throughout history there has been an instinctive belief among some that the poor have no one to blame but themselves. A special version of this illusion exists in the United States, the Horatio Alger mythology that one makes it (or doesn’t) on his or her own. The pioneer spirit and rugged individualism—values to be admired on the whole—contribute to the American version of the “blame the poor” story.4

Frankly, the poor were always especially troubling to Protestant zealots, and never more so than when engaged in public begging. As Max Weber notes in The Protestant Ethic and the Spirit of Capitalism, “Begging on the part of one able to work, is not only the sin of slothfulness, but a violation of the duty of brotherly love.”5 Protestant leaders like John Calvin and Martin Luther believed that poverty was evidence of sin and that the poor deserved neither charity nor public forbearance; and this they insisted upon even as the proliferation of the poor in Europe stemmed directly from the private and forcible enclosure of public lands, which drove previously self-sufficient farmers from their livelihoods. In other words, even when systemic factors beyond the control of the poor were responsible for rising destitution, church leaders found fault with those in need. Throughout the middle of the last millennium, Europe increasingly developed means of punishment and public degradation for the poor, from whippings to debtors’ prisons, all of which were thought to help cure the character deficiencies from which the destitute were believed to suffer. Central to England’s harsh treatment of the poor was a belief that it was only the threat of crushing destitution that could possibly encourage them to work. As British physician and clergyman Joseph Townsend put it in 1786:

The poor know little of the motives which stimulate the higher ranks to action—pride, honour and ambition. In general it is only hunger which can spur and goad them on to labour.6

Once in the colonies, political and religious elites continued their harsh rhetoric and treatment of the needy and impoverished. Preachers like Cotton Mather insisted that when it came to the unemployed, the proper policy was to “let them starve.”7 In the eighteenth century, workhouses for the poor, as well as what were called “bettering” houses (in which not only hard work but moral instruction was prescribed), spread throughout the colonies. As with their European counterparts, these institutions were designed to be misery-inducing places, so undesirable as to convince even the laziest of the poor to take any job, no matter how lowly, so as to avoid them.8 Any forms of actual monetary relief for the poor—of which there were few, either in Europe or the colonies—were set at such a level as to be well below the lowest wage available in the workforce. In this way, it was thought, the poor would take jobs no matter how miserable, as doing so would still ensure they were better off than if they relied on cash relief.

By the nineteenth century, behavioral pathologies such as laziness or alcohol abuse were the presumed culprits for poverty. Armed with such presumptions about the poor, policymakers established little in the way of safety nets to catch those in need.9 Regular moralizing about the vice of poverty and the virtue of wealth was commonplace. In the years after the Civil War, Russell Conwell—who was a minister, author, graduate of Yale Law School, and a founder of Temple University—became famous for a lecture he would deliver thousands of times nationwide. Called the “Acres of Diamonds” speech, its message was simple: anyone can get rich if they try. As Conwell put it:

I say that you ought to get rich, and it is your duty to get rich . . . The men who get rich may be the most honest men you find in the community . . . That is why they are rich . . . I sympathize with the poor, but the number of poor who are to be sympathized with is very small. To sympathize with a man whom God has punished for his sins . . . is to do wrong . . . Let us remember there is not a poor person in the United States who was not made poor by his own shortcomings. . . .10

In keeping with the notion that the poor were to blame for their plight, “outdoor relief”—basically, public assistance outside the confines of a workhouse—was eliminated in almost all major cities of the United States in the 1870s, due to a growing belief that “indiscriminate charity” indulged the bad habits of the poor and rendered them incapable of personal betterment. Even in the aftermath of the greatest economic crisis the nation had seen to that point—the Depression that began in 1873—it was common to hear condemnations of any kind of relief for the poor. In Chicago, a relief organization that had been established in the wake of the great fire two years earlier refused to disburse the $600,000 in its coffers to persons who were out of work because of the downturn. Its director insisted that unemployed men “loafing around the streets” could find a job easily were they “not too lazy to look for it.”11 This kind of thinking dovetailed directly with the desires and interests of industrial aristocrats. The business class sought to limit or end government support for the poor because they increasingly needed low-wage workers to stoke the engines of their own profitability. If the poor and desperate had alternatives to low-wage and dangerous labor, industrialists feared their business interests would suffer.12 To make the wealthy few richer, it was necessary that others be made and kept destitute.

The Reformation: From Social Gospel to the New Deal and Beyond

By the latter decades of the nineteenth century, however, a kind of reformation was beginning to take hold in the form of the social gospel movement. Although the movement encompassed a broad range of theologians who differed as to their public policy preferences, the uniting strand of the social gospel was the idea that Christians should apply religious morality to social problems and involve the church in addressing many of the pressing issues of the day, including poverty and the exploitation of workers.

Although instrumental to the progressive movement of the late nineteenth and early twentieth centuries, the new and emerging liberal Christianity still harbored elements of judgmentalism. Preachers of the social gospel stressed the need for the poor to live moral and sober lives, and sought to establish institutions that would instruct them as to proper work habits and lifestyles and thereby improve them, much as the workhouses had claimed to do. But whereas the workhouse movement had been rooted in a belief that the poor were solely to blame for their condition, social gospel theologians acknowledged that the institutional forces of industrial capitalism were creating massive social and moral dislocations that required public action, and especially the attention of committed and affluent Christians. In Progress and Poverty, economist and social gospel thinker Henry George criticized wealthy churchgoers for sitting comfortably in finely appointed pews while exhibiting little concern about “the squalid misery that is festering but a square away.”13 Although it must be noted that George was a vicious racist whose repugnant broadsides against Chinese labor marked him as firmly committed to a whites-only vision of economic justice, he stands as an example of the developing consciousness around the gaps between rich and poor.14

Theologian Walter Rauschenbusch penned perhaps the most significant articulation of social gospel thinking in his 1907 book, Christianity and the Social Crisis. Therein, he disputed the commonly held notion that “religion is purely personal; or that God is on the side of the rich,” and argued that Christian civilization was obligated to fight inequality, poverty and the abuse of workers, among other injustices.15 By the end of the first decade of the 1900s, almost all of the mainline Protestant denominations had adopted the “Social Creed of the Churches,” which called for an end to child labor, the creation of disability insurance, and the shortening of the workweek. Speaking for Catholics, Pope Leo XIII’s 1891 encyclical, Rerum Novarum, laid the basis for social justice activism among American Catholics by calling for a more humane capitalism, including support for labor unions.16 Although the Pope opposed government-mandated redistribution of the wealth that had become so concentrated in the hands of a few—he preferred redistribution motivated by Christian compassion and charity—his view of the rich could hardly have been less charitable. As he explained it:

The whole process of production as well as trade in every kind of good has been brought almost entirely under the power of a few, so that a very few rich and exceedingly rich men have laid a yoke almost of slavery on the unnumbered masses of non-owning workers.17

During this period, Christian churches helped establish settlement houses intended to Christianize residents as well as to enrich their intellectual, academic and cultural lives. Others, like Jane Addams, created secular settlements such as Hull House in Chicago, the purpose of which was to enhance opportunities for working-class women to learn literature, history and art, among other subjects. Hull House residents also learned to conduct social science research in the surrounding neighborhood concerning the social dynamics of inequality, especially as it affected recent immigrants.18 Though even here the politics and philosophy of the settlement houses were complicated—Addams, for instance, believed that women’s proper role was in the home, caring for children, rather than working to help support a family—the efforts were nonetheless evidence of a slowly liberalizing attitude toward persons in need.19

But despite the efforts of social gospel activists, there were still few government-sponsored programs for the poor and unemployed during the early part of the twentieth century, and those that did exist tended to operate at the state or local level. It was only after the onset of the Great Depression, when millions had been thrown into destitution by the collapse of the American economy, that the federal government began to establish broad-based programs to support those in need. These programs included cash-based income support (originally known as Aid to Dependent Children, or ADC) as well as large-scale public housing initiatives, public works programs like the Civilian Conservation Corps (CCC) and Works Progress Administration (WPA), retirement insurance in the form of Social Security, and other government interventions in the economy, like the minimum wage, all intended to promote economic recovery, lessen the extremes of impoverishment, and generally promote the national welfare.

And the evidence makes clear that such efforts succeeded: government programs to put people back to work on any number of important infrastructure and community improvement projects not only stabilized the economy but also crafted a sense of mutual aid and national purpose. Although the programs of the New Deal were hardly as inclusive as they should have been—persons of color were discriminated against in public works efforts, home loans and cash assistance, and the programs tended to prioritize the employment needs of men over those of women—the general tenor of the times was that government had a direct role to play in addressing joblessness and improving the economic health of the country. As a 2010 report from the Urban Institute reminds us:

The WPA (Works Progress Administration) achieved remarkable scale by putting more than 3 million unemployed Americans back to work at its peak in 1938. Its most enduring legacy is found in its contributions to the nation’s infrastructure. Under the program, the nation built or reconstructed 617,000 miles of new roads, 124,000 bridges and viaducts, and 35,000 buildings. It also financed a wide array of other labor-intensive work projects, including the construction of sidewalks, street curbs, school athletic fields, parks, playgrounds, and landing fields as well as national landmarks such as the Philadelphia Art Museum and New York City’s Central Park Zoo and LaGuardia Airport.20

Although the business class opposed virtually every one of these government initiatives, and for the very same reasons one still hears today—they amounted to intrusive interventions in the free market, they raised the cost of doing business, and they elevated tax rates—for the most part, these efforts proved popular. Roosevelt would be elected four times (a feat no longer possible thanks to the Twenty-Second Amendment to the Constitution), in large measure because of widespread support for his economic policies. In other words, beginning in the early 1930s, American “Scroogism” was on the ropes, discredited by a capitalist economy that had proven incapable of producing acceptable levels of access to opportunity and mobility for the general population.

Additionally, labor militance during this period boosted the number of workers in trade unions as well as support for unions among the American people. The threat of mass strikes and revolutionary organizing was sufficiently frightening to the ruling class that many of their number—despite their dislike of New Deal policy—relented and came to accept the emerging social contract. After all, in the minds of the financial aristocracy, social programs that reformed capitalism were preferable to a revolution that might end it outright.21 Mindful of the Russian revolution of 1917, and afraid that socialist upheaval in the United States might lead to a similar overturning of the social order here, capitalists embraced a two-pronged program to allow for reform but to ensure their continued hegemony over the nation’s economy. First, they backed overt political repression—for instance, crackdowns on socialist and communist organizing, and violence against militant union efforts—and second, they grudgingly accepted the broad contours of the American version of the limited welfare state.

Support for government social programs and state intervention in the economy would remain relatively strong throughout the period following World War II. Bolstered by the concrete benefits such efforts afforded—the low-interest loan program created by the Federal Housing Administration helped produce the white middle class,22 and the G.I. Bill provided concrete job and educational opportunities to returning (especially white) soldiers23—the welfare state enjoyed widespread support. The relatively high taxes levied upon the upper middle class and the affluent to fund such efforts likewise remained largely uncontroversial throughout this period. As mentioned previously, both major political parties generally accepted the notion of an activist state, intervening on behalf of working people and families. There was no “tax revolt” movement, no Tea Party screaming about being “taxed enough already” and no broad-based backlash to “big government,” despite the fact that taxes throughout the 1950s were always two to three times higher on most taxpayers than they are today.24 Except in the minds of certain persons on the far right, those in libertarian circles, or novelists and itinerant pseudo-philosophers like Ayn Rand (who believed that any intervention by the state in the workings of the free enterprise system amounted to tyranny), the general notion that government had an obligation to ensure a modicum of opportunity was taken as a given. But soon the reformation would give way to a retrenchment, in which some would seek a restoration of the prior order. This restoration, pushed for by a business class and a conservative movement beholden to it, would begin the resurrection of Scroogism in the modern era.

The Restoration: Backlash, Reaganism and the Liberal Capitulation

By the mid-1960s, during the height of the American civil rights movement, attention began to shift from some of the more basic demands of that struggle—like desegregation of schools, voting rights and anti-discrimination protections in the workplace—to bread-and-butter economic justice matters like jobs, housing and economic development in marginalized communities of color. Not content to accept integration into what even Martin Luther King Jr. came to see as a “burning house,” racial justice activists demanded higher wages, community empowerment, fair housing laws and an assault on poverty as the next stage of the freedom struggle.25 So too, new attention to the mostly white poor of the Appalachian region focused national eyes upon the ongoing problem of communities living in poverty and hunger.

President Lyndon Johnson was forced to address these matters, and the escalating anger in the nation’s cities, which spilled over into open rebellions throughout most of his presidency. Ultimately, he succeeded in pushing through a number of programs under the rubric of fighting a “war on poverty.” These efforts, part of what became known as the “Great Society” initiative, went beyond merely expanding pre-existing social welfare programs such as food stamps, public housing and cash assistance. In addition to these older efforts, the Johnson years witnessed the establishment of Medicare and Medicaid to ensure some degree of health care security for the poor and elderly, as well as community development initiatives, pre-school education programs, and other efforts intended to tackle persistent urban poverty. Although these efforts proved largely successful in a short time—contrary to popular perception, as we’ll discuss below—it was during this time and shortly after that backlash to the so-called welfare state began to flower.

Whereas government initiatives on behalf of the poor and unemployed had remained popular for roughly three decades, by the early 1970s, discontent over such programs was growing. When he ran for the Republican presidential nomination in 1976, former California Governor Ronald Reagan regularly capitalized on that souring public mood toward welfare with various stories of fraud and abuse in government antipoverty programs. Although many of the stories he told were as fictional as the movies in which he had once starred (including a claim about a lavish public housing project with a gym and a swimming pool), they were political dynamite, playing upon growing resentments about supposedly lazy welfare recipients who were collecting handouts while hard-working taxpayers struggled to make ends meet. The racial subtext of these appeals was hard to miss, in part because by then welfare programs had generally been racialized in the white imagination by media representations of the urban poor, but also because Reagan telegraphed that subtext in ways that were hardly subtle. His most notorious story involved a “woman from Chicago,” who, according to Reagan:

. . . used 80 names, 30 addresses, (and) 15 telephone numbers to collect food stamps, Social Security, veterans’ benefits for four nonexistent deceased veteran husbands, as well as welfare. Her tax-free cash income alone has been running $150,000 a year.26

Later in the campaign, Reagan would boost the presumed profligacy of her fraudulent ways by insisting she had been operating in fourteen states using 127 names and fifty addresses “in Chicago alone.” According to Reagan she also had “three new cars [and] a full-length mink coat, and her take is estimated at a million dollars.” While the woman in Reagan’s story (whom he identified by name in some speeches as Linda Taylor)27 was not entirely fabricated—Taylor had indeed been charged with welfare fraud a few years prior—he grossly exaggerated the extent to which she had bilked the taxpayers. Ultimately, Taylor would be found guilty of having scammed a total of $8,000 in cash welfare benefits, rather than $1 million; and rather than eighty names (let alone 127) used to defraud the government, she had used four bogus aliases to do so.

But the facts didn’t matter to Reagan or to a public predisposed to believe just about any story they were told about persons on public assistance. All the better if those persons were designated as being “from Chicago”—a large urban area with lots of black folks in it—as opposed to a place like Charleston, West Virginia, where there were no doubt also a few folks gaming the system about whom he could have spoken, but whence the story would not have had nearly the same political impact. Importantly, the fact that Taylor had been caught in her deceits suggested that the larger welfare system of which she had been a part was not as broken as Reagan had claimed. It was the way in which her actions stood out from the norm that had made them newsworthy. But to Reagan and those of his mindset, there was nothing wrong with turning someone like Taylor into a stereotype for welfare recipients more broadly, nor was there anything untoward about using her as a prop in his campaign against them.

By the same token, Reagan once told a tale of “strapping young bucks” buying T-bone steaks with food stamps: a phrase calculated to conjure images not only of welfare fraud, but fraud specifically committed by black men, as the term had long been a well-understood Southern euphemism for physically imposing African American males.28 Though Reagan and his supporters would deny the racial coding behind the images he crafted, it was hard to escape the conclusion that, at least implicitly, Reagan was hoping to play upon white anxieties about urban blacks in the post–civil rights era, at a time when resentment about the gains of the 1960s was reaching a fever pitch.

After being elected president, Reagan succeeded in slashing spending on public housing initiatives as well as cash welfare and food stamps, and he continued cuts in other Great Society initiatives that had been on the chopping block since the presidency of Richard Nixon.29 By the time Reagan left office in 1989, most programs had survived, but the real dollar value of benefits had been slashed to the point that they were less capable of boosting the living standards of the poor beyond mere subsistence. Indeed, according to Reagan’s first budget director, David Stockman, Reagan’s early policies—massive tax cuts on the wealthiest Americans, combined with a huge buildup of the Pentagon budget—were calculated to produce such a substantial budget deficit that Congress would be forced to cut safety net programs.30 The deficit was made to balloon so that those cuts could then be made in the name of a balanced budget rather than the ideological mindset that truly undergirded them.

Reagan succeeded in reducing the size of antipoverty initiatives in part because of his uncanny ability to put forward a cohesive narrative—a story religiously scripted by the conservative movement dating back to the crushing defeat of Barry Goldwater in 1964—which portrayed the poor and those receiving assistance as undeserving, and as persons rendered lazy by an overindulgent federal government. The idea that there was now a “culture of poverty,” especially in urban communities of color, became conventional wisdom. Originally conceived by anthropologist Oscar Lewis in his study of poor communities in Mexico,31 the culture of poverty thesis chipped away at the structuralist theories that had been used to explain inequality, impoverishment and social marginality since the Great Depression. Whereas the previous several generations had largely accepted the notion that families became poor because of circumstances beyond their control, earlier notions that placed the onus of responsibility on the poor themselves had now re-emerged in full force. Books such as George Gilder’s Wealth and Poverty and Charles Murray’s Losing Ground, both of which insisted that government antipoverty programs had created dependence and engendered all manner of social ill, became policy bibles for the Reaganites. Invigorated by this traditional “blame-the-victim” mentality (as it was termed in the early 1970s by psychologist William Ryan),32 conservatives set about to dismantle much of the existing welfare state, emboldened by a public (especially a white public) that had increasingly turned against the very kinds of programs that only a generation before had proved popular.

It is hard to exaggerate how effective the conservative narrative has been in terms of its impact on the national consciousness. First, backlash to the welfare state has persuaded large swaths of the American public that antipoverty programs have been monumental failures and that such programs are to blame for virtually every social problem imaginable, even though the evidence debunks each of these notions. Second, the power of the reactionary narrative has proven so substantial as to force even erstwhile liberals to abandon any focus on fighting poverty as one of their principal concerns. From Bill Clinton to Barack Obama, Democratic Party presidential candidates and the party itself have largely gone silent on the concerns of the poor, rarely mentioning them on the campaign trail, choosing instead to speak of their desire to help the “middle class.” For most politicians, the poor are an afterthought—or worse, sacrificial lambs to be offered up for political slaughter.

In 1996, President Clinton signed into law a welfare bill that substantially reduced benefits for millions of families based almost entirely on conservative “culture of poverty” notions.33 Among the changes: recipients of cash aid were limited to two consecutive years of assistance or five years over the course of their lifetime, regardless of local economic conditions; benefits were slashed for children born after the initial receipt of assistance; and most important, automatic eligibility was terminated. Whereas eligibility for cash aid (then known as AFDC, or Aid to Families with Dependent Children) had been automatic for families below poverty prior to reform, after reform, aid was distributed in block grants to the states based on the amount of funding those states had been disbursing as of 1996. Even if more families found themselves in need due to worsening economic conditions, the amount of available cash assistance—now called TANF (Temporary Assistance for Needy Families)—would basically be frozen at 1996 levels, creating an incentive to disallow new cases and to cut the rolls, regardless of whether work opportunities were available. Since that time, cash welfare rolls have plummeted from over thirteen million to only 3.6 million today. Although states could have taken the surplus money that was left after cutting so many from the program, and perhaps plowed those funds back into other job or education initiatives intended to address economic inequity, few have done so. Instead, most have taken the savings and diverted them into other, often unrelated programs. Few, if any, of the benefits were passed on to the needy.34 Although the early years after passage of the reform bill brought praise from many quarters as the number of recipients fell and work rates for single moms increased, once the economy soured, those signs of progress and promise evaporated, much as reform critics had predicted they would.35

Despite significant reductions in the number and percentage of Americans receiving assistance after the 1996 reform, the narrative of welfare abuse, dependency and the “culture of poverty” has continued as if nothing had changed. Likewise, harsh judgments about the poor and struggling have remained the norm, even as the economy fell apart, leaving millions in conditions over which they hardly had control. Even in the midst of the recession, with millions out of work and wages stagnant, rhetoric aimed at discrediting government intervention to help those in need could be heard regularly. In the middle of the housing crisis, as families were losing their homes by the hundreds of thousands—many after having been roped into financial instruments like adjustable rate mortgages that had blown up—CNBC commentator and recognized “Godfather of the Tea Party movement” Rick Santelli launched his now-famous rant in which he berated the “losers” who wanted the government to come to their rescue.36 Bail out the banks? Of course. Bail out the homeowners from whom the banks had extracted all that money? Not a chance. The poor and those losing their homes were, to the Rick Santellis of the world, victims not of the economic system or predatory lenders, but of their own cultural and intellectual deficiencies. Radio talk show host Bill Cunningham expressed the typical conservative belief about the poor on his program in 2008 when he claimed: “People are poor in America . . . not because they lack money; they’re poor because they lack values, morals, and ethics.”37

Even President Obama has fed culture-of-poverty notions through his rhetoric and public policy pronouncements. For instance, on more than one occasion he has implored black fathers, and only black fathers, to take “personal responsibility” for their children—the presumption being that they are relieved of this duty thanks to government programs.38 In 2013, he even lectured the black male graduates of Morehouse—one of the nation’s finest schools—not to blame others for their shortcomings and to “be responsible.”39 That anyone, least of all the president, would think Morehouse men need to be cajoled into hard work merely suggests how deep conservative thinking about the culture of poverty truly runs—and especially with regard to its racialized component.

In some ways, this liberal capitulation to culture-of-poverty thinking has been a long time coming. Ever since the 1965 release of Daniel Patrick Moynihan’s report on the “state of the black family,” which suggested there was a culture of deviance in the “urban ghetto” that was perpetuating black poverty, many liberals have been given to viewing impoverished communities, and especially those in urban centers with high concentrations of families of color, through a lens of group defect.40 Moynihan was a devoted Democrat, an adviser to Lyndon Johnson who helped design many of the Great Society programs for which Johnson would become known; yet, as with many white liberals, Moynihan found it easier, ultimately, to view people struggling with poverty as the problem, rather than the people and system perpetuating their impoverishment. And he certainly found it easier to seek to “fix” those same poor people, rather than attempt to seriously transform or radically alter the economic and social realities that had come to normalize conditions of injustice and poverty.

Bashing the War on Poverty: The Presumption of Failure, The Reality of Success

It is widely believed—to the point of being very nearly a matter of secular political faith—that antipoverty initiatives have been a massive failure. After all, since the 1960s, hundreds of billions (even trillions) of dollars have been spent on such efforts, and yet the poor are still with us, and the percentage of persons in poverty isn’t much lower today than it was in the early 1970s. But to claim that we fought a war on poverty “and poverty won,” as Reagan often quipped, overlooks the evidence suggesting that safety-net programs lessen hardship for millions. From 1959 to 1973, during which period programs like food stamps and cash assistance were dramatically increased and entirely new programs (including Medicare, Medicaid and President Johnson’s urban empowerment initiatives) were developed, the percentage of Americans living in poverty was cut in half, from 22.4 percent to only 11.1 percent.41 This included a reduction in African American poverty from just over fifty-five percent of all blacks in 1959 to slightly more than thirty-three percent by 1970.42 Although social programs were not the only factor driving the reduction in poverty during this period—the economy was also undergoing stronger than average growth—such a decline in the poverty rate certainly suggests that safety-net programs played a role and goes far towards debunking the idea that such efforts were counterproductive or kept recipients “locked in poverty.” To insist, as some have, that welfare programs have made African Americans worse off than under segregation (or even slavery)43 is not only to grotesquely diminish the horrors of those systems, but to demonstrate a profound and undiluted ignorance about the actual effects of antipoverty initiatives.44 It is to suggest that black folks were better off with poverty rates that were far higher, not to mention lower graduation rates, higher rates of hunger, and worse health outcomes—all of which were realities in the years prior to the supposedly horrible government programs about which conservatives have such fits.

Secondly, to claim that antipoverty efforts don’t work because poverty rates have barely budged lately, regardless of program spending, ignores the way that poverty rate information is tabulated. When calculating income, government benefits like SNAP and the refundable portion of the Earned Income Tax Credit—both of which boost the income and living status of those who receive them—are not counted as income. This creates the appearance that the programs “don’t work,” because those receiving benefits from SNAP or the EITC (or housing benefits) will still be poor in official tables, even though they may actually be living at a level equal to those with an above-poverty income. So despite the fact that the programs actually have improved the lives of millions of people, they receive no credit for having done so and come to be seen as failures that should be scrapped or scaled back. If they were included in government tabulations of poverty, these programs would reveal themselves to be quite successful. If SNAP benefits were counted as income, four to five million fewer people would have been categorized as poor in 2013—roughly a twelve percent reduction in poverty from just that one program.45 Likewise, another three million or so would have been removed from the poverty category by the EITC that year. There are also many people who are not counted as poor today but who would be if it were not for the existence of antipoverty efforts and forms of income support. For instance, there would have been nearly two million more people in poverty in 2012 had it not been for unemployment insurance benefits, which are counted as income for the purpose of tabulating government data on income and poverty rates.46

Of course, it’s not only the raw financial benefit of safety-net programs that matters. More important, these programs meet the specific needs for which they were created, and far better than most realize. For instance, when evaluating the success or failure of the food stamp (SNAP) program, the primary issue is not whether this program eliminated or even reduced poverty. First, because benefits are not counted as income, it cannot have much effect on the official poverty rate. Second, the program was not intended to end poverty, but rather to improve the food and nutritional security of poor people, thereby blunting the most extreme conditions of poverty. So the primary matter is whether the program worked on those terms, and the literature in this regard is quite unambiguous. According to a study for the National Bureau of Economic Research, access to the food stamp program has improved childhood nutrition in particular, thereby contributing to substantial reductions in obesity, high blood pressure and diabetes among recipient households.47 Access to food stamps has also been correlated with an eighteen percent boost in high school graduation rates, likely due in large part to better nutritional health provided by access to the program, and its corollary effect on academic performance.

Likewise, the Earned Income Tax Credit should be judged not on whether it eliminated poverty—again, because benefits are not counted as income for the calculation of the official poverty rate, by definition, it cannot accomplish this task—but whether it achieved its more limited purpose of “making work pay” by subsidizing low-wage employment, since one can only get the benefits by having a job. A second and related matter is whether or not the EITC helps reduce reliance on other benefits like cash welfare. When it comes to these and related matters, the EITC scores well: EITC expansions are credited with being the most important factor in boosting work by single mothers from 1993 to 1999, as well as the key to reductions in traditional welfare caseloads. In fact, the EITC was a much bigger contributor to employment by low-income single moms and substantial reductions in cash welfare rolls than even the strict time limits and other punitive elements of the welfare reform legislation that were passed for those purposes.48

And finally, public health care benefits are best judged not by their ability to reduce poverty per se, but by how well they do what they are intended to do: namely, improve the health outcomes of persons who would otherwise go without care. Medicaid expansion in the 1980s and 1990s, for instance, is credited with reducing childhood deaths among poor kids by more than five percent, as well as reducing infant mortality and low birth weight among babies born to poor moms by 8.5 and 7.8 percent respectively.49 Although those children may still be poor, it is worth noting that they are, importantly, still alive—an outcome that would be considered a victory by most, and yet which prompts no such accolades from those on the right for whom such successes are apparently trivial.

Victim Blaming, Poverty Shaming and Culture Defaming in Modern America

Beyond the all-too-common belief that antipoverty programs don’t work, there is a far more pernicious narrative about which compassionate Americans should be concerned. It is a narrative that not only calls into question the practical efficacy of such efforts, but seeks to demonize those who rely on them. Those who craft the rhetoric of modern-day Scroogism do so by way of three principal devices: first, by expressing blatantly dehumanizing views about poor people, the unemployed and those on various forms of public assistance; second, by way of poverty denialism (essentially the idea that the poor and unemployed don’t really have it that bad); and finally, by way of the “hammock theory” of government aid, which purports to prove that welfare programs are so generous they create long-term dependence and contribute to a culture of poverty that subsidizes irresponsibility and perpetuates impoverishment. Let’s look at these one at a time.

The Rhetoric of Hate: Dehumanizing and Humiliating the Poor

When it comes to the poor and struggling, not only are many on the right hostile to various programs intended to help these groups, they are increasingly hostile to the poor themselves. The aforementioned rant by business journalist Rick Santelli, in which he referred to those who were facing foreclosure due to the implosion of the housing market as “losers,” is, sadly, par for the course nowadays. Tea Party activists and political candidates like Nevada’s Sharron Angle insist that the unemployed are all “lazy welfare queens”50 who, according to still others, need to be forcibly placed in labor camps where they will have to work for free and be taught personal hygiene—a proposal seriously floated by Carl Paladino, the Republican candidate for governor of New York in 2010.51 Rush Limbaugh asks listeners, as if the answers were self-evidently negative, if they “know any low-income people who actually want to get a better job?” and wonders, “Do they even want to work?”52 Most recently, Speaker of the House John Boehner (who has said he would commit suicide before voting to increase the minimum wage unless said increase were tied to massive tax cuts for the wealthy and their corporations)53 suggested that what’s been holding back job creation in America is not the lack of employment openings but “this idea that has been born . . . that you know, I really don’t have to work . . . I think I’d rather just sit around.”54 According to conservative leaders, many Americans actually enjoy long-term unemployment.

For some on the right, it’s not just the unemployed whom we should scorn but also those who work, if they get minimum wage. Conservative commentator Erick Erickson expressed contempt for low-wage workers recently when he claimed: “If you’re a 30-something-year-old person and you’re making minimum wage, you’ve probably failed at life.”55 In other words, an adult who works at minimum wage trying to support his or her family, and who no doubt hopes for something better, is really just a loser to whom we should offer nothing but derision. Likewise, even if you work full-time and make a solid middle-class income but happen to work for a nonprofit organization—for instance, the United Way or Habitat for Humanity—you are deserving of repudiation in the eyes of Rush Limbaugh. According to Limbaugh, who apparently believes all “real work” is work that seeks to make a profit, nonprofit employees are “lazy idiots” who are no different from “rapists in terms of finance and economy.”56 Or if you are a government employee making a decent living, paying your taxes and stimulating the economy, you’re still a parasite according to FOX’s Stuart Varney, who has said of government workers furloughed during the government shutdown of 2013, “I want to punish these people.”57

As for Americans living in poverty, Limbaugh has likened them to wild animals that become dependent on others and forget how to feed themselves if they receive any form of assistance.58 He has also compared children living in impoverished families to puppies who will never bond with their owners (or parents) if fed by another, such as a school through a breakfast or lunch program.59 Yes, because children—none of whom, it should be noted, Limbaugh actually has, and with whom he has virtually no experience whatsoever—are exactly like cocker spaniels. Limbaugh has suggested that if poor kids, whom he refers to as “wanton little waifs and serfs dependent on the state,”60 have trouble finding food at home during the summer break:

. . . there’s always the neighborhood dumpster. Now, you might find competition with homeless people there, but there are videos that have been produced to show you how to healthfully dine and how to dumpster dive and survive until school kicks back up in August.61

Fellow talk show host Sean Hannity has said much the same thing, comparing persons on public assistance to animals who will no longer remember how to feed themselves if we continue to support them with programs like SNAP.62 Likewise, right-wing commentator Ann Coulter insists that welfare programs create “generations of utterly irresponsible animals.”63 And if you’re wondering what kind of animal conservatives have in mind when they call the poor and those on public assistance such names, conservative activist, musician and avid hunter Ted Nugent—a man who can make even a pacifist wish that deer could shoot back—will gladly make it plain: according to Nugent, persons who receive benefits from the government are “gluttonous, soulless pigs.”64 Others insist they are essentially swamp-dwelling reptiles, as Republican congressman John Mica put it in 1996 during debate over that year’s welfare reform bill, holding up a sign on the House floor that read DON’T FEED THE ALLIGATORS, and insisting that providing assistance to poor women would encourage them to have more children so as to get more “free handouts.”65

Not to be outdone, conservative author and talk-show host Neal Boortz, who has compared the poor to the “toenail fungus” of America,66 came up with particularly vicious ways to refer to the poor of New Orleans after Hurricane Katrina. In the wake of that catastrophe, in which more than a thousand people died and tens of thousands were displaced, Boortz referred to the city as a “city of parasites” and those who lived there as “garbage.” On one particular episode of his radio program, Boortz referred to those who were displaced as “complete bums, just debris.” He then went on an extended rant that was premised on entirely inaccurate perceptions of the city’s poor (as discussed in the introduction) but that provided a disturbing window into the soul of modern conservatism. Responding to those who implored us to hear the anguish of those displaced by the flooding, Boortz retorted:

That wasn’t the cries of the downtrodden; that’s the cries of the useless, the worthless. New Orleans was a welfare city, a city of parasites, a city of people who could not and had no desire to fend for themselves. You have a hurricane descending on them and they sit on their fat asses and wait for somebody else to come rescue them. . . . You had a city of parasites and leeches.67

As much as we might hope such vitriol would find little fertile ground in which to take root, the evidence suggests hostility to the poor is easily internalized in a culture where such contempt is so common. Research by Princeton psychologist Susan Fiske has found that when hooked up to brain scan imaging machines and shown pictures of poor people or the homeless, large numbers of subjects react the same as if they had been shown pictures of things as opposed to people: a common sign of revulsion and lack of empathy.68

The lack of empathy evident in Fiske’s lab experiments can also be observed in everyday real-world settings. Consider the results of one disturbing experiment recently conducted by a filmmaker in Austin, Texas. The filmmaker and a homeless man there named Sandy Shook went to a local thrift shop and purchased a blazer, slacks, and dress shoes for Shook. Shook then stood on the street and asked passersby for spare change to help pay for bus fare or, alternatively, for his Subway sandwich. Invariably, people would stop and gladly interact with Shook and give him the change he requested. Then Shook tried the same experiment dressed as he normally is, in an old T-shirt and dirty jeans. The results were the opposite: people routinely passed him by, refused to give him change and in at least one case shouted “No!” at him even before he had asked for money.69 In other words, giving money to someone who looks as though he wouldn’t normally need it is far easier for most than giving to someone who looks like he does. It’s as if the decision to give isn’t based on need so much as a judgment of the moral deservingness of the person doing the asking.

Elsewhere, evidence of callousness to the homeless is even more blatant. As just one example, Hawaii state representative Tom Brower proudly goes hunting for homeless people who have filled shopping carts with their meager belongings; upon finding them, Brower, who says he’s “disgusted” by the homeless, smashes their carts with a sledgehammer.70 Even in relatively “liberal” San Francisco, the city’s main Catholic church has installed a sprinkler system to drench homeless folks who occasionally sleep in the doorways.71 And recently, Alaska Congressman Don Young suggested that if wolves were introduced into communities where they weren’t currently to be found, those areas “wouldn’t have a homeless problem anymore.”72 It is no doubt this kind of visceral contempt that animates the recent rise in hateful assaults upon the homeless around the nation. Up by more than twenty percent just between 2012 and 2013, such attacks are becoming more brazen, including, most recently, the attack on a fifty-eight-year-old man in Ventura, California, who was set on fire by three young white men with shaved heads, resulting in second- and third-degree burns over his entire body.73

Part of the disgust felt by many toward the poor apparently stems from a sense that those in need lack sufficient humility. Commentator Bill O’Reilly, for instance, has openly advocated shaming the poor as a solution to the problem of poverty. In June 2004 he explained on his radio show that Ronald Reagan was too nice and not willing to be tough and nasty with the poor—especially the black poor. According to O’Reilly:

Reagan was not a confrontational guy, didn’t like confrontation, much rather be your pal . . . doesn’t want to get involved with the really nasty stuff, the tough stuff, and that’s what racial politics is—nasty and tough . . . you gotta look people in the eye and tell ’em they’re irresponsible and lazy. . . . Because that’s what poverty is. . . . In this country, you can succeed if you get educated and work hard. . . . You get addicted, you don’t know anything, you’ll be poor. But Reagan did not want to confront the issue.74

Far from an outlier, O’Reilly is par for the course among right-wing commentators. For the right it isn’t enough that the poor should be poor; rather, they should be humiliated by their economic condition, essentially ashamed to look at themselves in the mirror. Recently, FOX Business contributor Charles Gasparino lamented that when it comes to income assistance or housing aid, “the stigma is gone about accepting that check,”75 and Rich Lowry of the nation’s most prominent conservative magazine, National Review, says it’s “a disgrace” that the stigma of “being on the dole” has eroded, as if to suggest that what the poor have too much of is pride, and what they need is more shame to add to their economic deprivation.76 Evidently, shame has long been known to cure poverty.

Other conservative commentators have pushed for fingerprinting food stamp recipients and suggested that resistance to such a humiliating requirement, which essentially presumes that the poor are criminals, is part of the left wing’s unjustified “war on shame.”77 Still others have blamed the switch from stamps to Electronic Benefits Transfer (EBT) cards—which allow beneficiaries to feel less conspicuous, since the EBT card functions like a debit card—for reducing the stigma of receiving assistance, thereby boosting enrollment.78 Others self-righteously bray about not accepting EBTs as payment in their establishments and are praised for refusing to do so. As yet another FOX contributor recently put it, “Why can’t we make someone feel embarrassed” for receiving public assistance?79 In each case, the implicit assumption behind such rhetoric is that humiliation, not food, is the commodity of which the poor need more.

Even poor children should not be spared the lash of public humiliation for their condition. In recent years, conservatives from Newt Gingrich to West Virginia lawmaker Ray Canterbury have endorsed putting poor kids to work in their schools so they will learn work habits and earn their free and reduced-price meals there.80 As Canterbury put it, “I think it would be a good idea if perhaps we had the kids work for their lunches: trash to be taken out, hallways to be swept, lawns to be mowed, make them earn it.”81 Naturally, because we wouldn’t want children to start thinking they were entitled to eat. Of course, in many localities, poor children receiving school lunches are already stigmatized by being forced to go through separate lines where they receive prepackaged meals, unlike their non-poor peers who get to choose their own items. Fully a third of school districts operate these separate-and-unequal systems for school lunch recipients, creating such shame that some kids skip lunch altogether rather than face the stigma of going through the separate line. They would rather go hungry.82 When one principal in Colorado objected to her school’s policy of stigmatizing free lunch recipients by stamping their hands and giving them different food than the other students, she was terminated.83

So too, economically strapped persons with disabilities are fair game for the hateful mocking of conservatives, as with an infamous confrontation in Columbus, Ohio, in 2010 between Tea Party activists opposed to health care reform and a disabled counter-protester with Parkinson’s disease. Though the man with Parkinson’s was simply sitting on the ground making his support for health care reform known, he apparently wasn’t sufficiently ashamed of his condition for the right-wingers assembled. The Tea Partiers screamed at him, with one insisting that he was on the “wrong side of town for handouts,” and that “you have to work for everything you get.” Meanwhile, another Tea Party activist mockingly threw dollar bills at the man, while yet another proclaimed that he was clearly a communist.84 More broadly, it is increasingly common for conservatives to attack disability benefits and those who receive them. Even though research suggests that fewer than one percent of disability payments from the Social Security program are received fraudulently, Kentucky Senator and possible presidential candidate Rand Paul suggested recently that most persons receiving such benefits were fakers. According to Paul:

Over half the people on disability are either anxious or their back hurts. Join the club. Who doesn’t get up a little anxious for work every day and their back hurts? Everyone over forty has a back pain.85

In the same cruel vein, Tom Sullivan, who is both a FOX News radio host and FOX Business commentator on television, recently told a caller who said she has bipolar disorder that her disease was “something made up,” as the “latest fad,” and that she had likely been “talked into feeling that way” by someone else.86 Though he also argued that perhaps there was a financial incentive for over-diagnosing certain illnesses so as to make more money for pharmaceutical companies (a quite possible reality, and one that might not sit well with his business-friendly employer), the tone he struck with the caller was one that well encapsulated the general and growing hostility to those who are struggling with illness and disability.

Whether they are targeting able-bodied adults, kids or those with disabilities, one thing is certain: conservatives long for the days when public assistance carried more stigma. FOX commentator Charles Payne wishes recipients were more embarrassed about needing help. As he puts it:

I think the real narrative here, though, is that people aren’t embarrassed by it. People aren’t ashamed by it. In other words, there was a time when people were embarrassed to be on food stamps. There was a time when people were embarrassed to be on unemployment for six months, let alone demanding to be on it for more than two years.87

Payne has long been among the most consistently cruel of conservative commentators, trotting out his own story of having grown up poor but having gone on to “make it” as evidence of how poor people have no one but themselves to blame. That Payne speaks in one breath of having grown up on welfare as a child, and then assures us in the next breath that people on assistance used to feel shame, raises obvious questions about the contempt he must feel for his family and himself, and tells us much about the psychological torment that his conservatism is intended to exorcise. According to Payne, poverty in America is “a little (too) comfortable,”88 and if there were more stigma associated with programs like food stamps, people would be less willing to stay on the program for so long.89 Of course, this is the kind of thinking one might expect from someone who says, “If you can’t pass a test to become a bus driver but you know you’re still going to eat, there’s a problem,”90 and that suffering from gout—a disease that is increasingly prominent among those with low income—is no big deal since gout was once considered a “rich man’s disease.”91

According to a conservative blogger at the prominent website The Daily Caller, not only should the poor be forced into the “humiliation” of shopping at substandard government-run stores rather than being able to shop where the rest of us do, they should also lose voting privileges if they receive any government assistance.92 This idea that the poor shouldn’t be allowed to vote—an issue most Americans probably thought had been settled generations ago—has been gaining traction on the right lately. Conservatives now openly raise the issue of property requirements for the franchise, suggesting, as has Rush Limbaugh, that if people can’t “even feed and clothe themselves,” perhaps they shouldn’t be allowed to elect the nation’s leaders.93 Encouraging electoral participation among the “nonproductive” segments of society is not only inherently “un-American,” as one prominent conservative put it recently, but amounts to “handing out burglary tools to criminals.”94 Ted Nugent has said that we should suspend the right to vote of “any American who is on welfare. Once they get off welfare and are self-sustaining, they get their right to vote restored.”95 And leading Tea Party activist Judson Phillips has exclaimed:

The Founding Fathers originally . . . put certain restrictions on who gets the right to vote. It wasn’t you were just a citizen and you got to vote. Some of the restrictions, you know, you obviously would not think about today. But one of those was you had to be a property owner. And that makes a lot of sense, because if you’re a property owner you actually have a vested stake in the community.96

In other words, to Phillips—who is perhaps the most prominent Tea Party activist in the country—not only the poor per se but also anyone who rents, most college students, the elderly in nursing homes, and anyone else who for whatever reason doesn’t own property should be blocked from the most basic privilege of citizenship—voting. According to Bryan Fischer of the American Family Association, only property owners should be allowed to vote, because, “if somebody owns property in a community, they’re invested in the community. If they’re renters, they’re going to be up and gone; they could leave the next day . . . they have no skin in the game. They don’t care about the same things that somebody does who is rooted in the community.”97 In the eyes of prominent conservatives, people who rent don’t care about their communities, the quality of their children’s schools or the infrastructure of their neighborhoods; they are just transient slackers who care little for the broader well-being of the community. If you aren’t a property owner, this is what the right thinks about you.

Though not advocating property requirements to vote, FOX morning co-host Elisabeth Hasselbeck recently suggested that perhaps one should have to pass a civics test before being allowed to cast a ballot. Putting aside the requirement’s history of racist misuse—it was a central feature of the Jim Crow South, regularly abused so as to prevent blacks from voting—there is an irony in Hasselbeck’s advocacy of it: namely, such tests have already been banned by Congress and are widely understood to be unconstitutional. To the extent Hasselbeck doesn’t seem to know that and yet believes one should have to pass a civics test to vote, perhaps she should be the first to forfeit her voter registration card,98 followed quickly by Ann Coulter99 and Newt Gingrich,100 both of whom have called for literacy or civics tests in order to vote, despite such instruments having been outlawed by the Voting Rights Act of 1965.

For others, like venture capitalist Tom Perkins (about whom we’ll have more to say later), it’s not that people should necessarily be prevented from voting, but simply that the rich should get more votes than everyone else. The multibillionaire recently suggested that votes should be apportioned based on the dollar amount of taxes a person pays: in other words, “if you pay a million dollars in taxes, you get a million votes.”101 That such brazen calls for an official aristocracy of the rich and the eradication of democracy can be made with no sense of shame says a lot about how normalized the culture of cruelty and inequality has become.

Beyond merely restricting the freedom of poor people to vote, some on the right go quite a bit further, advocating that the poor should be forcibly sterilized by the state. For instance, former Arizona state senator Russell Pearce was recently forced to resign as vice-chair of the Arizona Republican Party after saying that women on Medicaid should have to get “Norplant birth-control implants, or tubal ligations.”102 It’s an idea similar to one proposed by white supremacist David Duke in 1991, while he was serving in the Louisiana legislature.103 Some conservative ideas never die, it seems, no matter how old, vicious, cruel or unconstitutional. Though such thinking may, as with Pearce, serve to embarrass the party of conservatives, it still seems worth mentioning how readily those on the right jump to such blatantly authoritarian and cruel policy proposals, only backtracking when their open hatred becomes a political liability for their more subtle peers.

Trivializing Hardship: Conservatives as Poverty Deniers

In 1981, Texas Senator Phil Gramm lamented: “We’re the only nation in the world where all our poor people are fat.”104 It was, to Gramm, clear evidence of how exaggerated the problem of economic hardship in America was, and how horrible the nation’s welfare state had become. Apparently, poor people aren’t really suffering or deserving of much sympathy until their rib cages are showing and their eye sockets have all but swallowed their eyes. If some poor people are fat, it’s not because so many of the cheapest and most readily available foods in low-income communities are high in empty calories and non-nutritional ingredients—or because the American diet in general is less healthy than in other countries—rather, it must be because poor people have it too good and are able to do a lot of fancy eating at public expense.

Lack of compassion for people in need, which makes it so easy to engage in the viciousness examined in the last section, or to call for the repeal of poor folks’ basic rights and freedoms, has long been fed by a belief among many that low-income families and underemployed people really aren’t suffering that badly. Which brings us to the second device by which the right seeks to demonize the poor and struggling: denying that they’re really struggling at all. Not only do they have a nifty disease like gout, as Charles Payne reminds us, which makes them similar to eighteenth-century royalty, but more importantly they are awash with other “stuff” that poor people shouldn’t have, which proves that they aren’t really doing that badly. This poverty denialism rests on three specific claims: first, that America’s poor are fabulously wealthy by global standards, and thus should essentially stop complaining; second, that the poor buy expensive food with their SNAP benefits and have all manner of consumer goods in their homes, which means they aren’t poor in any sense that should cause concern; and third, that large numbers of welfare recipients commit fraud in order to get benefits, and then misuse the benefits they receive. In short, these are not the deserving poor—their pain is not real.

As for the idea that the poor in America are not really poor, one can almost understand why this notion might seem persuasive even to those who are not particularly callous or cruel. Someone who has worked in the Peace Corps for instance, or the military, or has merely traveled widely and witnessed the kind of abject deprivation that is common in much of the world, where billions of people live on less than a dollar a day, might find this part of poverty denialism compelling. Most of us have seen at least one, if not several, late-night infomercials seeking charitable contributions to bring running water and vaccinations to the globe’s poorest inhabitants. By comparison to the poverty highlighted by such efforts, one might not find the moral claims of America’s poor to be particularly pressing.

That said, to diminish the real hardship faced by the poor in the United States solely because it is usually not as crushing as suffering elsewhere—and I say usually, because in some poor counties of America, conditions and life expectancy actually do rival those in some of the poorest nations on earth105—is neither a logical nor an ethical response to that hardship. Even though in absolute terms it is true that most persons in the United States do not suffer poverty in the same way and to the same extremes as, say, Sri Lanka’s poor, such a reassurance is likely not much comfort for America’s struggling masses. After all, Americans are not Sri Lankans, and they are trying to stay afloat and compete in a society against other Americans. This is why the international standard for evaluating poverty is not simply a set dollar-equivalent amount, since poverty in a poor country is by definition different from poverty in a rich country, but is determined by looking at what percentage of a country’s citizens live at half or less of the nation’s median income. To be at half or less of the median in any society, no matter what that median might be, is to be at a significant disadvantage relative to others in the job market and the housing market, in terms of the quality of education your children will likely receive, and in terms of the health care you can access. If the median income is well above your own, you will be effectively priced out of the market for any number of opportunities; as such, even if you are objectively richer than someone in Bangladesh or Ghana, the life you will be able to carve out for yourself in the place you actually live will be far removed from the mainstream there.

This is why the reassurances of blogger Catherine Rampell at the New York Times, to the effect that “the bottom 5 percent of the American income distribution is still richer than 68 percent of the world’s inhabitants,” or that “America’s poorest are, as a group, about as rich as India’s richest,” are vapid to a point that would be laughable were the subject matter not so serious.106 Contrary to Rampell’s breathless excitement at the chart demonstrating these fun facts—which she found in a book by World Bank economist Branko Milanovic and to which she refers as an “awesome chart” that “kinda blows your mind”—there is nothing awesome, mind-blowing, or even remotely relevant about the statistics in question. Nor are the protestations of Sean Hannity—who assures us that “poor in America is not poor like around the rest of the world”—helpful in understanding the real face of need in the United States.107

If anything, to be poor in a rich country, where one’s worth is sadly too often presumed to be linked to one’s possessions (unlike in a poor country, where people still know better), is to experience a particularly debilitating kind of relative deprivation. To be poor in a place where success is synonymous with being rich and famous increasingly means finding oneself voiceless, ignorable, criminalized and perceived as disposable. To live in a place where wealth is not only visible but flaunted, where the rich make no pretense to normalcy, and where one can regularly hear oneself berated on the airwaves as a loser, as vermin, as a parasite, precisely because one is poor or working a minimum-wage job, is to be the victim of a cruelty that the citizens of poor nations are far less likely to experience. In a nation where poverty is distressingly normal for the vast majority, the poor are still likely to be viewed as belonging equally to a common humanity, unlike in a wealthy and powerful nation like the United States, where the humanity of poor people, and certainly their right to full citizenship, are increasingly under attack.

Ultimately, the politics of comparative suffering is always a losing and amoral proposition. It’s precisely such politics that would justify telling a Japanese American who was herded into an American internment camp during World War II that they have nothing to complain about and should actually be grateful: after all, they could have been in Tokyo when we firebombed it, or in Hiroshima or Nagasaki when we dropped the atomic bombs. It’s the kind of position that would rationalize saying to someone who survived the Holocaust of European Jewry that they had no legitimate complaint against the Nazis, since had they lived in the Soviet Union they may well have perished in Stalin’s gulag (or, for that matter, the reverse of this argument). To forward this kind of position is like telling an African American during Jim Crow segregation to get over it, since King Leopold killed roughly ten million Africans in the Congo under Belgian colonialism. In other words, this kind of comparison between the suffering one is currently experiencing and the much greater suffering one could theoretically experience elsewhere lacks all moral and practical relevance.

Not to mention, there is something ironic about this kind of argument coming from the rich, who regularly push for greater tax breaks so they can have more money with which to “do great things,” or just because they think they’ve earned it. After all, to whatever extent the poor in America are rich by global standards, surely the wealthy in America are far more so, and should perhaps rightly be seen as obsessive and gluttonous hoarders. They don’t seem satisfied with the kind of wealth that would allow them to literally buy entire countries outright, and which certainly dwarfs the wealth of the so-called rich in less wealthy nations, and yet they have the temerity to lecture poor people about gratitude.

Consider a recent commercial paid for by the Charles Koch Foundation that seeks to remind Americans how good they have it by noting that even if one earns only $34,000 a year, that’s enough to vault one into the top one percent of the world’s population in terms of income.108 Or consider the remarks of Bud Konheim, CEO and co-founder of fashion label Nicole Miller, who recently said those who are poor or working class in America should stop complaining, since their incomes would make them wealthy in India or China.109 To whatever extent one finds this kind of thinking even remotely persuasive, shouldn’t the logic of such an argument run both ways? Shouldn’t the rich in the United States stop complaining about their taxes? The regulations they have to put up with? The minimum wage they have to pay employees? Talk about ingratitude! If they lived in any other industrialized nation, the taxes they paid would be higher, regulations would be just as strict or more so, and their workers would have far greater protections and safety nets than in the United States. So when it comes to shutting one’s mouth and being grateful for what one has, perhaps the rich should lead by example.

In addition to comparing America’s poor to those of the world and finding the former unworthy of concern by comparison, today’s poverty deniers insist that those who claim to be struggling in the U.S. really aren’t, and this we know because of all the extravagances they enjoy. To Rush Limbaugh, those who are out of work spend their unemployment benefits on lottery tickets, “Smirnoff Ice and chips,” thereby demonstrating their personal irresponsibility.110 FOX News commentator Andrea Tantaros says she wishes she could live on food stamps since it would make for a fantastic “dieting technique” that would make her “look great.”111 In short, there is no reason to be sympathetic to those who are out of work or have been forced to rely on SNAP benefits, since they only squander the assistance they receive anyway, and don’t fully appreciate the weight-loss gift they’ve been given by virtue of their hardship.

One of the more prominent tropes of modern Scroogism is chastising the poor for possessing any material items remotely connected to middle-class normalcy, as if somehow the possession of modern conveniences like refrigerators, microwaves or televisions demonstrates that the poor in America aren’t really suffering.

In a segment from Bill O’Reilly’s FOX program in July 2011, he and fellow talking head Lou Dobbs joked about the “stuff” one can find in the homes of the poor. Citing a report by the Heritage Foundation, which has long forwarded this kind of argument so as to undermine support for safety-net programs, O’Reilly noted incredulously:

Eighty-two percent have a microwave. This is 82 percent of American poor families. Seventy-eight percent have air conditioning. More than one television, 65 percent. Cable or satellite TV, 64 percent . . . Cell phones, 55 percent. Personal computer, 39 percent. So how can you be so poor and have all this stuff?112

Aside from the bizarre implication that air conditioning is a luxury the poor should not enjoy, there are a few obvious holes in O’Reilly’s argument here. First, it should be apparent to even the most casual thinker that most of the poor live in apartments pre-rigged with A/C whether or not they can afford to actually run it. Second, cable is necessary in most parts of the country in order for a television to get reception at all, so the mere fact that one has cable says very little about the quality of one’s television, let alone the extravagance of one’s entertainment habits. And finally, cell phones are no more extravagant than landlines, having more or less replaced the older systems for millions of Americans, including those who are by no means poor. To not have a phone would render a person unable to remain connected to possible jobs, to family or to emergency services. Surely we do not expect poor families to be completely cut off from the world in order to deserve concern. Or perhaps for the Bill O’Reillys of the world, that is exactly what is required.

For Robert Rector of the Heritage Foundation, the poor aren’t really suffering because per-person expenditures for the poorest fifth of Americans today are equal to the expenditures of the typical middle-class person in the early 1970s.113 But, as with most everything said about the poor by the folks at Heritage, this too is fundamentally disingenuous. First, this seemingly impressive fact does not mean that poor folks are living in a style comparable to those middle-class persons from several decades ago. Rector’s calculation is based on per-person spending, adjusted for the average consumer inflation rate. But certain items have inflated far faster than the general rate of inflation—namely, housing, education and health care—such that spending more for these things today is not tantamount to the receipt of greater luxuries. Today’s poor are spending more for these things because they are so much more expensive than those same things were in 1973, and a generic inflation-rate adjustment like the one made by Heritage will not account for that. It is certainly not because they are doing that much better than the middle class of the early 1970s.

Second, even if Rector were right and poor and lower-income persons are now able to live like the middle class did thirty to forty years ago, what is the practical meaning of this information? It is also probably the case that the poor in the 1960s had “stuff” comparable to what middle-class Americans had in 1939, but so what? The poor today also doubtless have certain luxuries unknown even to the wealthiest Americans in the 1790s, what with indoor plumbing and all, but one wonders what the point of such a comparison is. Does anyone really believe that today’s poor live better than Thomas Jefferson did, just because the latter had to crap in a chamber pot or an outhouse? Apparently, Rector and the folks at Heritage think so, as they have also made the argument that the poor in America today “live a better life than all but the richest persons a hundred years ago.”114 Though it should hardly need to be said, today’s poor do not live in the early 1970s, let alone the nineteenth century; they live in the present, where the ability to feel part of the mainstream (and to be part of the mainstream) requires one to be able to do things and have things that previous generations didn’t do or have. People didn’t “need” the Internet in the 1970s, for instance, because the Internet didn’t exist, but not having access to the web today can be seen as a pretty serious disadvantage. They didn’t need cars in 1837 either, but try finding steady employment today without one.

What Rector and others ignore is that the ability of the poor to purchase electronics—the prices of which have actually come down in recent years—says little about their ability to afford more important amenities. Televisions, microwaves or any other consumer products in the homes of the poor will tend to be pretty cheap. What you won’t as readily find is what really matters: namely, college degrees and high-quality preventive health care, the costs of which have far outpaced the rate of inflation. It is these things that an increasing number of Americans cannot afford, not because they have blown all their money on malt liquor and menthols but because they are not paid enough to purchase them, no matter the relatively cheap consumer goods with which they may entertain themselves, or which may cool the air in their apartments from time to time. The issue is not whether Americans are as poor today as the poor in Biafra, or as destitute as the poor were at the time of the Nixon administration or the Gettysburg Address or the landing of the Mayflower. The issue is whether the poor are situated in such a way as to compete with others in this country at this time, in such a way that they might move up the ladder and out of relative deprivation. A dishwasher will neither suffice for those purposes nor by virtue of its expense get in the way of them, but the lack of health care and education most certainly will.

To deny those who are struggling all manner of modern conveniences as the right appears prepared to do—even those that are increasingly necessary to stay connected to the mainstream and develop the cultural capital needed to make oneself employable—is to suggest that the poor should slug it out like the poor of old; it is to insist that they must suffer just as those in prior generations did before any sympathy can attach. Which is no doubt why conservative blogger Jim Hoft was so quick to criticize Lorain County, Ohio, for distributing air conditioners to needy elderly and disabled folks in the summer of 2012, during a record heat wave.115 To Hoft and those of his mindset, the poor and aged (and those with respiratory disease who were particularly targeted by the effort) should suffer just as they would have in the days before air conditioning—because poor people should be miserable in the eyes of conservative America, perhaps even prostrate, covered by dirt and surrounded by flies in order to be seen as truly deserving society’s assistance.

Notably, the common outrage over the possessions of the poor ignores the obvious fact that for most, their consumer goods likely represent items they were able to afford in better economic times, before a layoff or medical emergency. If a family finds itself transitionally poor and having to turn temporarily to SNAP after the layoff of a parent, it’s not as if the computer, the car or the Xbox they had before the layoff should be expected to disappear. Unless one wishes to suggest that upon a layoff one should pawn everything in one’s possession before turning to the very government benefits that one’s taxes previously paid for during periods of employment, expressing shock at the minor possessions of the poorest among us is absurd.116

Darlena Cunha, a former television producer turned stay-at-home mom, recently penned a column for the Washington Post in which she discussed her own experience as someone who ended up on SNAP after her husband lost his job and the economy imploded, leaving them owing more on their mortgage than their home was worth. As Cunha noted, when she drove her Mercedes—a car she and her husband had owned since long before hard times struck—to pick up her Electronic Benefits Transfer card, she was acutely aware of how others were viewing her and how the contrast made her feel about herself. She also mentioned how friends would tell her that she and her husband couldn’t be that bad off since they still had a luxury vehicle in their possession. And yet, as Cunha explained:

[The Mercedes] wasn’t a toy—it was paid off. My husband bought that car in full long before we met. Were we supposed to trade it in for a crappier car we’d have to make payments on? Only to have that less reliable car break down on us?117

Cunha’s point is all the more pertinent given that supposed “luxury” items like vehicles help facilitate opportunities for unemployed and poor persons seeking better jobs.118 To criticize the poor for having a car is to suggest that they should be without a vehicle so long as they receive government aid. But how can those in need better their situation if they have no reliable vehicle to get them from home to a job, especially if public transportation in their area is inconsistent or if the best job opportunities are far away?

Apparently it’s a concern considered trivial by some, as suggested by a recent story in the New York Times concerning the increasing use of automobile GPS “de-activation” devices that debt collection agencies and car loan lenders can utilize so as to disable vehicles driven by people who fall behind on their car payments. According to the article, people with less than excellent credit are being lured into car loans with predatory interest rates and massive late-payment penalties. In other words, people with little money are being asked to pay more of what they don’t have, and if they’re late with a payment by just a few days, their cars can be immobilized remotely, even while on the interstate, in traffic, as the driver tries to get to work or to pick up the children from school.119 Aside from how dangerous such a practice can be, how can immobilizing a person’s car help them pay for the vehicle? If they can’t get to work, they can’t earn money with which to make the payments. But none of that matters in a culture of cruelty—all that matters to such a culture and its enforcers is that an increasingly large percentage of the American citizenry can be financially squeezed, neglected and criminalized.

For many on the political right it isn’t just luxuries like televisions and cars that they begrudge the poor. For some, like the editorial board of the New York Post, even providing a shelter for homeless families that is infested with rats, mold and roaches, and where “feces and vomit plug communal toilets”—as the city is apparently doing, according to a report in the New York Times—is “too generous” and relieves parents of the obligation to provide a decent home for their children.120 For Rush Limbaugh, it’s not merely a decrepit shelter or consumer products we should deny struggling Americans: even the idea that the poor should have teeth is pushing the envelope of acceptability for Rush. According to Limbaugh, if one is too poor to afford dentures, that is one’s own fault, and surely it is not the responsibility of publicly funded health care to provide such a luxury. In his estimation, one who is too poor to afford fake teeth should either recycle those belonging to one’s dead uncle or content oneself with the perpetual consumption of applesauce.121

Speaking of applesauce, conservatives have long been preoccupied with what poor people eat, as with the by now infamous stories that most of us have heard about persons buying expensive cuts of meat with food stamps or EBT cards. Tales of food stamp profligacy have been legion at least since Ronald Reagan’s 1976 presidential campaign, when he told the tale about “strapping young bucks” buying T-bone steaks with food stamps. Of course, the allure of the rather pedestrian T-bone has dimmed considerably over time, such that stories of culinary overindulgence on the part of the poor now require a bit of an upgrade. Today, it’s no longer mid-range quality steak for SNAP fraudsters, but rather king crab legs, according to Texas Congressman Louie Gohmert. In a recent speech on the House floor, Gohmert relayed a story supposedly told to him by a constituent who was angered that while he could only afford ground meat for himself and his family, he watched the person in front of him pay for crab legs with food stamps. That the constituent said the individual paid with stamps is itself an indication that the story was likely a lie (since there are no more actual food stamps in use), but that didn’t stop Gohmert from repeating it and insisting that such a story proved why the nation should cut back on SNAP; this, despite the fact that the average monthly allotment for SNAP recipients as of 2013 was only $133 per person, and only $122 per month in Gohmert’s own state.122

Clearly under the impression that the poor eat too well on the government dime, the aforementioned Arizona Republican activist Russell Pearce said recently that if it were up to him, families receiving assistance couldn’t buy “Ding Dongs and Ho Hos,” or “steak or frozen pizza,” but would be limited to “15-pound bags of rice and beans, blocks of cheese and powdered milk.”123 So not only should the poor not have seafood or meat, let alone that extravagant luxury known as frozen pizza, they shouldn’t be afforded the benefit of vegetables or even liquid milk, as these are properly understood to be the special purview of the rest of us.

But contrary to claims that the poor eat like royalty on the public dime, the evidence shows that most SNAP households are extremely thrifty with their food shopping. Far from blowing their benefits on crab legs or steak of any kind, they tend to shop inexpensively and responsibly to make the benefits last. According to a recent analysis of thousands of needy households, beneficiaries are bargain shoppers when they first receive SNAP and become even thriftier over time. Upon entering the program, nearly one in four households report purchasing food that is out of date or nearly expired, simply because those items are discounted, and this rate climbs to thirty percent for those same families after they have been on the program for six months. Likewise, eighty-five percent of new SNAP households buy food items on sale, and after six months of SNAP benefits about half of recipient households have learned to buy in bulk (so as to get discounts) and to clip coupons—practices that hardly afford one much lobster or very many premium cuts of steak.124

Importantly, pilot programs to encourage healthy eating among SNAP recipients show great promise—far more than punitive efforts to restrict what they can and cannot buy. For instance, the Department of Agriculture recently launched a project in Massachusetts to provide a credit of thirty cents for every dollar of SNAP spent on fresh fruits and vegetables. The result? A twenty-five percent jump in the consumption of these healthy items among SNAP recipients.125 Likewise, there are dozens of programs across the country that provide SNAP recipients with $2 of produce for every dollar of SNAP benefits spent at farmers’ markets, effectively matching such purchases and encouraging healthy eating.126

While the stories of SNAP extravagance say little about the reality of living and eating while poor, they speak volumes about the way in which more financially secure Americans think about those who are struggling. That anyone would believe SNAP recipients getting such paltry amounts in aid—again, $133 per person, per month, on average in 2013, or less than $1.50 per person per meal—would blow their benefits on crab legs or other expensive items, thereby reducing the amount of aid left for the rest of the month, says a lot about how families in need are perceived in this country. Those who repeat stories like this seem to believe that if and when the poor actually do splurge on pricey food items, they’re somehow putting one over on the rest of us. But it’s not as if their EBT cards are bottomless accounts that refill upon depletion like a cup of coffee at a Waffle House. If one blows all of one’s money on beluga caviar and cedar-planked salmon, that’s just less money for the rest of the month, which is why if a few among the poor spend in such a manner, they likely learn from their budget shortfalls and are unlikely to repeat the practice.

Although the facts suggest that impoverished families are far thriftier with their money, including government benefits, than commonly believed, it’s still worth noting how fundamentally cruel it is to police the shopping habits of the poor in the first place. To deny those who are struggling an occasional soda or candy bar or even cigarettes or beer is incredibly callous. While there are excellent health reasons for all of us, not only the poor, to avoid all four (and certainly their overconsumption), is it really necessary to resent the consumer habits of those who are economically hurting? Must they be not only poor but without any momentary relief? Without any of the escapes and diversions the rest of us take for granted? No snack food, no alcohol, no cable TV, and no movies with the kids? No anything to take their minds off the daily grind of trying to make ends meet? To insist that folks struggling with poverty be kept so thoroughly miserable that they are forced to spend every waking moment trying to find a better job seems sadistically cruel; it treats their situation as tantamount to a crime for which they are to be punished. It’s the exact same thinking that animates resentment over prisoners receiving education while behind bars, or having any freedoms whatsoever—the idea that unless inmates are made to be utterly traumatized by their incarceration they won’t fear coming back to prison. So too, under this logic, unless the poor are traumatized completely by their poverty, they won’t work hard and get their lives together. Those who adhere to this thinking are making virtually no distinction between the blame they place on perpetrators of crime and the blame they place on victims of poverty.

This is far from mere hyperbole: those who struggle to survive readily articulate the way in which their reliance on public assistance has all but criminalized them, not only in the eyes of the public, but also in the eyes of those who oversee safety-net programs. In a recent column for the Washington Post, Kentucky-based writer Jeanine Grant Lister discussed her experiences as someone who once had to rely on food stamps and the national nutrition program for women, infants and children (WIC):

In America today, being poor is tantamount to a criminal offense, one that costs you a number of rights and untold dignities, including, apparently, the ability to determine what foods you can put on the dinner table. . . . Utilizing American safety-net programs (which, by the way, I paid into for years before receiving any “entitlements”) requires that I relinquish my privacy multiple times. I have to reveal how much I pay to live where I live, the amount of my utility and medical bills, what car I own, even whether I have a plot to be buried in when I die. I have to update the local office any time my income changes, or if a family member moves in or out, and even when my college-age children come home for the summer. When I used WIC to supplement the diets of myself and my two children, we were required to report to the Health Department quarterly for weight and wellness checks. My babies’ blood was taken to look for lead exposure. When my daughter’s test came back with sky-high lead levels, the Health Department came into my residence, crawled over the whole place, and took samples of windowsills, walls, soil, flooring and water, and found . . . nothing. Upon recheck, my daughter’s lead levels were perfectly normal and deemed a false positive. What if they had discovered metabolites consistent with drug exposure? Poppyseeds metabolize like opiates. Had I been living in Section 8 housing, that would have resulted in a search of my home for drugs, the loss of my home and quite probably the loss of my children.127

Sadly, stigmatizing the impoverished in this way—viewing them, speaking of them and treating them as presumed ne’er-do-wells—is just another day at the office for most right-wing activists and media mouthpieces. Criticisms of safety-net programs have long been rooted in grandiose notions of widespread abuse and waste by recipients. Kansas lawmakers recently passed legislation to prohibit TANF recipients from withdrawing cash on their debit cards and using the money on cruise ships or for psychics, tattoos or lingerie, despite presenting no evidence that any of the state’s 17,000 recipients were splurging in such a fashion. Part of a larger welfare reform package, the Kansas law will also limit cash withdrawals using a TANF ATM card to a mere $25 per day. While supporters claim this will prevent recipients from using benefits on nonessential items, such a limit will also make it difficult, if not impossible, for recipients without checking accounts to pay rent or utilities.128

Meanwhile, FOX has hyped a report from a group called Colorado Watchdog, purporting to show that welfare recipients in that state are withdrawing their subsidies at liquor stores, exotic vacation spots, casinos and at least one strip club.129 According to the report, Coloradans withdrew $3.8 million in welfare benefits from ATMs in states other than Colorado over a two-year period. Although nearly $1 million of this amount was in bordering states, which could simply reflect that residents live near the border and work in a neighboring state or shop there, the rest was withdrawn farther away, including $70,000 withdrawn in Las Vegas, about $6,500 in Hawaii, and $560 in the Virgin Islands.130 Such anomalies make for plenty of right-wing outrage, but they clearly are not representative of a substantial fraud problem. The entire amount of TANF money withdrawn in states other than Colorado or its border states comes to only 1.7 percent of Colorado’s TANF program dollars. The amount withdrawn in Vegas, Hawaii and St. Thomas combined amounts to less than five-hundredths of one percent (0.045 percent) of all state benefits. To hype this handful of cases is less about truly rooting out a pattern of program abuse than about enraging a public already encouraged to think the worst of the poor and those on assistance.

As for withdrawals made in liquor stores, these too amount to less than one percent of TANF benefits withdrawn and involve less than one percent of households receiving benefits. Casino withdrawals, which amount to about $75,000 per year, could indicate that people are gambling with their benefits, but could also represent low-wage casino employees who make withdrawals at their place of employment because there isn’t a closer or more convenient ATM around. When it comes to strip club withdrawals, there appears to have been a whopping $1,500 withdrawn at one Denver-area club over the course of two years: disturbing, but hardly evidence of a common practice.131 In all, making a public spectacle of these rare potential abuses of taxpayer monies ends up stigmatizing the ninety-nine percent or more of all recipients who play by the rules and don’t misuse benefits. While it might be worthwhile to figure out ways to sanction this handful who take unfair advantage, is it really so important to catch and punish these few that the broad base of TANF families should be stigmatized? In a culture of cruelty, apparently the answer is yes: stopping a handful of abusers is so important on principle that even if entire programs have to be stigmatized, chopped or ended altogether, the cost is worth it.

Likewise, commentators on the right have accused Colorado TANF recipients of using benefits to buy marijuana at the state’s newly legalized weed stores, yet there have been only sixty-four cases of persons in the state using benefit cards to withdraw cash from ATMs located inside marijuana shops. This represents one and a half tenths of one percent (0.15 percent) of all TANF-related withdrawals in Colorado during the month in which the usage was discovered; and the combined value of the withdrawals came to only $5,475, which is 4.4 thousandths of one percent (0.0044 percent) of the state’s annual TANF block grant. Not to mention, the stores in which these ATMs were located dispense marijuana for patients who use the drug for medicinal purposes, so to ban recipients from using TANF benefits this way would be to deny them a valid and legal form of medically authorized relief.132

For many on the right, however, evidence is a luxury hardly worth indulging. It is virtually axiomatic for some that Americans are often poor because of drug use. Bill O’Reilly, in one particularly disingenuous segment on his FOX program, suggested that because roughly thirteen percent of the population was poor and roughly thirteen percent of Americans currently use drugs, “maybe poverty is not exclusively an economic problem.”133 The implication was that the thirteen percent in poverty and the thirteen percent who use drugs were the same people. But of course they are not. Despite being nearly three times as likely as whites to be poor, African Americans use drugs at rates that are essentially identical to the rates at which whites use them. Latinos, despite being 2.5 times as likely as whites to be poor, are less likely to use drugs than whites.134 And when it comes to drug abuse and dependence as opposed to mere recreational use, whites are more likely to abuse narcotics than people of color, despite being one-third as likely to be poor.135 If there were a correlation between poverty and drug use, suffice it to say that these data would look very different.

But lack of evidentiary support for their presumptions hasn’t stopped right-wing lawmakers from proposing drug testing for persons on public assistance. In the last several years, state legislatures have increasingly pushed through bills to require anyone receiving TANF or unemployment insurance to provide urine samples in order to prove they aren’t drug users.136 Although the bills have, in some cases, been found unconstitutional—and although the evidence suggests the cost of administering the drug testing exceeds the money saved from knocking drug users off the rolls—lawmakers persist, consumed with contempt for the poor and unemployed and committed to viewing them in the worst possible light.137 And all this, despite clear evidence that persons receiving welfare benefits do not use drugs, let alone abuse them, at rates any higher than the general public—and certainly not at rates any higher than others who receive money from the taxpayers, including teachers, those working for private companies but on government contracts, or, for instance, those Wall Street executives whose entire industry was bailed out by the government.138 Yet, none of these other recipients of public largesse are being drug tested, nor will they be.

Likewise, that so many Americans appear prepared to lecture the poor as to what they can eat (or even whether they should be able to purchase Valentine’s candy for a loved one using SNAP benefits) is telling as to the selective way in which government program dollars are perceived. As Bryce Covert notes in The Nation, “When we give people assistance through the home-mortgage interest deduction, we don’t feel entitled to tell them what house to buy or what neighborhood to live in; when we subsidize a college education through student loans, we don’t tell students what school to go to or what to major in. When we tax capital gains income at a lower rate than income made from labor, we certainly don’t tell those stock pickers what to do with the extra cash.”139 But of course, as Covert notes, while government programs like these benefit mostly middle-class and affluent taxpayers—and are “submerged” in the tax code or programs that are less visible to the public—food stamp EBTs are observed in the process of being used and help the poor and near-poor, who are presumed to be in that condition in the first place because of their irresponsibility. So even as those who are quite a bit better off receive the biggest benefits from government programs (funded by the government via direct payments or deferred taxes which result in the rest of us having to pick up the slack), it is the relatively small amount received by the poor that sets us on edge and makes us feel entitled to moralize.

Sadly, some simply cannot relinquish their commitment to the notion that the poor and those on public assistance are irresponsible and dishonest, scamming the system and taking advantage of hard-working taxpayers. When the Department of Agriculture recently released a report noting that 2012 had seen the lowest rate of SNAP payment errors in history, conservative commentators went ballistic. FOX Business anchor Stuart Varney excoriated the Department of Agriculture report, asking, “Since when has (a 3.42 percent error rate) been good?”140 In fact, such a rate is extremely good and is the lowest in the history of the food stamp/SNAP program.141 Even that error rate includes underpayments (i.e., payments that were lower than they should have been, or payments that were not made at all to persons who applied and should have received them but were unfairly rejected). When underpayments are subtracted from overpayments, the net amount of overpayment in SNAP falls to only around two percent of program dollars: one-eighth the amount of projected fraud in the area of tax collection.142

Importantly, the error rate in SNAP has declined rapidly despite the fact that program rolls increased during the recession. In the past decade, the SNAP error rate has fallen fifty-six percent even as participation grew by 134 percent due to the economic downturn. Likewise, although there is some degree of food stamp trafficking (in which recipients trade SNAP for cash, presumably so as to purchase items normally not covered by the program), it is estimated that for every dollar of SNAP benefits, only one penny is diverted through trafficking: half the amount that was being siphoned off a decade ago, and sixty percent below the amount being lost in the early 1990s to this kind of fraud.143

As for fraud by TANF recipients, there is no doubt that technical fraud occurs, meaning that recipients in some cases work for cash under the table by taking care of a neighbor’s kids or cleaning their house—income that would result in a suspension or reduction of benefits were it reported. But considering that the average monthly benefit from TANF is only $162 per person,144 and $387 per family—less than half the poverty line in every state and less than one-third the poverty line in most of them—is such under-the-table activity really surprising?145 If benefits are set so low that even when SNAP is added to them the typical family on both kinds of assistance still remains below the poverty line, how is one supposed to survive without such side work? If anything, that kind of fraud speaks to the work ethic of the poor and their desire to earn income and take responsibility for themselves and their children. It suggests that the stereotype of lazy welfare recipients sitting around doing nothing is a complete contrivance.

In her 2011 book, Cheating Welfare: Public Assistance and the Criminalization of Poverty, University of Connecticut law professor Kaaryn Gustafson notes that although technical fraud is common, other types—like someone filling out multiple claim forms so as to procure excess benefits from the system—are exceedingly rare. In California, she notes, officials only identify about three such cases per month, only one of which has sufficient evidence of intentional fraud as to justify further investigation. According to Gustafson, efforts to detect criminal fraud through mechanisms like fingerprinting of recipients, intended to spot persons with criminal records who are legally barred from most program benefits, have proven superfluous. Not only do such efforts not result in much weeding out of criminals, they also are anything but cost-effective. In Texas, fingerprinting efforts ended up costing taxpayers $1.7 million in the first seven months of operation, and nearly $16 million by the end of 2000. But in four years, there were only nine criminal fraud charges filed by state prosecutors.146 Indeed, serious fraud is so rare and expensive to detect and prosecute that anti-fraud initiatives typically exceed whatever amount of money is being lost from fraud in the first place. As Gustafson explains:

When a welfare recipient is charged with fraud, she adds costs to the criminal justice system. In addition to the costs of investigation, the county has to pay for the time of both a prosecutor and a public defender. If the recipient goes through a welfare fraud diversion program, the county bears continuing administrative costs for collecting payments and monitoring her progress in the diversion program. If the welfare recipient is convicted and sent to jail or prison, then government costs soar. It is much more expensive to house a single inmate for a year than it is to provide for a typical family on welfare. If the head of a household does end up serving time in jail or prison, her children may be placed in the foster care system, where more money will be spent on the children than under the welfare system. All of these costs are ignored in calculations of the costs of investigating and prosecuting welfare fraud. In sum, the government cost savings that policymakers associate with punitive and criminalizing welfare policies may actually only be cost shifting—either between federal, state, and local coffers or from the welfare system to the criminal justice and foster care systems.147

For some, however, presuming the worst about the needy and unemployed is so reflexive an act that they will quite purposely deceive the public. Recently, right-wing talking heads from Rush Limbaugh to FOX News personalities Bill Hemmer, Lou Dobbs, Shannon Bream and Charles Payne all publicized what they considered “stunning new evidence” about the irresponsibility of the unemployed.148 Relying for their information on a popular right-wing website that naturally provided no links to its source, they trumpeted supposedly convincing data from the Labor Department to the effect that the unemployed spend more time shopping than looking for a job. In fact, the original article breathlessly informed its readers that the unemployed not only shop too much but also spend twice as much time during an average day in “socializing, relaxing or leisure” activities as they do searching for a job. They also seemed shocked and appalled at the amount of sleep that unemployed people report getting each day, as if this were indicative of how lazy the jobless are: “Nearly all of the unemployed—99.9 percent—reported sleeping on an average day. On average, they dedicated 9.24 hours to that activity.”149 Horrors.

But naturally, the article’s author and the right-wing media figures who repeated that author’s claims got it wrong. Despite Payne’s suggestion that the data proves how welfare programs “do make people lazy. They make people comfortable. They make you want to take a chill pill,”150 the actual statistics say nothing of the sort. The data source in question nowhere used the term “unemployed” to describe the individuals being examined, nor was it saying anything about the unemployed at all; rather, it referred to persons “not employed,” which is an entirely different thing, despite how similar it may sound. According to the report used to make this claim (the Labor Department’s American Time Use Survey), those who are not employed include persons “not in the labor force,”151 and that concept includes not only people who have simply quit looking for a job and might be on public assistance, but also those who are retired, disabled or full-time students, and those who are stay-at-home moms or dads with partners who earn enough to support them on their own. According to Census data, of all persons who are not working, two in five are retired, one in five are students, fifteen percent more are chronically ill or disabled, and another thirteen percent are caring for children or other family members (this would include many middle-class stay-at-home mothers). In other words, eighty-five out of a hundred people who are classified as not working or not employed fall into these categories.152 So the fact that only nineteen percent of those classified as “not employed” engage in job searches or interviews on an average day, while 22.5 percent of those “not employed” shop for items other than groceries on an average day, tells us nothing about the unemployed. Those doing all that shopping and luxuriating are mostly not the people the right would have us envision: rather, they are people who are not in the labor force because they haven’t the need to be, thanks to a partner’s earnings, or else they have already retired or are going to school.

The stay-at-home mom who spends her days shopping and getting her nails done and whose husband makes millions on Wall Street may be lazy and self-absorbed, and the seventy-five-year-old Florida snowbird who sits on the beach all day may have been a horrible human being during his work years, but they are surely not the individuals being chastised by the right with this data, even though they are the ones who are likely to be showing up in it. This is just one more prime example of how conservatives routinely distort data to further a narrative of cruelty toward America’s most vulnerable.

Welfare Dependence and the Culture of Poverty: America’s Zombie Lie

But no matter the evidence, there are many who simply fail to accept that their stereotypes of the poor are inaccurate. Not only do they continue to believe that the poor and those on public assistance are grifters, they insist that various safety-net programs are so generous as to have engendered intergenerational welfare dependence and a “culture of poverty,” characterized by irresponsibility on the part of recipients. According to this line of thinking, programs from cash aid to SNAP to unemployment insurance (even Social Security and Medicare for the elderly) have rendered the United States a nation of “takers.” It is this zombie lie—the kind that never seems to die no matter the counter-evidence—that one can hear articulated virtually any day on talk radio or on FOX. On his FOX Business show, former Judge Andrew Napolitano has claimed:

Entitlements like Social Security, Medicare and Medicaid . . . make Americans dependent upon big government. And dependency is turning this country from a nation of makers who come up with new ideas, who employ new people, who risk their wealth and create wealth, into a nation of takers who primarily consume other people’s wealth.153

While Napolitano is willing to throw all social programs including Social Security under the dependency bus—surely not endearing himself to the elderly who rely on it for their survival—the charge of dependency is more often thrown at programs aimed specifically at the poor, like TANF, SNAP benefits, or public housing. This is why when Congressman Paul Ryan recently introduced his new anti-poverty plan, he did so in terms that clearly suggested a belief that existing programs had contributed to cultural pathology and dependency. As Ryan put it:

We have got this tailspin of culture, in our inner cities in particular, of men not working . . . generations of men not even thinking about working or learning the value and the culture of work, and so there is a real culture problem here that has to be dealt with.154

Aside from the implicitly racist framing of the issue—references to “inner cities” immediately conjure images of persons of color and are known to do so—Ryan’s assumptions are based on falsehoods. His plan, which calls for welfare recipients to sign behavioral contracts with the government and then to be sanctioned for any failure to follow the terms of the contract, presumes that those on assistance are dysfunctional and little more than children in need of parental guidance and discipline.155 Though said in a slightly less bombastic way, it’s little different from the hateful ventilations of Ted Nugent, who has said antipoverty programs should be eliminated because poverty is the result of “poor decisions” that “we need to punish.”156 Because, of course, punishing poor people has always managed to eliminate poverty—an ancient bit of wisdom we’ve apparently forgotten.

The truth, of course, is quite the opposite: there is no logic or evidence to suggest that welfare programs have created a culture of poverty or permanent underclass. If there were a “culture of poverty,” or if poverty were mostly the result of “bad decisions,” we would expect most of the poor to remain poor for long periods. After all, few people trapped in a culture of pathology and dysfunction in January would likely have undergone a major cultural transformation by April, or even by Christmas; but most people who slip into poverty do not remain there long, suggesting that impoverishment is more about economic conditions and opportunity than about individual or cultural pathology. As Washington University professor Mark Rank explains:

The average time most people spend in poverty is relatively short . . . the typical pattern is for an individual to experience poverty for a year or two, get above the poverty line for an extended period . . . and then perhaps encounter another spell at some later point. Events like losing a job, having work hours cut back, experiencing a family split or developing a serious medical problem all have the potential to throw households into poverty.157

Data clearly bear out Rank’s point: Among persons entering poverty in the most recent period under review, forty-three percent remained poor for four months or less, 71.5 percent were poor for no more than a year, and only eighteen percent—or less than one in five—remained poor for more than twenty months in that period.158 This is not to diminish the hardship faced by such families during their time in poverty (and of course, as mentioned earlier, even those who are not officially poor face substantial hardship and difficulty making ends meet), but merely to suggest that the notion that poverty becomes a “way of life” or is intergenerational in nature is almost entirely false.

Modern-day Scrooges might respond that although most of the poor might not be trapped in a long-term underclass culture, certain segments of the poverty population are, and especially those who rely on various forms of public welfare such as cash aid or nutrition assistance. But that position is also belied by the available data. For instance, the rates at which persons avail themselves of cash aid or SNAP have fluctuated from nearly seventeen percent of the population in 1993 to only 12.5 percent by 2000, then back to seventeen percent by 2008 at the onset of the recession, and then twenty-three percent by 2011. Rates of welfare dependency, which are calculated using a bipartisan formula developed by Congress in 1994, have also jumped around from six percent in 1993 down to three percent by 2000, back to four percent by the beginning of the economic collapse, and then 5.2 percent by 2011.159 If rates of welfare receipt and dependence both fluctuate so dramatically, and especially in direct relation to the strength or weakness of the economy, it becomes difficult to believe in a widespread “culture of poverty” that plagues those at the bottom of the nation’s economic barrel.

Finally, the fact that most so-called welfare recipients don’t receive benefits for a long period of time also suggests that poverty and even welfare receipt itself are evidence not of cultural pathology so much as of economic conditions over which most Americans have little control. In the case of TANF, for instance, half of all persons who enter the program will be off of the rolls entirely within four months, and nearly eight in ten will leave the rolls within a year. As for SNAP recipients, fifty-two percent who come onto the program rolls will exit within a year, and two in three will be off the rolls within twenty months.160

Although conservatives claim that most welfare recipients receive benefits for long periods, they make this case by blatantly misinterpreting the available data. For instance, conservative advocacy groups will often point out that if you look at the TANF rolls at any given moment, the typical family receiving benefits will have been receiving TANF for roughly three years.161 Likewise, for SNAP, they will note that many remain recipients for a long period—an average of seven years for about half of all persons receiving SNAP at a given moment.162 But these statistics, while seemingly quite damning of those on assistance (or at least damning of the programs themselves for fostering long-term dependence, as per the conservative gospel), are thoroughly deceptive, and do not change the fact that most persons who come onto either the cash or nutrition assistance rolls will leave the programs in a short period. How can that be? How can most recipients get off the programs in a matter of months, and yet most persons on the programs at any given moment still be long-term recipients? It sounds impossible for both of these things to be true, but it isn’t. Both claims are accurate, but only one is relevant, and it isn’t the one upon which conservatives focus.

There is a large difference between the share of everyone who ever enters TANF or SNAP who turns out to be a long-term beneficiary, and the share of those on the rolls at any given moment who are long-term beneficiaries. By definition, if a person is on the rolls at any given moment, that same person cannot also be off the rolls at that same moment. So if you look at the TANF or SNAP rolls in August, for instance, anyone who came onto one or both programs in January and then was off by June would not be captured in the data. But anyone who was a long-term recipient and was going to be on the rolls for several years most certainly would be. What one will see at any given moment will be a disproportionate number of long-term recipients, not because most who enter the programs actually remain on them for a long time, but because anyone who is a long-term recipient is going to be captured in the data at whatever moment you take your statistical snapshot. But that says nothing about the effect of the programs themselves on recipients and their propensity to become dependent for long stretches of time.
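To see how large this snapshot bias can be, consider a purely hypothetical illustration; the numbers below are invented for clarity and are not drawn from any actual program data. Suppose 120 families enroll in a program over the course of a year: 110 of them stay for two months each, while 10 stay for five years. Looking at everyone who ever enters, long-term families are a small minority; looking at the rolls in any single month, they appear to dominate:

\[
\text{share of all entrants who are long-term} = \frac{10}{120} \approx 8\%,
\qquad
\text{share of a typical monthly snapshot} \approx \frac{10}{10 + 110 \times \tfrac{2}{12}} \approx 35\%.
\]

Same families, same program; only the vantage point changes. That, in essence, is the sleight of hand behind the statistics described above.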

As an analogy, consider the population of the nation’s jails. On the one hand, most people jailed over the course of a given year will be locked up for relatively minor offenses, and will be released in a fairly short period of time, while a much smaller share will be tried and convicted of serious crimes and sentenced to do time in prison. But if you looked at the population of persons in jail who were awaiting trial right now, or at any given point in the course of a year, a disproportionate share of these individuals would likely be persons who had been accused of serious crimes and were facing long terms. Not because most lawbreakers are hard-core violent offenders who will receive long sentences, but because anyone who is a hard-core violent offender is likely to be captured in the data at whatever moment you sample it. Minor offenders, on the other hand, will have cycled in and out of jail, or they will have been released on bail awaiting trial; as such, they will not be evident to the same extent.

It would be the same for hospitals. If you were to look at those currently occupying beds in your local hospital, a disproportionate share of them would be suffering from serious conditions from which they might not recover, and certainly not quickly. And yet, if you were to look at the entry log of all persons who cycled through the hospital in a year, most would have come in for far less serious conditions, at which point they would have been fixed up by doctors and then sent on their way. If you assessed the efficacy of the doctors based solely on the share of chronically ill patients remaining in a hospital bed at any given moment, the doctors would come off looking quite bad. But if you assessed their effectiveness by looking at the results obtained for all patients admitted, the hospital and its doctors would look far better. The same is true with welfare programs. The important point is that most people who enter the programs won’t stay long, and this is why we can say that such initiatives do not foster dependence. If welfare benefits did foster dependence, let alone a culture of dependency, we would expect that large numbers and perhaps the majority of such persons coming onto the rolls would find themselves trapped on them, unable or unwilling to leave, and that is simply not the case. In other words, when someone like Wisconsin congressman Glenn Grothman insists that “some people are arranging their life to be on [SNAP],” he is not only insulting the poor, he is also lying about them.163

The only way that someone could really believe that social welfare programs in the United States encourage dependence is by knowing almost nothing about the nation’s welfare apparatus, because given the paltry nature of most program benefits and how few people actually receive them, becoming dependent on the benefits of those programs is virtually impossible. For example, by September 2013 there were only about 3.6 million people in the entire country receiving cash welfare under the TANF program, down from fourteen million who received such benefits in the early 1990s. Of these 3.6 million TANF beneficiaries, 2.8 million, or seventy-eight percent, are children,164 and of those children who benefit from the program, three in four are under the age of twelve.165 The percentage of the nation’s adults currently receiving TANF sits at less than one-half of one percent (0.5 percent) of the adult population, and less than 1.5 percent of the total population (including child recipients) is on the program. Far from encouraging a nation of dependent takers, the nation’s primary cash aid effort reaches almost no one.166

In fact, not only do few Americans receive cash assistance, but most people living below the poverty line do not receive cash aid under TANF. Fewer than one in ten poor people in the United States receive cash assistance, down from over forty percent of the poor who did in the mid-1970s. Going back to the claim that welfare payments somehow create a permanent culture of poverty, how can such a claim be made when nine of ten poor people don’t even receive them? Even among the somewhat smaller group of poor persons who are eligible for aid—because not all of the poor meet the various requirements of the law—most do not receive assistance. Whereas nearly eighty percent of those families who were eligible for cash assistance in 1981 actually received aid, today only about a third of eligible families do.167 And how can such benefits engender dependence, when the value of those benefits remains so paltry? In 2012, families enrolled in TANF received an average monthly benefit of $387,168 and even the maximum monthly benefit for a family of three with no other income averaged only $436.169

Not only does TANF reach very few, and not only are typical benefits quite low, but those it reaches look nothing like the common stereotype. One in four TANF recipients lives in a family with at least one full-time worker, four in ten live in a family with at least one person working either part time or full time, and six in ten live in a family where someone either currently works or is actively involved in searching for employment.170 Among families where no adult recipient of TANF is in the labor force, a disproportionate share are households where the adult is at home caring for small children or is disabled. When it comes to households on TANF where no one is working, looking for work or disabled—in other words, households with an able-bodied adult who is “doing nothing” in the eyes of the modern-day Scrooges—only nine out of every one hundred recipient households fit this description. With about 1.2 million recipient households in all, this means only about 108,000 of them could even theoretically represent the image held by conservatives.171 At least ninety-one percent of such families fail to fit the image of those receiving welfare, yet the stereotype persists.

Among the most prevalent stereotypes of the poor, and especially those on public assistance, is that of the single mother who engages in irresponsible sexual activity, giving birth to children out of wedlock and thus increasing her monthly welfare stash. But it is simply not the case that welfare payments contribute to out-of-wedlock childbirth among single moms. Half of all families receiving cash welfare have only one child, and nearly eight in ten have only one or two.172 What’s more, and contrary to popular perception, there is no difference between the number of children in single-parent families that receive assistance and the number of kids in families that don’t: in both instances, the statistical average number of children in the home is just under two.173 Likewise, the typical household receiving SNAP benefits is composed of only two people—most often a parent and one child.174

As for commonly held racial stereotypes of welfare recipients, these too lie shattered before the facts. Although it is true that persons of color are disproportionately represented on the TANF rolls (because they are disproportionately poor, and poverty is what qualifies one for benefits), only about a third of recipients are black, while slightly less than a third are white and another thirty percent are Hispanic (all of them legally present in the country, by the way).175 If there are 3.6 million TANF beneficiaries and only twenty-two percent of these are adults, as noted previously, and if one-third of these are African American, this means that there are only about 261,000 black adults in the entire nation receiving cash welfare benefits, out of a population of more than twenty-nine million black adults in all: about nine-tenths of one percent (0.9 percent) of the overall adult black population. Considering that the “culture of poverty” is so often thought to be specifically a problem within the black community, the fact that not even one percent of the black adult population receives cash benefits renders such beliefs nothing short of preposterous. How a group of nearly forty million individuals (adults and kids combined) can be rendered culturally defective by programs that reach so few remains a mystery that those committed to the zombie lie feel no need to explain. Likewise, when it comes to SNAP, the common racial assumptions about who receives benefits (and how many black folks in particular do) are incredibly inaccurate. In 2013, forty-five percent of recipients were white, while thirty-one percent were black and nineteen percent were Latino/a.176

Much as with TANF, there is no evidence or logic to suggest that SNAP encourages dependence or cultural pathology. Although the rolls of SNAP beneficiaries did indeed grow dramatically after the onset of the recession in 2008, and have only recently begun to drop again177—down nearly two million persons in all from December 2012 to December 2014178—it is not the case that these benefits are sufficient to engender dependence on the part of those who receive them. First, benefit levels are hardly adequate to foster dependence. As of 2015, the average monthly SNAP allotment comes to only $128 per person, or approximately $4.27 per day, or $1.42 per meal.179 In 2015, the maximum monthly benefit for a family of three is estimated to be $511, or roughly $1.90 per person per meal.180 Second, to suggest that SNAP beneficiaries are rendered dependent by the program, or are part of some culture of poverty “unattached to work,” as Paul Ryan argues, is to take no note of the facts regarding the population of SNAP recipients. To begin with, forty-four percent of all persons receiving SNAP are children who are obviously not expected to be in the workforce earning their keep. Another nine percent are elderly and another twelve percent are non-elderly adults who are disabled. These groups alone represent roughly two-thirds of SNAP beneficiaries: people who are not expected to be working.181 Of those who are able-bodied adults, a little more than half already work or live in a household where another adult works, or they are actively looking for work but unable to find it.182 According to the most recent evidence, eighty-six percent of SNAP recipients are either children, disabled persons, persons who are already working, or persons who are unemployed but actively and consistently seeking a job.183 This means that the common image crafted by conservatives applies at most to one in seven persons benefiting from SNAP.

Despite the financial inadequacy of these two programs, conservatives continue to falsely insist that large numbers of families receive generous assistance from a huge basket of programs beyond these two. For instance, several FOX commentators recently twisted the findings of a Census Bureau report on various government benefits received by children so as to suggest that most American kids are now essentially wards of the state. According to the report, sixty-five percent of American children now live in a household that receives some form of public assistance during the course of a year. Roughly two-thirds of the nation’s youth come from families that receive benefits from SNAP, TANF, Medicaid, WIC, and/or the school lunch program.184 To FOX commentator and longtime actress Stacey Dash (whose most memorable role was, appropriately enough, in the film Clueless), such facts prove that government aid is “the new version of slavery.”185 Of course it is, because if you receive an EBT card or state-subsidized asthma medication it’s exactly like being whipped, raped and stripped of all legal rights.

Putting aside Dash’s absurd slavery analogy, the reaction from the right to the Census report could hardly be less honest. As for the raw facts, FOX more or less got them right. In 2011, approximately forty-eight million separate children lived in families receiving benefits from one or more of the above-mentioned programs, and this represented sixty-five percent of all children in the United States that year. Thirty-five million of these kids lived in homes where someone received benefits from the school lunch program; twenty-six million of them lived in homes where Medicaid benefits were utilized; seventeen million of them were in homes that received SNAP; six million were in homes that used WIC, and a little over two million were in homes that benefited from cash assistance under TANF. Although these numbers have come down a bit since then, for 2011 they are indeed accurate so far as they go. But this is roughly the point at which FOX proceeded to get everything else about the report horribly, horribly wrong.

To begin with, the period under review in the report stretches from 2008 to 2011. In other words, the report examines children’s family conditions during the worst economic downturn since the Great Depression. Even the data for 2011 reflect family conditions at the tail end of the recession, while the aftershocks of job loss and wage stagnation from the previous three years were still reverberating for millions. That the number of kids in families having to turn to various government benefits would increase during an economic crisis unparalleled in the past seventy years should hardly surprise anyone.

Second, and as the report’s author makes very clear, the primary challenges facing children—and particularly those in low- and moderate-income families—include disruptive life transitions such as parental unemployment or frequent moves to a new home. These kinds of transitions, as the report indicates, are highly correlated with having to rely on one or another government program. As it turns out, forty-two percent of children in poor families (and about a third of all kids in the nation) moved at least once during the period under review, and forty-four percent of poor kids (and about a third of all children) had at least one parent who experienced a change in their job situation during the same period. This matters because, as the author notes:

Parents who have steady employment may be better able to provide consistent economic support, while parents who go through many job changes may have unpredictable work schedules and irregular income.186

In other words, whatever the raw numbers might appear to show, they suggest that use of these programs is less about culturally engendered dependence on benefits and more about serious and unexpected life disruptions that often befall persons on the economic margins, especially during an extraordinary economic recession.

Third, to argue that the sixty-five percent figure proves the so-called welfare state is creating a self-perpetuating culture of poverty (the standard right-wing interpretation) ignores the fact that most of the kids reaping the benefits from the listed programs are not officially poor; they qualify for benefits because their family incomes, though above the poverty line, still fall below the programs’ eligibility ceilings. There were 16.6 million children living in “poor” families in America in 2011, for instance, but 47.9 million kids living in families receiving benefits from these programs that year. This means that two-thirds of the kids whose families benefit from these efforts are not living in poor homes, which in turn means that they will likely be in homes with parents who earn income, but not enough to make it without a little help. Many others live in homes that are poor but still have earned income from work. How the use of these programs can be blamed for fostering “dependence” or discouraging work when most of the beneficiaries live in homes with earned income is a mystery left unexamined by conservative hysterics.

So, for instance, let’s look at the SNAP program. According to the most recent data from 2013, fifty-two percent of SNAP households with kids have earned income from work, and of those that don’t, a large number have parents who are disabled. In fact, it is increasingly likely for SNAP households to have earned income, and less likely for them to rely on other forms of assistance, suggesting that receipt of this program’s benefits has nothing to do with dependence, but rather reflects the realities of low-wage work in a faltering economy. For instance, SNAP households are fifty percent more likely to have income from work today than they were in 1989, while the likelihood that they receive cash welfare has plummeted by eighty-five percent, from forty-two percent of such households to only 6.5 percent today.187

Or consider the school lunch program. According to the report that so concerns FOX, this is the program that appears to benefit the most children, with nearly half of all kids living in homes that benefit from this one government effort. But there are three huge problems with the way conservatives are reading the data. To begin with, eligibility for free or reduced-price lunch goes up to 185 percent of the poverty line, which means that many beneficiaries of this program are not poor, and thus reside in families that are hardly dependent on welfare benefits; rather, they work, albeit at jobs that don’t pay enough to bring them above the eligibility limits. How a program can be rendering people dependent when they in fact work hard every day is again left unexplained by the right.

Second, according to the most recent data, nearly nine million kids who are counted as benefiting from the school lunch program—and who represent nearly thirty percent of current recipients—are called “full-pay” beneficiaries. These kids come from families whose income is high enough that they don’t qualify for free meals, or even officially reduced-price lunches, but they are still receiving a slight price break relative to the actual cost of the food provided and are thus counted in the data as beneficiaries of the program. They may not even know that they’re benefiting. They don’t have to fill out paperwork or apply; rather, they just receive a slight subsidy for the cafeteria meals they purchase, and are therefore counted just like folks who get their meals for free. Clearly, even under the most absurd interpretation, these 8.7 million recipients cannot be considered “dependent.”188

Third, many children who receive benefits from the school lunch program only do so because they live in high-poverty school districts where all students are automatically enrolled in the program (even if they aren’t poor, and no matter how hard their parents work)—a policy implemented so as to reduce administrative costs, thereby allowing the program to operate more cheaply and efficiently. While we could perhaps end automatic enrollment and make all parents prove their low income in order to qualify for benefits, such a change would add to the costs of the program by increasing the kinds of bureaucratic paper-shuffling that the American right normally opposes.

In all, when you consider those kids who receive school lunch benefits but are a) not poor and live in homes with a parent or parents who work; b) poor but whose parent or parents work; c) not poor at all but who benefit from the small subsidy provided even to “full-pay” recipients; or d) children who benefit automatically just because they attend a high-poverty school but who may not be poor themselves, there is little doubt that the vast majority of the children and families claimed as beneficiaries are not caught in a cycle of dependence, and that none of them are being “enslaved” by the program. The need is real, but the dependence is not.

As for Medicaid, the assumption that families with kids who make use of this program are slackers who would rather let the government take care of them than work for a living couldn’t be further from the truth. Fully eighty-six percent of children who receive benefits through Medicaid or the supplement to Medicaid known as the Children’s Health Insurance Program (CHIP) come from families where at least one adult works.189 Sadly, despite their earned income and, in many cases, even middle-class status, families in high-cost-of-living areas where health care inflation has been especially onerous remain eligible for benefits and often have to make use of them. But doing so hardly suggests that the families are suffering from a debilitating mentality of dependence, nor that their children are being taught to rely on the state. The parents in these cases are doing their best; they’re working and doing everything conservatives would have them do. Unfortunately, their earnings have been insufficient to cover the spiraling costs of health care.

And when it comes to the WIC program for postpartum moms and their infants and toddlers, forty-three percent of beneficiaries live above the poverty line due to earned income, but still qualify for assistance. If nearly half of beneficiaries aren’t even poor because they receive money from employment, how can the program be seen as encouraging dependence and laziness? And even for those beneficiaries who are poor, how can a program that provides assistance to children at special risk for nutritional deficiencies (like kids born prematurely or with particularly low birth weight) be ridiculed as an effort that fosters a culture of poverty?

Naturally, FOX is hardly alone in claiming that poor families are receiving massive government benefits. A recent study by the Cato Institute claims that the typical welfare family receives such huge handouts in most states that adults in these families have little incentive to work. According to Cato, welfare benefits make these families better off than they would be if one of their members were to get a full-time job.190 But the report, much like an earlier one they issued in the mid-1990s, is entirely dishonest when it comes to how much a normal “welfare” family receives in benefits.

For instance, in order to claim that welfare benefits pay more than minimum wage in thirty-five states and are equivalent to a $15-an-hour job in thirteen of these, they divide the total amount spent on seven different welfare programs by the number of poor families with no currently employed member, and then assume that the resulting number is the average amount received by each such poor family. But this is dishonest on multiple levels. To begin with, few if any poor families receive benefits from every single one of the seven programs Cato references: TANF, SNAP, WIC, Medicaid, housing assistance, subsidies for utilities, and emergency food aid. As such, to presume that the typical “welfare” family takes home a basket of goodies anywhere near the amount Cato estimates is preposterous. For instance, as Cato admits in the report (though studiously finessing the implications of this fact), only about fourteen percent of TANF recipients also receive housing assistance;191 and although most families receiving support from TANF receive SNAP benefits, very few persons who receive SNAP live in households where TANF benefits are received. According to the most recent data, only 6.5 percent of households that receive SNAP benefits also receive cash welfare under TANF.192 To presume a common welfare basket involving both of these programs’ benefits (to say nothing of the others) is to grossly distort the picture for most persons who rely at one time or another on public assistance. Additionally, only eleven percent of SNAP recipients receive benefits from the WIC nutrition program for new moms and infants,193 and only a little more than one in four SNAP beneficiaries receive any form of housing subsidy or public housing benefit,194 indicating that Cato’s assumptions about what a “typical” welfare family receives are completely off the mark. Even more fatal for Cato’s principal claim—the idea that welfare programs discourage work among the poor because of their generosity—is the fact that since 1989 the percentage of SNAP recipient households also receiving cash has plummeted from forty-two percent of all households to only 6.5 percent, while the share receiving income from work has increased substantially, from only twenty percent of recipient households in 1989 to thirty-one percent with earnings now.195 This being so, it is disingenuous to claim that SNAP discourages work, since SNAP families are working more than ever and relying on cash aid less than in the past.

Additionally, Cato ignores the fact that many of the benefits it references are received by persons who are not poor and who currently work, or those who work but in spite of their employment remain below the poverty line. SNAP expenditures, utility assistance expenditures, and Medicaid benefits are not all consumed by the unemployed poor; thus, simply dividing the amount spent on these efforts by the number of poor families with no member in the workforce will result in a gross overestimation of the amount being received by these kinds of families and overstate the supposed “disincentive” that these programs create for seeking employment. Given that large numbers of SNAP beneficiaries live in homes with at least one working family member, the entire idea of such a program creating a disincentive to employment is debunked.
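A simple, purely hypothetical illustration shows how much this denominator trick can inflate the figures; the dollar amounts and household counts below are invented for clarity and are not taken from Cato’s report or from any program data. Suppose a state spends $120 million a year on a benefit that reaches 100,000 households, only 40,000 of which have no working member. Dividing total spending by all recipient households yields the true average benefit, while dividing it only by the non-working poor families, in the manner described above, inflates the apparent benefit dramatically:

\[
\frac{\$120\ \text{million}}{100{,}000\ \text{recipient households}} = \$1{,}200
\qquad\text{versus}\qquad
\frac{\$120\ \text{million}}{40{,}000\ \text{non-working poor families}} = \$3{,}000.
\]

And that inflation occurs before one even asks whether any single family actually receives benefits from all seven programs at once.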

Third, the methodology the Cato analysts use to calculate the value of non-cash benefits is so dubious as to suggest they concocted it with deliberate deception in mind. For instance, they presume that the value of the rent subsidy provided to those who receive housing benefits is equal to the average fair market rent in a recipient’s state. But those who receive rent subsidies or live in public housing do not receive benefits equal to the average fair market rent. At most, the value of the benefit received should be considered relative to the typical rent at the lowest end of the rental market, since it is that kind of housing that poor people would be accessing in the absence of rental aid. By calculating the benefit relative to the average cost of housing in a state, Cato inflates the value of assistance by comparing it to middle-class rental housing. They are basically assuming that in the absence of public housing subsidies the poor and unemployed would find jobs that would allow them to afford an apartment in the mid-range of their local housing market, which is like saying that if poor people didn’t receive housing subsidies they would suddenly become middle class—an idea so self-evidently preposterous as to be hardly worth serious consideration.

Cato also assumes that beneficiaries of housing assistance receive aid equal to the full cost of that housing, but this too is inaccurate. Persons in public housing or who receive Section 8 vouchers are expected to use roughly thirty percent of their income to pay rent. Although the unemployed may not contribute anything toward rent in those months when they are jobless, for many in public housing who work part time, roughly one-third of those earnings will be paid in rent. Since persons who work and receive housing subsidies are not counted in Cato’s analysis—rather, all housing benefits are presumed to go to persons who do not work at all—Cato overstates the amount of the per-family subsidy, and especially how much is received by the supposedly “idle” poor. Cato also grossly distorts the percentage of persons who benefit from housing subsidies while also receiving benefits from other programs. Only half of those who receive rent assistance or other public housing benefits receive SNAP in a given month, while less than eight percent benefit from WIC, a far cry from Cato’s accusation that housing aid comes along with a plethora of other programs.196

As for TANF, Cato’s claims are even more preposterous: their analysis presumes that each poor family eligible for benefits actually receives those benefits, but most of the poor do not. Only one in ten poor people, and only a little more than one in four poor families, receive cash welfare benefits, meaning that such benefits can hardly be a substantial cause of those families’ poverty or joblessness.197 If you don’t receive a benefit, it’s pretty tough to conclude that said benefit is the reason you aren’t currently working.

When it comes to Medicaid, the methodological dishonesty continues. Cato calculates the value of Medicaid benefits by comparing the per-family costs of the program to the premiums a person would have to pay under a typical private insurance policy providing the same coverage. In short, Cato assumes that in the absence of Medicaid the poor would use their own money to purchase comparable coverage, but such a claim is ridiculous. It’s not as if people struggling with poverty are such shrewd and calculating mathematicians as to sit down and carefully calculate the cost of private care as opposed to public care and then simply opt for the latter so they can keep their own money for lottery tickets. In the absence of Medicaid, the poor would simply receive no health care at all, or rely on emergency room care when they became ill. Medicaid does not provide the poor with an income boost relative to what they would have in its absence; it does not allow them to keep more money in their own pockets that they would otherwise have spent on health care; rather, it provides them with needed medicine and other health services that they would otherwise likely do without. Not to mention, payments are made to health care providers and not the poor themselves: it is hardly fair to accuse those who benefit from heart surgery or blood pressure medicine of being enriched by welfare. One has to get sick in order to use the benefits, which is hardly something the poor—however crafty some may believe them to be—are likely to do on purpose.

And, of course, many of the Medicaid benefits that Cato presumes are going to the jobless poor are going to persons who, while poor enough to qualify for benefits, are actually in working families. As mentioned previously, nearly nine in ten low-income children who receive benefits through Medicaid or the Children’s Health Insurance Program are in families where at least one adult works.198 By ignoring these beneficiaries in employed homes, Cato overstates the value of benefits to each jobless family on assistance, thereby vastly exaggerating the supposed work disincentive provided by the program. Roughly half of families receiving benefits from Medicaid or CHIP are above 130 percent of the poverty line, but qualify for assistance because the cost of living in their communities is especially high and their wages make it impossible to afford health care on the private market. Medicaid cannot be blamed for discouraging work or locking people into poverty if half of recipients are earning income above the poverty line. Finally, even those who benefit from Medicaid often do not receive benefits from other programs, let alone all of the programs specified by Cato. For instance, less than forty percent of Medicaid beneficiaries also receive SNAP, and only ten percent receive benefits from WIC.199

Regarding WIC, Cato notes that because about sixty percent of eligible families participate in the program, it is legitimate to include WIC benefits in the calculation of a typical welfare benefit package. But just because six in ten families eligible for this one program receive its benefits does not mean that the typical family in need receives the benefits of this program as well as all the rest Cato mentions, thereby providing a total benefit package sufficient to beat working for a living. Nor does it prove that all of those who are eligible for WIC are the jobless poor, and that therefore the program’s benefits are all accruing to such persons and providing a possible work disincentive. Indeed, most of those who benefit from the other programs in Cato’s presumptive “welfare basket” do not receive WIC, and most who receive WIC do not benefit from the other programs. Only eight of one hundred recipient families also benefit from TANF and only about one in four TANF beneficiaries also receive WIC (even then, only for a brief time, as WIC benefits are available only to moms and their kids under the age of five).200 Only forty percent of WIC beneficiaries also receive benefits under SNAP, meaning that the clear majority do not.201 And since many low-income working moms qualify for WIC for themselves and their kids, it is also unfair to calculate the average amount spent on this program as if it were all spent on persons in poor and unemployed families. More than forty percent of WIC families have incomes that are at least thirty percent above the poverty line, which means they have some kind of earned income. To blame such a program for discouraging work and locking people in poverty when some forty-three percent of its beneficiaries are above poverty and have income seems especially disingenuous.202

Overall, the evidence clearly negates any claim that government benefits for those in need discourage work. After all, according to the most recent evidence, three-quarters of all persons enrolled in major government benefit programs are in working families, and these families consume roughly two-thirds of all program benefit dollars.203 As such, to suggest that government aid saps the work effort of its recipients flies in the face of the facts, however commonly it might be believed.

But however dishonest the Cato Institute might be, its deception is minor league compared to that of the far more professional liars who populate the Heritage Foundation. In their attempt to discredit government antipoverty efforts, Heritage publishes an “Index of Dependence on Government,” which implies that the typical American has been rendered dependent on tens of thousands of dollars in annual cash and prizes from the state. They also have released several reports over the years professing to demonstrate how many trillions of dollars have been spent on antipoverty efforts, to no real effect. Yet Heritage’s arguments are thoroughly dishonest, as anyone who actually reads the reports can readily see. For instance, when it comes to the antipoverty programs tallied by Heritage, their list includes things that very few people would consider welfare.204 Among the programs that Heritage throws into the mix when bashing programs for the poor—and when claiming that trillions of dollars have been wasted on these efforts—one finds not only things like cash assistance, food stamps and housing programs, but also:

Adoption assistance (typically paid to middle-class families who adopt neglected children);

Foster care assistance (also typically paid to middle-class families who foster);

Disability payments for disabled children;

Emergency food and shelter assistance;

Community health centers (where payments are made to the providers, not the poor);

The nutritional program for the elderly;

Rural housing insurance;

The Title XX Block Grant (intended mostly to prevent child abuse);

The Social Service Program for Refugees;

Head Start (a popular and successful pre-school readiness program);

Job training programs (intended to reduce welfare dependence);

Pell Grants for college (also intended to boost future employability and wages, and reduce welfare dependence);

AmeriCorps and other volunteer initiatives; and

The Earned Income Tax Credit.

Counting the Earned Income Tax Credit in the litany of “dependency-inducing” anti-poverty programs is especially egregious, since conservatives like Ronald Reagan specifically supported the EITC because it reduces dependence on other means-tested programs like cash and food assistance. The EITC rewards work by subsidizing earned income with tax refunds at low wage levels, and you can’t receive the benefits if you don’t work. By considering the EITC a form of welfare, Heritage inverts the meaning of the word and demonstrates its willingness to file every government program that benefits the poor under the rubric of the welfare state.

Of course, the bulk of the “massive” increase in welfare spending about which Heritage is so animated is in Medicaid, but two-thirds of Medicaid spending is for elderly people or the disabled, neither of whom even the most cold-hearted of modern Scrooges (one hopes) would expect to be in the workforce “earning their keep.”205 Twenty-one percent of Medicaid spending is for poor children, while only fifteen percent of program benefits are going to able-bodied adults.206 So to calculate the size of the supposedly massive welfare state by throwing in the incredibly expensive health care expenditures on elderly, blind and disabled folks is to mislead the public about the amount actually being spent on the able-bodied poor.

Likewise, in its “Index of Dependence on Government,” Heritage basically considers all government programs other than the military and K-12 education to be fair game for accusations of dependence-inducement.207 So the Index authors include even Social Security and Medicare for the elderly as programs that foster dependence. The authors of the report romanticize the days when poor people (including the elderly) just relied on their families to care for them, or perhaps churches or “mutual aid societies.” That such channels clearly weren’t sufficient—large numbers of the elderly were poor in those days, which is precisely why Social Security was created—seems not to faze them. It’s the same with housing: Heritage argues that the old days of private and religious groups providing housing (like orphanages, or Boy’s Town, perhaps) were better than government-provided housing benefits. This is the vision the right offers to poor people: relying on some kindly old priest and his group home to take care of you, and if for whatever reason they can’t manage it, that’s too bad. They also argue that government-provided health care under Medicare and Medicaid has destroyed the wonderful private institutions that used to provide for people in need. But what evidence is there that such institutions ever covered the cost of high-dollar treatments like chemotherapy, radiation or organ transplants? Of course, they never did. Heritage seems to think people only get colds or the flu or chickenpox, and that armies of kindly old family doctors will gladly, out of the goodness of their hearts, provide care to them for free. But even if that were true for some, how would that address more long-term and costly care for serious conditions? It wouldn’t, of course; rather, private, for-profit providers would end up refusing expensive care to those who couldn’t afford it. It would be rationed care based solely on ability to pay—death panels, if you will, on which the panelists would be not doctors at all (and surely not the government), but insurance company representatives and hospital executives looking out for the bottom line.

Overall, the Index is calculated by throwing in pretty much every kind of government program imaginable and proclaiming them all guilty of making Americans dependent on the state. Among the programs deemed so destructive to personal independence, Heritage includes consumer and occupational safety spending, disease control funding, children and family services spending, all job training programs, disability insurance, agricultural research, and disaster relief. Even those who criticize programs like TANF, public housing or SNAP should be able to see that any measure of “Dependence on Government” that includes these things, as well as Social Security, Medicare and student loans, cannot be considered serious scholarship. It speaks to the ideological dishonesty of the right and those who seek to undo the various programs of the national safety net, and it should call into question their attacks on all programs, including the easier and more vulnerable targets like cash, food and housing aid.

The rhetoric of the culture of cruelty has been especially vicious of late with regard to the long-term unemployed and those who have been forced to rely on unemployment insurance. Thus, even those who have a solid work history and lost their jobs through no fault of their own (both conditions that must be met in order to qualify for unemployment insurance) are increasingly incurring the wrath of the right. According to FOX commentators Stephen Moore and Eric Bolling, unemployment insurance is “like a paid vacation for people,”208 which discourages the unemployed from looking for work. In a column for the Wall Street Journal in early 2013, Holman Jenkins claimed that unemployment insurance and Social Security disability payments encourage those who receive them to rely on “someone else to be productive.”209 Along those same lines, actor, commercial pitchman and onetime political speechwriter Ben Stein—apparently confusing the larger American public with Ferris Bueller, the main character in the only movie for which he is remembered—says that because of the availability of unemployment insurance, lots of unemployed people “would prefer not to go to work.”210 Rush Limbaugh concurs, noting that “extended unemployment benefits do nothing but incentivize people not to look for work,”211 and that by advocating an extension of such benefits, President Obama is “in the process of creating and building a permanent underclass.”212

Yet, contrary to right-wing belief, there is little evidence (or logic) to suggest that the availability of unemployment insurance contributes to long-term joblessness, or that the elimination of those benefits will suddenly lead the long-term unemployed to find jobs. In fact, a recent study by the Joint Economic Committee of Congress found that those persons who are out of work and receiving unemployment insurance spend more time looking for work than those who are unemployed but not receiving assistance.213 Indeed, given that one can only qualify for unemployment benefits if one is actively searching for work, whatever relationship exists between unemployment insurance and increased joblessness is largely proof that the program is working, not failing. After all, if people receiving benefits remain in the job market (rather than dropping out altogether), and thereby are counted as unemployed, it’s true that the unemployment rate could be marginally higher than it would have been had they simply stopped looking for work (at which point they would not be captured in the unemployment data). But surely it would be better to provide incentives to stay in the workforce and look for a job while receiving unemployment insurance, than for those out of work to simply give up hope, even if giving up hope managed to knock the unemployment rate down a few points.

It appears from the bulk of available evidence that extending unemployment benefits during an economic downturn results in more jobs being created rather than destroyed. Because these benefits are spent by their recipients, they serve to stimulate the economy, and according to the Congressional Budget Office, can produce as many as 300,000 jobs nationwide thanks to that stimulus.214 That conservatives would attack the concept of unemployment insurance, even while more than three in four unemployed persons don’t even receive any,215 suggests the extent to which the right will blame the have-nots for economic problems they clearly did not cause.

The Real Reasons for Unemployment, Poverty and Welfare

Ultimately, it isn’t a culture of poverty or individual irresponsibility that explains why a person is underemployed, unable to make ends meet, or in need of government assistance. It isn’t a lack of values or laziness. People are out of work because at any given moment there are rarely enough jobs available for all who are searching for one. People fall below the poverty line because they either can’t find work, or do work but earn wages that leave them at subsistence level. And people find themselves turning to government assistance because without work, or with only low-wage work, certain benefits from health care to housing subsidies to nutrition assistance become critical lifelines.

Far from not wanting to work, the unemployed desperately seek jobs; so much so in fact that the competition to get hired at Walmart can often prove more daunting than the competition to get into an Ivy League college. For instance, when Walmart opened a new store in Washington, D.C. in 2013, 23,000 people filled out applications in hopes of landing one of only six hundred jobs: an acceptance rate of 2.6 percent. By contrast, the overall Ivy League admissions rate is nearly nine percent, and even at Harvard about five percent of applicants manage to get in.216 Likewise, only about six of every one hundred applicants for jobs at McDonalds get hired, suggesting that the problem is not a lack of willingness to work, but rather an insufficient number of positions for all who need and are seeking employment.217

Beyond the merely anecdotal, we know this is the problem in the larger economy. In June 2014 there were 4.7 million job openings, but there were 9.5 million people unemployed and actively searching for work.218 In other words, for every job opening there were two people looking for employment, which is to say that no matter the work ethic of the unemployed and no matter their drive, determination, skills, values, sobriety, intelligence or anything else, half of all job seekers could not possibly find work. Although this is an improvement from 2013, when the ratio of job seekers to jobs was three to one, and far better than during the height of the recession when the ratio reached as high as seven to one,219 it nonetheless suggests that the economy is not producing enough employment for all who want and need work. Considering that there are millions more who have grown so discouraged that they’ve given up looking for work altogether at this point, and who are no longer technically in the job market (nor counted in unemployment figures), the actual gap between persons needing work and available jobs is no doubt far worse than the official two-to-one ratio.

Among the challenges facing the unemployed, and especially the long-term unemployed, is discrimination. Presuming them less competent or perhaps too desperate, employers are far less likely to provide interviews to long-term unemployed job seekers, no matter their qualifications. A recent study for the Boston Federal Reserve found that when qualifications and experience are otherwise similar, persons who have been out of work for longer periods of time are operating at a significant disadvantage. In the study, 4,800 résumés were sent out in response to six hundred job openings. Some résumés were of actual people, while others were fabricated to represent unemployed persons with various work histories and qualifications. For the fabricated résumés, the study’s author deliberately varied certain factors such as how long the individual had been out of work, how often they had moved between jobs, and whether they had specific experience in the field for which they were applying.220 Although the study confirmed that employers tend to favor those with industry-specific experience and stable work histories without too much job-hopping, these factors turned out to be far less important than the length of time a person had been unemployed. Applicants with industry-specific experience were less likely to get called back than those without such experience, if the more qualified applicant had been out of work for six months or longer while the less qualified person had only been out of work for a short or medium period. People without relevant experience but whose spell of unemployment has been short are about three times as likely to be called back for an interview as those with relevant experience who have been unemployed for six months or more.221 In other words, when a person has been out of work for six months or longer, employers are simply screening them out regardless of experience and qualifications.222 Unless specific measures are established to bar discrimination against the long-term unemployed, or tax incentives for their hiring, or direct hiring of such persons by the government for new jobs programs, the economic position of the long-term jobless is unlikely to improve. Unfortunately, any attempt to get employers to stop discriminating against the long-term unemployed is derided by the modern-day Scrooges as “punishing achievers,” in the words of Rush Limbaugh.223

While reliance on benefits says little or nothing about the values of the poor, the need so many people have for these programs says quite a bit about the values of those for whom beneficiaries work, and from whom they receive such paltry wages as to leave them eligible for assistance. Companies like Walmart and McDonalds pay their employees so little that workers at the companies often comprise the biggest group of Medicaid and SNAP beneficiaries in a given community,224 and Walmart stores regularly set up food donation bins where they encourage their employees to buy and donate food for other employees who don’t have enough to eat!225 By encouraging their employees to apply for public assistance, companies like McDonalds and Walmart get the taxpayers to subsidize them by shifting the burden of supporting workers from employers to the public. As many as eight in ten Walmart store associates rely on SNAP benefits so as to subsidize their paltry wages, and overall, taxpayers foot the bill for more than $6 billion in various welfare benefits for Walmart employees.226 Although Walmart recently announced that it would boost wages for about 500,000 of its employees over the next two years—an issue to which we will return in a bit—many of these workers will remain eligible for public assistance even after the wage boost.

Cincinnati Walmart associate La’Randa Jackson’s story, sadly, is all too typical. Although her family receives SNAP benefits, it’s rarely enough to last the month. “I skip a lot of meals,” she says. “The most important thing is food for the babies, then my younger brothers. Then, if there’s enough, my mom and I eat.” Sometimes she manages to get some extra food from the emergency food bank in town. As Jackson explains it, “The lady who works there knows we have babies at home.”227 Adding insult to injury, not only does Walmart pay such paltry wages that its associates are forced to turn to SNAP for food, but even worse, Walmart is the nation’s largest food stamp redeemer as well. In other words, in many cases, their own employees are buying food at Walmart with the SNAP benefits they only have to rely on because their employer pays them so badly. This means that Walmart is making money on both ends: by paying poverty-level wages and then selling their own employees food, subsidized by the taxpayers. Annually, almost one in five dollars spent with SNAP benefits is spent at Walmart, bringing in approximately $13.5 billion in additional sales for the company.228

The picture is similar in the fast food industry. Current estimates suggest that fast-food workers and their families receive about $7 billion in public assistance of one form or another each year: a massive subsidy to low-wage employers, paid for by the taxpayers.229 Indeed, the families of fast-food workers are about twice as likely as persons in the general population to rely on various forms of public assistance, with slightly more than half of such families receiving benefits from Medicaid, SNAP, TANF and/or the refundable portion of the Earned Income Tax Credit (EITC). Precisely because fast-food wages are so low (irrespective of the massive profits made by companies in that sector), the typical family of a fast-food employee receives nearly $8,000 annually in Medicaid benefits, and a few thousand more in food stamp and EITC benefits.230

Perhaps even more disturbing, given the mega-millions received by top bank executives (and the industry itself, due to recent government bailouts), a distressing number of bank tellers—perhaps the lowest rung in the industry’s workforce, but nonetheless the one with which the public has the most interaction—are forced to rely on public assistance due to low wages. Nationally, nearly a third of all bank tellers’ families benefit from at least one government aid program, from the Earned Income Tax Credit to Medicaid to SNAP, amounting to approximately $1 billion in total benefits. In New York, which is the fulcrum of the banking industry (and where top bankers receive six-figure bonuses as a matter of course), tellers make less than $13 per hour on average, forcing roughly forty percent of these workers’ families to rely on one or another form of government aid.231

But rather than criticize companies for the inadequate pay offered to those who do the work that makes their profits possible, the wealthy economic minority and those on the right suggest that it is the value system of those low-wage workers that is to blame for their condition. Rather than advocating minimum wage hikes or livable wage legislation that would boost pay levels and purchasing power, thereby reducing the amount of public benefits received by these families, the rich oppose such wage boosts and advocate cutting the very programs that keep the families in the fast-food industry above complete destitution. Some go so far as to threaten workers with job loss if a minimum wage hike is successful, as did the Employment Policies Institute (a lobbying arm of the restaurant industry in California), which recently took out billboards in San Francisco, threatening to replace workers with iPads should the state’s minimum wage be raised.232

Attempts by low-wage workers to organize for higher wages have been openly derided by FOX host Charles Payne, who seems especially upset that workers fighting for a livable wage would compare themselves to the foot soldiers of the civil rights movement. When FOX’s Steve Doocy recently suggested that such a comparison was “insulting,” Payne agreed, adding: “It’s beyond the pale. Here is one of those things that insults almost everybody. Obviously, it would insult anyone who was involved in the civil rights movement, and also the workers.”233 Putting aside the bizarre notion that workers pushing for higher wages are somehow insulting themselves, it is worth noting that absolutely no one who was involved in the civil rights movement (which would exclude every prominent conservative in America, without a single exception) has objected to the analogy between the fight for decent wages and the fight for civil rights. Congressman John Lewis, who was repeatedly arrested and beaten in the struggle for racial equity, has not risen to the floor of the House of Representatives to denounce living wage activists; far from it, he actively supports them. Which makes sense, given the views of movement leader Martin Luther King Jr. on matters of economic justice for working people, which included strong support for labor unions, for higher minimum wages, and for guaranteed employment. In 1966, when addressing pending legislation to raise the minimum wage, King insisted, “A living wage should be the right of all working Americans.” Indeed, he claimed that there was “no more crucial civil rights issue facing Congress” than the need to raise the minimum wage and extend its coverage to entire classes of workers, like farmworkers, to whom it did not then (and in most cases still does not) apply.234

For the modern-day Scrooges, the answer to any call for wage hikes at the bottom, or safety net protections, is essentially the equivalent of “Bah, Humbug!” To such persons as these, and their conservative mouthpieces in the media, attempts by workers to boost their pay, improve their work conditions, or mend the tattered safety net for the benefit of millions of families is little more than confiscation—the act of takers living off the work of the makers in society. Fundamentally, hostility to the poor and programs to support them comes down to an all-too-common belief that if those in poverty would simply try harder they wouldn’t be in the shape they are. The attitude was expressed with no sense of misgiving in a recent Daily Show interview during which FOX Business commentator Todd Wilemon exhorted, “If you’re poor, just stop being poor,” thereby offering up the standard aristocratic advice to those who struggle to keep their heads above water.235 In other words, it’s your fault, so just stop it already. Get it together. Get a job. Be more like rich people.

Loving the One Percent: The Valorization of the Rich and Powerful

Which brings us to perhaps the most significant and telling example of modern Scroogism in recent years, and one of the pinnacle moments of the contemporary culture of cruelty: namely, the statement made by GOP presidential candidate Mitt Romney about the difference between the forty-seven percent of Americans who are essentially lazy, and the rest of us. In May 2012, at a private fundraiser, Romney issued his infamous “forty-seven percent” remark to those assembled, a statement that would go public a few months later when a video of the comments was leaked to the press. As Romney put it:

There are forty-seven percent of the people who will vote for the president no matter what . . . forty-seven percent who are with him, who are dependent upon government, who believe they are victims, who believe the government has a responsibility to care for them, who believe that they are entitled to health care, to food, to housing, to you-name-it . . . the government should give it to them. And they will vote for this president no matter what. . . . These are people who pay no income tax. Forty-seven percent of Americans pay no income tax. So our message of low taxes doesn’t connect. . . . My job is not to worry about those people. I’ll never convince them they should take personal responsibility and care for their lives.236

In other words, to the standard bearer of the Republican Party roughly half of the American people are “dependent on government,” suffer from an entitlement mentality, and refuse to take responsibility for their lives. For Romney’s running mate, Congressman Paul Ryan, the numbers are even worse. According to statements made by Ryan in 2010, fully six in ten Americans are “takers” rather than “makers” because they receive some form of government benefit, from Medicare health coverage to unemployment insurance to nutrition assistance or the Earned Income Tax Credit, while not paying income taxes.237

Of Makers and Takers: Taxes, Public Subsidies and the Real Face of Entitlement

Ultimately, the thinking on display in the comments of both Romney and Ryan is clear: the poor are simply different from the rich in terms of values, work ethic and talent. While the latter create jobs and add value to the larger society, the former simply live off the more productive. Rather than criticize the wealthy, the poor and working class should be thanking them for all the good they do, or so the thinking goes. According to billionaire real estate investor Sam Zell, “the one percent work harder,” and rather than criticize them, everyone else should emulate them.238 Likewise, Forbes columnist Harry Binswanger has said in all seriousness that anyone “who earns a million dollars or more should be exempt from all income taxes,” and because even that tax rate of zero is insufficient thanks for all the good they do for the world, “to augment the tax exemption, in an annual public ceremony, the year’s top earner should be awarded the Congressional Medal of Honor.”239

To question the prerogatives of the wealthy (let alone to actually advocate policies that might shrink the disparities between the wealthy economic minority and the rest of us) is to invite howls of protest that one is essentially the equivalent of a Nazi looking to march the rich into the ovens. To wit, the recent claim by venture capitalist Tom Perkins of San Francisco that those who fight for greater equality are essentially gearing up for their own “progressive Kristallnacht,”240 reminiscent of what Hitler’s legions launched against the Jews of Germany. Perkins, a billionaire who likes to brag about his $300,000 watch, is worried about poor people literally killing off the rich, which is ironic since it is he, a rich guy, who has actually been convicted of killing someone.241 In 1996 while racing his yacht off the coast of France, Perkins collided with a smaller boat, killing a doctor in the process; Perkins was tried and convicted of manslaughter but only paid $10,000 as punishment.242

Although his remarks about the impending slaughter of the oligarchs, published in the letters section of the Wall Street Journal, provoked howls of outrage,243 Perkins defended himself by noting that while the Nazi imagery was perhaps unfortunate, the underlying argument was true: demonizing the rich is no different from demonizing any other minority.244 It’s a position that the editorial page of the Wall Street Journal then ratified in the wake of the controversy,245 as did FOX Business contributor Charles Payne, who defended Perkins’s comments by claiming that the wealthy have a justified rage at those who would question their wealth, and that Perkins’s predictions of a progressive Kristallnacht were possibly overdrawn but not by much. As Payne explained it, in his typically grammar- and vocabulary-challenged style (neither of which serves as a job disqualifier at FOX):

There is a war on success. It hasn’t been violent, but that doesn’t mean that it can’t, or that it won’t one day (sic). . . . We can snicker at Tom Perkins and his poor analogy, or we can look around and understand that his fears may one day spread to many others because the kind of anger based on envy can become uncontrollable. It can ravish (sic) an individual or a country once its spreads. Coupled with failed economic policy it can destroy. I don’t think we should wait for people to be dragged out of their Park Avenue homes before we see how dangerous this war really is and can become.246

In other words, to Charles Payne, Tom Perkins was wrong but not really. The paranoid billionaire was just a few years ahead of the curve with his prediction.

In keeping with the progressives-as-Nazis theme, Home Depot founder Ken Langone has made it clear how he views the activism of people who express concerns about wealth inequality. “I hope it’s not working,” Langone has said. And then, descending into the pit of victimhood, he notes: “Because if you go back to 1933, with different words, this is what Hitler was saying in Germany. You don’t survive as a society if you encourage and thrive on envy or jealousy.”247 For others, like AIG CEO Robert Benmosche, criticisms of the rich might not quite be equal to the Holocaust—after all, diminishing the horrors of Nazi genocide might be a bit dicey for a nice Jewish boy like Benmosche—but they certainly are comparable to the lynching of black people. After public outrage erupted over the massive bonuses paid to the company’s executives (even as AIG had to be bailed out by the government), Benmosche claimed that the uproar “was intended to stir public anger, to get everybody out there with their pitchforks and their hangman nooses, and all that—sort of like what we did in the Deep South. . . . And I think it was just as bad and just as wrong.”248 Yes, because criticizing million-dollar bonuses for people who helped bring down the economy is exactly like the extra-judicial murder of black people.

The tendency to view the wealthy as virtual superheroes to whom the rest of us owe some debt of gratitude is becoming increasingly prevalent, and not only in the United States, but among the Anglo-elite in the U.K. as well. Boris Johnson, the Mayor of London, recently admonished the commoners in his own city that they should be “offering their humble and hearty thanks” to the super-rich, because, as he put it:

These are the people who put bread on the tables of families who—if the rich didn’t invest in supercars and employ eau de cologne–dabbers—might otherwise find themselves without a breadwinner.249

For clarification, an “eau de cologne–dabber” is someone who literally places perfumed water upon the temples of the rich, and is paid to do this because, naturally, the rich cannot put on their own perfume. The working class should be grateful, apparently, that the rich in London are so lazy; otherwise, how might the masses even manage to feed themselves? That such incredibly lazy souls as these have somehow managed to become millionaires and billionaires despite their pathetic indolence apparently gives Johnson no pause. That people who can’t even “pick up their own socks” (as Johnson himself puts it) can somehow control such an outsized portion of the world’s wealth causes no national reconsideration of the so-called merit of the wealthy, though among more sober-minded persons one might expect that it would.

Elsewhere, great inequalities of income and wealth are applauded as the only imaginable incentive for hard work on the part of the poor. Canadian millionaire Kevin O’Leary responded to a 2014 OXFAM report, which noted that the world’s wealthiest eighty-five people were worth as much as the bottom half of the world’s population (approximately 3.5 billion people), by exclaiming that the report was “fantastic news.” Only such incredible inequality can spur the poor to better their condition, according to O’Leary. When asked whether a poor African living on a dollar a day is truly inspired to work harder by the presence of the eighty-five wealthiest persons on the planet, and whether such a person might actually think they are going to be the next Bill Gates, O’Leary felt no compunction in saying that such inequality and great wealth was exactly “the inspiration that everyone needs.”250 Inspiration or not, it appears that something in O’Leary’s formula for success isn’t quite working. Just a year after the announcement of the “fantastic news” of such enormous global inequality, things seemed to be getting worse rather than better. In 2015, it only took the world’s wealthiest eighty people, rather than eighty-five, to equal the wealth of the poorest half of humanity.251 At this rate, O’Leary’s “inspiring” inequality will result in one person having the same net worth as 3.5 billion people by about 2030. Fantastic!

Central to aristocratic defensiveness about the extent of their wealth is the idea that the rich do more than enough for the rest of us, especially in terms of the nation’s tax burden, which, in Romney’s estimation, nearly half of Americans are skipping out on while the wealthy pick up the tab. It’s not a particularly new position among conservatives. As early as 1975, when the Earned Income Tax Credit (EITC) was first passed so as to remove low-income Americans from the tax rolls by offering tax credits intended to subsidize work, some on the right were already crying foul. Despite the fact that the program only benefited those who were working, and had the effect of reducing reliance on other forms of welfare that were not tied to employment, conservative firebrand Pat Buchanan was enraged. In his very first syndicated column, Buchanan, fresh off his stint working in the Nixon White House, blasted tax relief for the poor as “the redistribution of wealth, downward,” and insisted that the 4.6 million low-income persons who were to be dropped from the tax rolls would be “reassigned to the expanding army of citizens who pay nothing in federal income taxes for the broad and widening array of social benefits they enjoy.” They would, in Buchanan’s estimation, come to represent “a new class in America, a vast constituency of millions with no interest whatsoever in reducing the power of government, and every incentive to support its continued growth.”252

But actually, when it comes to who pays taxes and who doesn’t, here too the position of the wealthy economic minority is without merit. When Cato Institute senior fellow Alan Reynolds says, “Poor people don’t pay taxes in this country,” or when FOX Business host Stuart Varney insists, “Yes, forty-seven percent of households pay not a single dime in taxes,” they are lying.253 And when FOX’s Greg Gutfeld claims he envies those who are too poor to owe income taxes,254 as if to suggest that minimum wage workers are living it up while highly paid media commentators like himself are oppressed, he makes no sense at all. I’m sure FOX would be happy to pay him $7.25 an hour if he’d like to experience life without income taxes (or much income), but somehow I’m guessing he won’t request such a perk. Let’s look at the facts.

On the one hand, yes, nearly half of the American population does not end up paying net federal income taxes, but this does not make the comments by Reynolds and Varney or the positions of Romney and Ryan accurate. To begin, it’s important to understand why people who don’t pay taxes enjoy that so-called luxury. One-fifth of those who don’t pay income taxes are elderly and on fixed incomes, with nearly another fifth being students, the disabled or persons who are unemployed but actively seeking work.255 The remaining three-fifths do work but simply don’t earn enough to owe federal income tax. Why? Well surely it isn’t because they have chosen to receive crappy pay just to get out of paying taxes. It’s not as if the poor and struggling are turning down six-figure job offers just to avoid having to fork over a percentage to the government. They are not earning enough because they can’t find a job that pays enough, and they are not paying taxes on what they earn because the poor and near-poor have been removed from the income tax rolls due to bipartisan agreements in place since the mid-1970s, intended to boost disposable income with programs like the Earned Income Tax Credit.

Additionally, although such persons may not pay income taxes, those individuals almost invariably contribute to the overall tax pie via state and local sales taxes and payroll taxes, the Social Security portion of which applies only to the first $117,000 of income, thereby hitting middle- and working-class folks harder, proportionately, than the rich. In fact, when it comes to taxes other than those levied on income at the federal level, lower- and middle-income Americans actually pay quite a bit more, percentage-wise, than the affluent. State and local taxes, on average, take more than twice as much from the poorest residents (those in the bottom fifth of households) as from the top one percent: about 10.9 percent of income from those at the bottom, compared to only 5.4 percent from the wealthiest. The middle class too pays state and local taxes at a much higher rate than that paid by the nation’s affluent economic minority. This is because state and local governments rely heavily on sales taxes, which take a higher share of income from those at the bottom than from those at the top. If a rich person and a poor person both buy a gallon of milk in Tennessee, for instance (which still levies sales taxes even on necessities like food), or clothes for their kids to start the school year, the taxes levied on these items will be the same as a share of the purchase price, but as a share of both shoppers’ incomes, the tax bite will be more onerous to the lower-income shopper. Over the course of a year, taxes such as these add up to a substantial burden at the bottom of the economic pyramid, while amounting to only a very small burden for those at the top. In some states, the disproportionate burden for the working class and poor is especially crushing relative to that for the rich. In Washington State, the poorest fifth of residents pay about seventeen percent of their annual income in state and local taxes: seven times the percentage paid by the wealthiest one percent, at only 2.4 percent of income. In Florida, the poorest residents pay 6.8 times as much as the richest, percentage-wise (12.9 percent as opposed to 1.9); in Texas, the ratio is more than four to one.256

Overall, there is not much difference between the tax burdens on the wealthy as opposed to the middle and working class, when all taxes (federal, state and local) are considered. Whereas the rich would have us believe they are carrying a disproportionate amount of the tax load, the data says something else altogether. The richest one percent of Americans pay twenty-four percent of all taxes, but they also earn twenty-two percent of all national income. The next richest four percent of Americans pay fifteen percent of all taxes, but they also earn fourteen percent of all income. In all, the top tenth of earners pay nearly half of all taxes, which may seem extreme, but they also bring in forty-six percent of all national income. Meanwhile, the middle fifth of income earners pays only ten percent of all taxes, which may seem as if they were not paying their fair share, but they only receive eleven percent of all income. So too, the poorest fifth of Americans contribute only two percent of all taxes paid in the country—a seemingly inadequate percentage—but this fifth only receives about three percent of all income.257

In terms of relative tax rates, the claims of an unfair burden on the rich also fall short. The top one percent, who had average incomes of $1.5 million in 2013, paid about thirty-three percent of that in overall taxes at all levels. But those with average incomes of only $75,000 (who found themselves in the upper-middle-class fifth of all earners) paid an almost equivalent rate of thirty percent; and the middle fifth of earners, with average incomes of only $45,500, paid about twenty-seven percent of their incomes in taxes. Even those in the lower middle class, with incomes averaging only around $28,000 per year, paid twenty-three percent of their incomes in taxes—less than the rate for the rich but not dramatically so. And surely twenty-three percent for someone making $28,000 is a much larger burden, in real terms, than a rate of thirty-three percent on someone who makes $1.5 million. Although the poorest fifth of Americans (whose annual incomes amount to only about $14,000 on average) have a much lower tax burden than others—since they have been removed from federal income taxes by the EITC and other income exemptions intended to reduce reliance on government programs—even they pay about nineteen percent of their paltry incomes in overall taxes. This is hardly evidence of freeloading, even by the poorest fifth of Americans, let alone by the forty-seven percent about whom Romney seemed so judgmental.258

Of course, it’s not just with regard to taxation that the meme of “makers versus takers” is dishonest. The other implicit assumption of that narrative is that the rich, unlike the poor, don’t rely on government for their success. According to elitist rhetoric, government is for the poor and life’s losers, while the wealthy and successful prosper as a result of their own genius and the magic of the free market. But how anyone could believe that only the poor rely on government, especially in the wake of the government bailout of the banking industry and several American corporations, is beyond comprehension by the rational mind. The overall value of the various government-backed initiatives on behalf of industry since the economic meltdown includes not just the more than $800 billion disbursed to financial institutions by the Treasury Department under the Troubled Asset Relief Program (TARP), but also hundreds of billions in additional loans to banks to improve their ability to start lending again.259 Without these bailouts, the banks in question would have gone under. Whether or not one believes that treating these institutions as “too big to fail” was a necessary evil at the time of the bailouts, there can certainly be no doubt that it was government, not the magic of the marketplace or the genius of the leadership in these places, that allowed them to continue existing at all, let alone to prosper once again.

And yet, even as the U.S. government literally saved these institutions by bailing them out with taxpayer monies, the economic minority that benefited from that financial safety net remains ungrateful. Former AIG CEO Maurice “Hank” Greenberg, for instance, filed a lawsuit against the government for bailing out the company, because to Greenberg’s way of thinking the terms of the bailout were insufficiently favorable to the company and its stockholders. According to Greenberg’s attorney, by requiring AIG to give the government eighty percent equity ownership of the firm before agreeing to the $85 billion loan, the bailout resulted in injury to stockholders because limitations were placed on the amount of ownership (and thus income) they could enjoy privately.260 That there would have been no AIG at all absent the loan matters not to economic aristocrats: they believe they deserve government assistance—corporate welfare—and that they should set the terms of such welfare at the same time. The rich believe that, unlike other beneficiaries of government programs, who are expected to meet certain conditions in order to qualify for assistance, the wealthy should receive public assistance with no strings attached at all. Rules are for the little people.261

Average hard-working Americans have certainly never received the kind of forbearance shown to the banks and their top leaders, and frankly, that’s just how the rich like it. Far from relying on the marketplace, they quite openly insist that they deserve government assistance, even as those at the subsistence end of the economic spectrum do not. So consider the breathtakingly tone-deaf remarks of billionaire Charles Munger, vice-chair of Berkshire Hathaway: In 2010, while speaking at the University of Michigan, Munger told the audience that they should “thank God” for the bailouts of Wall Street, and rather than “bitching” about them, they should wish those bailouts had been “a little bigger.” But when asked if it might also have been helpful to bail out homeowners who were underwater on their mortgages—in many cases because they were roped into terms that were unfavorable to them, though quite favorable to the bankers and rich investors—Munger was incredulous: “There’s danger in just shoveling out money to people who say, ‘My life is a little harder than it used to be’,” Munger explained. “At a certain place you’ve got to say to the people, ‘Suck it in and cope, buddy.’”262 In other words, America’s neediest families should suck it up and cope, while the rich sit back and enjoy corporate welfare to keep their highly profitable businesses humming along.

But it’s not just the bank bailouts that demonstrate how dependent the rich are on taxpayer dollars, suckled from the very government they despise. From 2000 to 2012, not even including the bailouts, some of the world’s wealthiest companies received $21.3 billion in direct government subsidies—about $200 million, on average, for each of the companies in the Fortune 100—in the form of subsidized loans, “technology development” grants and subsidized insurance, all of which use taxpayers’ money to reduce operating costs and increase profits for corporate executives.263 Overall, well over $100 billion in direct government subsidies have been handed out to businesses in recent years, the vast majority of it to huge corporations. Among the biggest recipients of government subsidies ranging from special financing deals to tax holidays to subsidized promotion of goods abroad, are Boeing, Dow Chemical, General Motors, Walmart, General Electric and FedEx. Most telling, even the current darlings of the right wing, the Koch brothers, have received substantial assistance from the government, to the tune of $88 million.264

In addition to direct subsidies, there are also indirect ways in which government benefits the corporate class. Because of what are known as “tax expenditures”—preferential tax treatment that reduces revenues available to the government, thereby operating like a spending program, but through the tax code rather than the normal budget process—corporations are able to artificially reduce what they have to contribute in taxes. In 2011, the government allowed corporations to defer paying $24 billion in taxes they otherwise would have owed by not taxing income earned abroad until those earnings are repatriated to the United States. So although the money has been earned and is available to benefit the company, as long as they reinvest those earnings in another country rather than in their homeland, taxes on those earnings go uncollected. Another $27 billion was lost due to “accelerated depreciation” rules, which allow companies to write off the cost of plant and equipment far more quickly than those assets actually lose their value.265 And as discussed earlier, the preferential treatment of capital gains income—a government program that favors the income earned by the wealthy over the income earned by average Americans—provides a huge windfall to the rich, saving those who make over $1 million per year about $131,000 in taxes, as opposed to the EITC for poor and working-class families, which provides about $2,200 in relief to them.266 So again, who is more dependent on government welfare? Who are the makers, and who are the takers?

Or consider the common practice of state and local tax abatements and special “economic development awards” granted to corporations so as to lure them to particular locations, ostensibly for the purpose of creating jobs. Surely such policies suggest that corporate success owes less to hard work or talent than to public policy. Although supporters of the practice insist these special financing deals are a critical economic development tool, there is much reason to doubt that faith.267 Whether hosting professional sports franchises or manufacturing plants, communities often end up giving away more in lost property tax revenue than they gain in payroll, sales taxes generated or other economic benefits.268 And even if the incentives work as advertised—much as with the bailout of the large banks—the larger philosophical point remains: can we really claim these businesses are making it on their own, or are successful due to the talent of their executives, when they have to procure sweetheart deals from the taxpayers in order to produce such results? Wouldn’t capitalist theory suggest that for an investment to be efficient and worthwhile, it should pay for itself in the market, without government giveaways? Indeed, don’t such giveaways by definition suggest the inefficiency and thus market-illegitimacy of such investments? That some of the biggest recipients of these handouts are indeed among the nation’s most profitable companies makes the practice all the more suspect. Perhaps if tax abatements were being given to small mom-and-pop businesses we could see their utility—after all, firms like that might have a hard time competing against larger companies (like big box retailers, for instance)—but those are not the companies reaping the rewards, by and large. Although large corporations have received only ten percent of all announced subsidy awards at the state and local level, these firms have pocketed at least seventy-five percent of the actual dollars given away by these efforts—an amount equal to about $110 billion in all. Among the largest recipients of such corporate welfare are Boeing (with over $13 billion in subsidies), Intel (with nearly $4 billion), GM and Ford (with $6 billion between them), Nike (with over $2 billion) and Dow Chemical (with $1.4 billion). Other brands often credited with success due to the genius and innovation of their corporate leadership have also been given significant handouts: Google has received over $600 million in state and local subsidies, FedEx and Apple have procured about $500 million each, and Amazon.com and Samsung have both benefited from over $300 million in subsidies.269

Beyond corporate welfare itself, there are entire industries that rely on particular public policies in order to make profit. For instance, consider the way that private businesses profit from the rise of mass incarceration in America. As the number of persons in jail or prison has exploded, especially for nonviolent offenses and disproportionately for people of color,270 companies such as Corrections Corporation of America (CCA) have developed entire business models that rely on the continuation of a public policy to lock people up. Their profits are not the result of innovation or business acumen. In a nation that didn’t incarcerate so many people, no matter how bright their executives might be, and no matter how hard-working their employees, they simply could not be profitable. Their financial success—indeed their very existence—is due to government policy, which is why the industry hires lobbyists to push for longer prison sentences, even for relatively minor offenses.271 Private prison operators require states to fulfill “occupancy guarantees” or else pay penalties to the company, if for some reason they can’t find enough criminals to lock up.272 Think about it: state officials are agreeing to lock up enough people to keep a private prison full, even before they know how much crime will be committed or how many dangerous offenders there will be in the coming year who might need to be detained. And if they fail to meet their incarceration targets, they agree to pay a penalty for underutilization. Which means that states will either have to find people to incarcerate (no matter how minor their offenses and no matter whether there might be more productive ways to deal with many offenders), or else pay the companies a penalty for having effectively reduced their local crime rates. What is that, if not a textbook example of private businesses subsisting on the public dole, where the government subsidy provided is not just money but the actual lives of people locked up to boost private profits?

But people who form companies to profit from the operation of private prisons are not the only ones making money from locking up Americans. So too are those companies that provide goods and services to prisons and prisoners, and those that make use of prison labor. As for the first of these categories, food providers like Aramark, despite being cited for multiple sanitation violations, rake in hundreds of millions of dollars annually from prison contracts. Global Tel*Link, which benefits from a virtual monopoly on phone service in prisons—and charges inflated collect call rates to those whom inmates call—makes half a billion in annual profits from prison calls. And even though health care is notoriously inadequate in prisons, Corizon, the nation’s leading prison health provider, makes over $1.4 billion per year.273

As for companies that use prison labor, currently as many as a million prisoners in the United States are working as call center operators or taking hotel reservations or manufacturing textiles, shoes and clothing, while getting paid less than a dollar per day in some cases. This prison labor boosts profits for American businesses by giving them a cheap supply of virtual slave workers, while undermining employment opportunities for people on the outside. It’s a practice for which we condemn China and other countries, but which we engage in without compunction.274 Other inmates perform essentially free labor for the state—perhaps building furniture or cleaning up roadsides.275 Not only are the jobs performed by inmates for pennies per hour taking jobs away, in many cases, from those in the so-called free world, but even worse, because inmate pay is far below what non-inmates would receive for the same work, the money that can be sent home to the inmate’s family—thereby helping to support children left behind—is essentially nonexistent. This further undermines the economic base of the inmate’s home community. Whether working for private companies or state agencies, the effect is the same: depressing the wages of working-class people, providing uncompensated (or barely compensated) benefits for economic aristocrats, and helping to perpetuate inequality.

Or consider the pharmaceutical industry. When drug companies develop new drugs, they often do so only after taking advantage of government-sponsored university research. The companies then market their branded products, for which they can charge exorbitant prices, in large part because of the government-granted patent monopolies that prevent generic drugs from competing with them for a number of years. As just one example, consider the recent case of the hepatitis-C drug Sovaldi, manufactured and sold by Gilead Sciences. Gilead’s price schedule for a standard twelve-week treatment course of Sovaldi is $84,000, or $1000 per pill, even though the actual cost of production for the entire twelve-week treatment is between $68 and $136, or somewhere between eighty cents and $1.60 per pill. In other words, the price markup is on the order of one thousand to one. Since few individuals can afford the expense of such drugs, one might wonder how the company can get away with charging so much. But the answer to such a question is easy: private insurance and public insurance operated by the government will pick up the tab, thereby inflating the costs of health care for everyone. In just one year, Sovaldi and a companion drug raked in $12.4 billion for Gilead.

And while pharmaceutical companies are quick to defend these kinds of profits by claiming that the cost of developing drugs is massive, thereby requiring such prices to recoup corporate research and development costs, in the case of Sovaldi, as with so many other drugs, the argument falls flat. As it turns out, the professor who developed the drug was working under government grants from the National Institutes of Health, and a disproportionate share of the research costs was borne by public dollars. Gilead’s own contribution to the drug’s R&D was likely no more than $300 million. Though hardly chump change, this amount was earned back by the company after just a few weeks of Sovaldi sales.276

Or what about the nation’s energy companies? Among the various forms of corporate welfare, which far and away dwarf most programs serving the needs of low-income and poor Americans, consider subsidies for the oil, gas and coal industries. Each year, a combination of special tax breaks, loan guarantees and direct subsidies for energy research and development costs taxpayers between $49 billion and $100 billion.277 Even if one accepts the economic validity of such subsidies, the mere fact of their existence suggests that the companies and industries benefiting from such government largesse owe their success in large measure to the state and not merely to the genius of their executives, let alone the “magic of the marketplace.”

Of course, to the ruling class, all of this makes perfect sense. To give taxpayers’ money to the rich or to steer such money in that direction is different from giving money to average people. When you listen to a Charles Munger or Lloyd Blankfein, chief of Goldman Sachs—who has said that investment bankers are “doing God’s work”278 and who has defended the roughly $13 billion his firm got from the bailout of large insurer AIG279—it is hard to resist the conclusion that at some level, the wealthy economic minority simply believe that the rich and the poor are two distinct species. On the one hand, they insist that putting more money in the pockets of the wealthy via the bailouts or tax cuts can incentivize productive economic activity, and that when the rich have this extra money they can be guaranteed to do great things with it. They’ll create jobs, start companies, and invest it wisely to the benefit of all. In other words, the rich respond positively to more money. On the other hand, the same voices assure us that putting more money in the pockets of the poor and struggling—via minimum wage hikes, overtime pay protections, the expansion of safety net programs or unemployment benefits—will do the opposite: it will strip the poor of the incentive to work, and if they have this extra money they will do horrible things with it; they’ll buy narcotics, sit around all day doing nothing, or make babies they can’t afford. In other words, the impoverished respond dysfunctionally to more money. The only thing that will properly incentivize them is the threat of destitution. Only the fear of homelessness, starvation and death in the gutter can possibly make struggling Americans do any work whatsoever. No overstatement, this is precisely the thinking of conservative economist and investor George Gilder—one of Ronald Reagan’s favorite writers—who argued in his 1981 book, Wealth and Poverty, that “in order to succeed, the poor need most of all the spur of their poverty.”280 Only someone who believed that poor Americans were barely human, such that they don’t respond to the same incentives the rest of us would, could make this kind of argument. And only someone who believed the rich were inherently superior could justify the benefits showered upon them by the state.

No, You Didn’t Build That: Confronting the Myth of Elite Talent

Naturally, the economic aristocrats and the conservatives whom they bankroll firmly believe in their innate superiority. They sincerely preach the gospel of meritocracy and the idea that those who make it to the top of the power structure have done so by dint of their own hard work and talent. Research has found that dominant social groups—in the United States this means men, whites and those with higher incomes—are especially likely to think that they are smarter and more capable than others and have earned whatever they have by virtue of their own abilities.281

But what evidence actually supports this position? Looking at it historically, the idea that the wealthy have earned their great fortunes has never made much sense. White people who enslaved blacks formed the nation’s original aristocracy, and relied not only on the stolen labor of Africans for their wealth but also on the willingness of government to defend their investment in human property by enshrining enslavement in the laws of the land and agreeing to the return of freedom-seeking blacks to their owners. Had it not been for the state’s support in the maintenance of human trafficking and enslavement, the work and genius of the wealthy planter class would have meant nothing. They surely weren’t prepared to pick the cotton or build the levees or construct the houses in which they lived, nor were they willing to pay market rates for that work. Their fortunes came from the barbaric exploitation of black families—men, women and children—and from no other source.

Interestingly, the wealthy planter class all but admitted their own dependence on enslaved black families and bragged about their own relative idleness, never noting the way such admissions contradicted whatever pretense they may have had to actually deserving their station. The thought of abolition frightened them because, if they could not force African peoples to work for them for free, their every luxury would be lost. Why? Because naturally it would be absurd to expect the rich to do the hard work needed to maintain the lifestyle to which they had become accustomed. That, after all, would make them little better than the slaves to whom they felt so naturally superior. As Herbert Gutman and the American Social History Project note in their epic volume, Who Built America?

Chattel slavery discredited hard work, associating those who performed it with the slave’s lowly status. Planters generally prided themselves on being men of leisure and culture, freed from labor and financial concerns.282

One especially honest (but not too self-aware) member of the South Carolina planter class summed up the thinking when he explained, “Slavery with us is no abstraction but a great and vital fact. Without it our every comfort would be taken from us. Our wives and children made unhappy . . . our people ruined forever.”283 One white Mississippi planter, lamenting the abolition of slavery after the South was vanquished in battle, put it this way: “I never did a day’s work in my life, and don’t know how to begin.”284 In short, the rich white Southerner, totally dependent on enslaving blacks for his fortune, was the ultimate lazy slacker, yet his laziness hardly prevented him from attaining monumental riches.

So too, the wealth of the early industrialists had less to do with their own hard work or intellect than with illegal activity and the intervention of the state. The Erie Canal, constructed with public money from 1817 to 1825, linked the Great Lakes and Ohio Valley to the Hudson River and New York City, vastly lowering shipping costs of goods to the nation’s interior and boosting profits for private businesses, none of which spent their own cash to finance the project.285 The further growth of the nation’s economy in the late nineteenth century and the profits of the business class at that time were only made possible by the transcontinental railroad. But in order to make the railroad feasible at a profitable rate, officials with the Central Pacific and Union Pacific railroad companies bribed elected officials to give them free land on which to lay the track, engaged in illegal kick-back schemes, overcharged the government for the costs of construction and arranged for multiple public subsidies, allowing them to reap enormous profit at public expense. And, as with whites who trafficked and enslaved blacks, these economic aristocrats depended upon the use of exploited workers, mostly Irish and Chinese, to keep their profits high.286 Between 1862 and 1872, Congress gave railroad companies more than one hundred million acres of previously public land, in addition to granting them tens of millions of dollars worth of tax concessions and loans.287

Additionally, beginning in the early 1860s, the government began handing out large parcels of land to white families under the Homestead Act: hundreds of millions of acres upon which to farm and carve out a living. Although the work done on those homesteads was no doubt real, the ability of those farmers to access that land in the first place was due to government initiative.288 For those denied access to the land, like African Americans, or those pushed off the land (like indigenous people or Mexicans who lost land claims after the war with Mexico), that government intervention also enshrined significant white racial privilege and advantage. And in many instances, even the small-scale white farmers were taken advantage of by big mining and lumber interests that would pay the individuals to stake claims for them under the Act, and then assume ownership after paying them a small pittance.289

By the early 1900s, the government was hard at work granting monopoly charters to corporations in a number of industries, from banking to transportation to insurance and others, thereby extending to the owners of these companies exclusive rights to engage in various types of enterprise. Their resulting fortunes, which were vast indeed, owed to government favoritism and graft, not to their own genius or having won a competition against less worthy competitors. Throughout this period, government forces were used to crush labor movement activity, including strikes by workers made to toil in often horrific conditions, suggesting that again, without the heavy arm of the state, their profits would have been much less certain.290 And far from being self-made men, fully ninety-five percent of executives and financial tycoons at the beginning of the twentieth century were from upper-middle class or wealthy backgrounds. Throughout the nineteenth century only two percent of industrialists were born to working-class parents.291

But things are different today, some would insist. Surely the wealthy today earn their own keep, regardless of how the rich in earlier times might have procured theirs. Although it’s possible—putting aside the fact that the wealthy of the 1700s, 1800s and early 1900s all would have said they had earned their fortunes too—in truth, the wealthy financial minority are no more justified in their positions now than in the past. First, recall the examples of direct government subsidy and preferential tax treatment mentioned earlier in this chapter, to say nothing of the government bailout of the financial industry, all of which demonstrate that the wealthy owe their position to the loving hand of a charitable state. But that isn’t all. In 2008, for instance, less than one-fifth of the income earned by those making more than $10 million came from actual labor, while the rest came from interest, dividends and rents on properties these folks already owned.292 In other words, even if they hadn’t gotten out of bed for a single day of work, these individuals would have still made at least $8 million on average that year. What does that have to do with merit in any appreciable sense, let alone hard work?

Drilling down a bit more specifically, consider Wall Street traders. Far from earning their fortunes through superior skill or harder work, such folks are able to make hundreds of millions of dollars more than they otherwise would simply by utilizing lightning-quick computers and software programs to which only they have access. These systems allow them to see trades that are in the process of being made—perhaps by individuals doing their own investing, or simply by investors who don’t have such computers. Before the trade can go through, the high-speed traders can buy the same stock that’s about to be purchased by the regular investor, and then sell it to that initial investor for a few pennies more than they would otherwise have paid for it, all before the original trade is final. Although each individual markup has little discernible impact on the small investor, who likely won’t notice it, the practice, repeated millions of times a day, rakes in mega-profits for those engaged in it.293 They are not producing anything of value. They are not making the companies whose stock is purchased worth more, allowing them to create jobs. They are simply skimming money off the top with a practice that is essentially the high-tech equivalent of mind reading or card counting in Vegas, only far more foolproof than either of those. It has nothing to do with merit or skill.

Likewise, to believe that America’s corporate executives have “earned” their exorbitant pay and that income reflects effort or ability seems downright delusional. From 1978 to 2013, CEO compensation (base pay plus exercised stock options) increased by 937 percent. Although it should be obvious that such an aristocratic bunch did not in fact manage to increase their work effort by this much, or become nearly a thousand percent smarter or more productive in that time, let there be no mistake: this boost in pay at the top was more than double the rise in the stock market over that same period. In other words, CEO pay grew twice as fast as the company value overseen by those CEOs. In the process, it far and away outstripped wage growth for the typical worker, whose pay barely budged, if at all, even as their productivity rose dramatically.294

In 1965, the ratio of CEO-to-worker compensation was only about twenty to one. By 1978 that ratio had grown, but still only stood at thirty to one. By 1995, however, the average CEO was bringing home 123 times what the average worker earned, and today, that ratio stands at 296 to 1.295 The typical American CEO’s annual bonus alone is sixty-two times greater than the average worker’s annual pay.296 To think that these numbers reflect merit not only requires one to assume that a typical CEO is worth three hundred times more than a typical worker, or works three hundred times harder, or is three hundred times more productive; more to the point, given the change over time, one would have to believe that CEOs were evolving at a scientifically unheard-of pace. After all, the top executive in 1965 was only twenty times more productive, according to this logic, and didn’t really gain much in terms of ability or smarts over the next thirteen years. But then, suddenly, it’s as if some biological breakthrough occurred, and although average workers stopped evolving, the species known as homo executivis enjoyed some amazing genetic leap to previously unimagined levels of talent and ability. And apparently, a few particular CEOs have evolved even more quickly and dramatically than their merely average peers, with former Walmart CEO Michael Duke receiving nearly a thousand times more than the average company employee, and Apple CEO Tim Cook taking home 6,258 times the wage of the typical Apple employee in 2011.297 To believe that these kinds of financial chasms can be chalked up to merit and relative ability seems to stretch the bounds of credulity: after all, it would mean that Apple and Walmart either have especially superhuman executives or especially dull and unmeritorious hourly workers, or perhaps both, when compared to other corporations.

Surely it can’t be merit that explains executive pay, considering that the highest-paid CEO in the United States—Charif Souki of Cheniere Energy—runs a company that has never even turned a profit.298 Or consider the $8.5 million raise given in 2013 to Jamie Dimon, the chair and CEO of JPMorgan Chase, which brought his total pay to $20 million that year, even after the company’s profits fell sixteen percent, and after the company was forced to pay out roughly $20 billion to settle various legal claims.299 Pay packages like this, despite mediocre or even negative performance, no doubt help explain why former AT&T Broadband CEO Leo Hindery insists that executive pay is “a fraud” that owes entirely to corporate “cronyism.”300 Meanwhile, the CEO of the Container Store, Kip Tindell, who has capped his own pay at no more than thirty-five times that of his average store employee—and who also pays those employees double the retail industry norm (around $50,000 per year on average)—is enjoying steady profits, suggesting that CEO pay is unrelated to excellence and that the tendency toward inflated executive compensation is more about greed than merit.301

Beyond the purely anecdotal, evidence seems to suggest that exorbitant CEO pay, and particularly “incentive pay” for performance, is negatively correlated with a company’s stock returns in most cases. Not only is such compensation not a legitimate reward for a job well done; if anything, it may lead companies down the wrong road. According to a 2013 study by business school professors from Cambridge, Purdue and the University of Utah, firms whose CEOs rake in the very top levels of pay see their stock market returns fall by approximately eight percent over the three-year period immediately following the excess payouts. The researchers suggest that excess pay leads to CEO overconfidence, which causes stock losses due to irresponsible over-investment and value-destroying mergers and acquisitions for which the company was not well suited.302

If we consider it logically, we know that pay scales do not reflect hard work per se, let alone one’s larger social value. Few among us, for instance, would actually accept the notion that a hedge fund manager like Steven Cohen really earned his $2.3 billion income in 2013, especially considering that the very same year he received this amount, his firm pled guilty to insider trading, for which it was hit with a fine of $1.8 billion.303 Doubtless, few of us have jobs that would allow us to commit a major financial crime and still remain on the free side of a jail cell, let alone walk away with a payday larger than the penalty we were asked to fork over. Likewise, it’s hard to believe that Chris Levett, head of Clive Capital (a commodity hedge fund), truly earned his pay. After all, from 2011 to 2013, even as the firm lost money for its investors in both years, Levett was paid nearly $100 million.304 In general, research finds that the average annual rate of return for hedge funds is actually no better, and is sometimes worse, than it is for low-risk or even no-risk investment instruments, and no better than the annual rate of return for the S&P 500.305 In other words, most hedge fund managers aren’t even outperforming the market or government bond rates, yet they rake in huge excess profits.

The absurdity of such hefty incomes for hedge fund managers is particularly obvious when the figures are contrasted with the incomes of many others in society, whom most would likely consider far more vital to the overall national well-being. As Robert Reich has noted:

What’s the worth to society of social workers who put in long and difficult hours dealing with patients suffering from mental illness or substance abuse? Probably higher than their average pay of $18.14 an hour, which translates into less than $38,000 a year. How much does society gain from personal-care aides who assist the elderly, convalescents, and persons with disabilities? Likely more than their average pay of $9.67 an hour, or just over $20,000 a year. What’s the social worth of hospital orderlies who feed, bathe, dress, and move patients, and empty their bedpans? Surely higher than their median wage of $11.63 an hour, or $24,190 a year. Or of child care workers, who get $10.33 an hour, $21,490 a year? And preschool teachers, who earn $13.26 an hour, $27,570 a year?306

The list could go on for several pages: nurses, kindergarten teachers, firefighters, school counselors, food safety inspectors, farmers, hospice care workers and so on. All are professions that most would consider pretty indispensable to the common good, and yet all pay far less than managing a hedge fund, moving money around for rich people, and apparently engaging in fraudulent behavior while doing it.

If you ask most people what jobs they consider the most important in the society, the list you’ll get in response will always be pretty similar, and rarely if ever will it include jobs like “hedge fund manager” or “derivatives trader” or “real estate developer” or even “corporate executive.” These are the jobs that pay the big money, but not because they have more objective value in the minds of Americans. Unless you asked this question of actual bond traders, it is unlikely that a single person would answer “bond trader” when pressed about the society’s most important positions. In fact, I’ve conducted this little experiment before, and the list of the ten most important jobs is always top-heavy with professions that don’t pay very much. With the exception of doctors, the jobs listed are some of the nation’s lowest paying: teachers, nurses, firefighters, police officers, soldiers, child care providers, elder care and nursing home providers, farmers, clergy and mothers. Occasionally, the list will also include engineers—another high-paying profession—but rarely any other career that is particularly lucrative.

Now, compare the average person’s list to the highest-paying careers, according to the Labor Department. In addition to investment bankers, the highest paid are physicians of various types, CEOs, petroleum engineers, lawyers, architectural and engineering managers (especially for the oil and gas industry), natural sciences managers, marketing managers, computer and IT managers and industrial psychologists.307 No offense meant to anyone in one of those positions, but your pay hardly reflects the value placed on your jobs by the public. And let’s be honest, no kid ever went to bed at night, clutched a teddy bear and said, “When I grow up, I want to sell highly leveraged mortgage-backed securities,” even though doing so would no doubt make those kids a lot of money. Children don’t think about things like money. They typically have more ethical concerns and far more admirable value systems.

According to the Bureau of Labor Statistics, almost none of the most socially useful jobs identified by the general public are among the best paid. Other than doctors—a broad category, in which various specialties almost always pay more than the general practice thought of by most when the term “physician” is used—the pay rates of the most socially useful jobs rank very low. Police and detective supervisors (not what most people are thinking of when they say “police” in answer to the question) come in at number 180 on the list of best-paying professions, which is the highest ranking of any job other than physician that most folks mention. Criminal investigators rank 211th; police detectives, 215th; farmers, ranchers and agricultural managers, 268th. No other socially useful jobs rank among the top three hundred.308

Ultimately, pay levels are not about merit or social value; they’re about power dynamics. They’re about how much value is placed on various types of work, by people with lots of money to spend. So, for instance, if patients in nursing homes each managed to crap a flawless ten-carat diamond once they reached the age of ninety, rest assured, elder care workers would be paid like investment bankers, solely for their ability to keep old people alive until it was time for the diamond harvest. But as it is, they are paid horribly, since rich people see more value in office buildings and yachts and derivatives than they do in the people who care for their own grandparents.

Issues like the strength or weakness of labor unions, and how much influence the rich exercise over executive compensation packages set by company boards, further determine pay levels. We know power relations are more influential on pay than actual merit if for no other reason than this: the gaps between pay at the top and the bottom have grown drastically in recent decades, and far more quickly than could be explained by growing genius among the rich or falling IQ and output at the bottom of the scale. As Reich explains:

Fifty years ago, when General Motors was the largest employer in America, the typical GM worker got paid $35 an hour in today’s dollars. Today, America’s largest employer is Walmart, and the typical Walmart worker earns $8.80 an hour. Does this mean the typical GM employee a half-century ago was worth four times what today’s typical Walmart employee is worth? Not at all. That GM worker wasn’t much better educated or productive. . . . The real difference is the GM worker a half-century ago had a strong union behind him that summoned the collective bargaining power of all autoworkers to get a substantial share of company revenues for its members. And the bargains those unions struck with employers raised the wages and benefits of non-unionized workers as well. Non-union firms knew they’d be unionized if they didn’t come close to matching the union contracts. Today’s Walmart workers don’t have a union to negotiate a better deal. And because fewer than 7 percent of today’s private-sector workers are unionized, non-union employers across America don’t have to match union contracts. . . . The result has been a race to the bottom. By the same token, today’s CEOs don’t rake in 300 times the pay of average workers because they’re “worth” it. They get these humongous pay packages because they appoint the compensation committees . . . that decide executive pay. Or their boards don’t want to be seen by investors as having hired a “second-string” CEO who’s paid less than the CEOs of their major competitors. Either way, the result has been a race to the top.309

Even though Walmart recently announced plans to offer a substantial wage boost to about half a million of its employees, this fact hardly changes the truth of Reich’s statement. In fact, if anything, it only further demonstrates the wisdom of it. The announcement of pending raises at Walmart, though a positive sign for those employees (and many others whose wages may also be forced upward by such a jump at the nation’s largest employer), actually exposes the fundamental flaw at the heart of right-wing economic theory. First, and just to clarify what the raise does and doesn’t mean: boosting its lowest-level employees to $9 per hour ($10 per hour by 2016) and bringing department managers to levels as high as $15 per hour hardly suggests that the company values its employees at a level commensurate with their worth. As of now, Walmart extracts nearly $7,300 in net profit from each of its employees, on average, up from less than $6,000 per employee (which was still significant) just before the recession. In other words, as the economy tanked, the amount of surplus value the nation’s largest company was able to skim from the work of its associates increased.310 Although news of a pay hike is welcome, it will likely only bring the company’s profit per worker back to pre-recession levels, and it merely reflects the tightening labor market, which has Walmart worried that if it doesn’t offer more pay, its employees might jump ship.

And it is this last point, more than anything, that proves the disconnect between the income workers receive and their actual work effort or productivity. That Walmart offered these raises shows that previously it had been paying so little not because that was all its workers were worth, but because it could get away with paying less in a weak economy where working people had fewer options. The lesson this reality affords us—both the wages previously offered and the proposed pay hikes—is a significant one, and utterly debunks the dominant narrative about pay levels and people “getting what they’re worth” in the market. After all, Walmart employees didn’t become more productive in the last few months so as to justify the raises they appear poised to receive. Rather, economic conditions beyond the control of those workers changed, thereby necessitating a pay hike in the eyes of their employer. This is how the so-called free market works: it isn’t about workers getting what they’re worth; rather, it is about employers paying as little as they can get away with. The market as such does not exist; only power dynamics exist—who owns, who doesn’t; who is in charge and who isn’t.

Likewise, pay at the top hardly reflects merit or productivity either; it too is rooted in dynamics of power and influence. Reich has also explained the particular disconnect between Wall Street bonuses paid to investment bankers and any notion of actual merit or talent on their part. In 2013, for instance, Wall Street bonuses skyrocketed to $26.7 billion overall, and averaged a fifteen percent boost from the previous year, bringing bonus levels to their highest point since the 2008 economic collapse. But as Reich explains, these bonuses had nothing to do with a fifteen percent gain in productivity, or indeed any measurable notion of merit. Rather, these bonuses (and indeed the entire profitability of these banks) were made possible by government policy, and by the indirect subsidy these entities have received ever since the government bailout rendered the investment banks, and especially the largest of them, “too big to fail.” By bailing out the investment banks and sending a clear signal that these institutions—as opposed to smaller depository institutions, like your local bank branch—would not be allowed to go under, the government indirectly subsidized the larger banks by making it more attractive for persons to park their money with Goldman Sachs, for instance, than with a smaller bank. Even though the investment banks pay out a smaller interest rate on the money deposited with their institutions, the security purchased by “too big to fail” steers money that would otherwise have gone elsewhere to the large firms; as such, it results in more money for investment banks, and higher bonuses for investment bankers on Wall Street. If the government had not made Wall Street investments so artificially attractive with the bailout, roughly $83 billion less would have been deposited on Wall Street last year, according to recent estimates.311

Needless to say, without that $83 billion, which only exists because of government policy and has nothing to do with the genius of bankers, there is no way investment banks could have paid out nearly $30 billion in bonuses last year. In fact, the amount of the estimated subsidy received by just the “big five” investment banks was roughly equal to those companies’ profits last year. In other words, without government helping to steer money to those institutions, they would have barely broken even, let alone been able to pay out such massive bonuses. It is more accurate, then, to think of investment banks and bankers as charity cases and welfare recipients, rather than as hard-working and highly skilled business folks.

Even beyond the bailout and its salutary effects for current banking profits, the everyday operations of Wall Street are made easier by government actions (or perhaps we should say, inactions). Investment banks have reaped significant profits above and beyond what otherwise could have been possible, thanks to the deregulation of the financial industry in the 1990s—a government decision that made possible several investment instruments and practices that previously would have been disallowed. In short, had it not been for the power of the banking lobby to incentivize lawmakers to loosen the rules for Wall Street, no matter the genius and hard work of the investment class, hundreds of billions, even trillions of dollars simply wouldn’t have been made. That’s not about a magical marketplace, but about naked political clout. Likewise, the lack of a sales tax on financial transactions, despite sales taxes on virtually all other consumer purchases, amounts to a form of preferential government treatment of one type of market activity, and in this case, one most likely to further profit the already wealthy. When Americans have to pay sales tax on baby formula and fresh produce worth a few dollars but not on stock purchases worth trillions, it seems obvious that certain types of market activity are being favored over others, to the benefit of the wealthy.

Even those upper-income individuals with no connection to the banking industry reap substantial benefits from government. Every year, just one preferential tax policy—the home mortgage interest deduction—costs the government over $100 billion in revenue that would otherwise have been collected. This kind of policy, known as a tax expenditure, is every bit as much a government program as direct housing subsidies for Americans in need. It is no different from writing checks to homeowners to help pay their mortgages: forgoing taxes that would otherwise be owed costs the treasury just as much as collecting those taxes and then turning around and handing the money back. And this benefit, which costs more than twice the amount spent annually on low-income housing programs, disproportionately benefits wealthier Americans, because the value of the deduction rises with one’s tax bracket. Also, of course, since there is no similar deduction or tax credit for renters, such a policy by definition subsidizes more affluent homeowners but not less-well-off families that rent.

Although the deduction has long been defended as a way to encourage homeownership, it is questionable whether it truly serves this purpose. Since wealthy homeowners are likely to buy a house with or without the deductibility of mortgage interest, the only possible effect for them would be to encourage them to buy a bigger house than they might otherwise purchase: hardly as noble a goal as encouraging homeownership in general. And for persons whose incomes put them on the cusp of buying as opposed to renting, for whom an interest deduction might make the difference, it is doubtful that the deduction as currently constituted does much good. Why? Because the deduction is only available to people who itemize deductions on their returns (which most moderate-income families do not), and because the value of the deduction is tied to a person’s tax bracket. So the average benefit for homeowners with income between $40,000 and $75,000 a year, for instance, only comes to $523, or about $44 each month. Is that benefit sizable enough to encourage them to purchase a home rather than rent? Not likely. In other words, the mortgage-interest deduction is mostly a tax giveaway to upper-middle-income and affluent homeowners—and for all those who reap the benefits (my family included, thank you very much), it amounts to a government program that puts more money in our pockets.312
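A rough worked example makes the bracket effect plain (the interest amounts and tax rates here are hypothetical, chosen only to be consistent with the average figures cited above). A moderate-income household in the 15 percent bracket that itemizes and deducts $3,500 in mortgage interest saves

\[ 0.15 \times \$3{,}500 \approx \$525 \text{ per year,} \]

while an affluent household in the 35 percent bracket deducting $20,000 in interest on a far larger mortgage saves

\[ 0.35 \times \$20{,}000 = \$7{,}000 \text{ per year,} \]

more than thirteen times the benefit, for a purchase that almost certainly would have been made anyway.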

The facts are all too clear: rather than talent determining income or wealth, it is a combination of luck, connections, government assistance and public policies like financial deregulation that ultimately makes the difference. And let’s not forget making money the old-fashioned way: inheriting it. No matter how much we may like to believe that dynastic wealth is a feature of life only in other nations, inherited wealth continues to skew the class structure in the United States as well. Before the economic meltdown, estimates suggested that from 1998 until 2017, about $7 trillion in assets were in the process of being handed down via inheritance. By 2061, that number is expected to reach $36 trillion in intergenerational wealth transfers, nearly ninety percent of which will flow from the wealthiest fifth of Americans to their heirs.313 Although the economic collapse may have put a temporary damper on the assets of some among the affluent, as we’ve seen previously, most of their wealth has been recouped and then some; thus there is little reason to suspect that these numbers will have declined, and much reason to expect them to climb in coming decades. Not to mention, these numbers refer only to assets transferred at death; what are called inter vivos transfers—gifts, essentially—from parents to children while those parents are still alive (such as help with a down payment on a house, or college tuition) actually account for a larger share of intergenerational wealth transfers than direct inheritance.314 That numbers like these drive a stake through the heart of the idea that the well-off simply “earn” their position should be obvious.

Most important, perhaps, is the simple reality that the rich almost always depend on squeezing the working class for whatever fortunes they manage to build. The idea that the poor and working class need the wealthy, rather than the other way around—common though that perception appears to be—couldn’t be more backwards. Without workers, whom they pay less than the value of the work performed, no capitalist could ever become successful. It is only by paying workers less than the value of what they do for you that you are able to make a profit. It seems axiomatic that if you do a job for me that I could not and would not do for myself, and which enriches me to the tune of $100, but I only pay you $70 for your effort, I have taken advantage of you. To that argument the defender of capitalism would reply that without the capitalist to offer the job, the worker would have made nothing. But this equation continues to miss the obvious: without prior workers, there would have been no capitalist. The wealth held by the capitalist came from somewhere, and in almost no instance did it come from their own direct labor; rather, it came from someone else’s labor—either people the capitalist hired, or those his predecessor hired; or it came from state-sanctioned violence and the forcible expropriation of land. The railroad tycoons did not lay their own track and dig their own tunnels, not even for one day, let alone long enough to save up the money with which they were able to hire all those other folks to continue the effort. They inherited their companies or knew the right people and had the power of the state at their disposal to make their profits possible. To give thanks to the capitalist for the job offer he is able to make, without acknowledging the complete reliance upon labor that made the capitalist possible, and without which he or she could not exist, is to invert the cause-and-effect relationship between work and wealth. This was something Abraham Lincoln understood quite clearly, however much his words might appear radical by comparison to today’s political boilerplate:

Labor is prior to and independent of capital. Capital is only the fruit of labor, and could never have existed if labor had not first existed. Labor is the superior of capital and deserves much the higher consideration.315

In short, the rich didn’t build their fortunes: the labor of others who were underpaid for their trouble did. Capitalists, it turns out, may be the most dependent people on the planet.

A Culture of Predatory Affluence: Examining the Inverted Values of the Rich

Not only are talent and hard work inadequate to explain the inflated incomes of the super-rich; so too, their value systems and personal integrity fail to justify their positions. Indeed, while the wealthy and their conservative media megaphones spend time and energy bashing the so-called “culture of poverty” and suggesting that it is the poor and unemployed whose values are dysfunctional, pathological and destructive, the reality is almost entirely the opposite of that charge. If anything, it is the culture and values of the affluent that are the most dysfunctional and destructive to the social good.

Consider, for instance, the value system of executives at one of America’s largest corporations—General Motors. Recently it was revealed that GM had made the conscious decision not to replace faulty ignition switches on certain of their cars, even though they knew that the switches could turn off unintentionally, thereby disabling power steering, airbags and power brakes and leading to dangerous and potentially deadly accidents. According to internal GM documents, the flaw was known to exist and the decision as to whether or not the company would recall the vehicles and make the necessary fix was debated internally. Ultimately, it was decided there would be no recall and no fix for the existing vehicles, because the costs were prohibitive. And what were those costs? Less than $1 per car, and about $400,000 in other expenses. In the end, much as Ford had done in the 1970s with its release of the known-to-be-dangerous Pinto, GM decided it would cost less to pay off the families of those killed in accidents related to the faulty switch, or to pay the bills of those injured, than to make the fix on all the flawed vehicles they had put on the road. In short, a multibillion-dollar company decided that their money was more important than other people’s lives—a calculation that ultimately resulted in the deaths of at least thirteen people.316 If a drug dealer were to make this calculation preceding a deadly drive-by shooting intended to take out his gang rival (and thus protect his financial interests), we would call that criminal, we would seek to jail him, and we would probably consider his actions evidence of an inherently pathological culture. If corporate executives and engineers make this calculation, as was the case at GM (and several decades ago at Ford), the dominant analysis in the media and among the nation’s business class is that the result has been a terrible tragedy, but that it does not reflect anything meaningful about the value systems of the wealthy people with whom the blame ultimately rests.

It’s certainly not the poor who took advantage of investors by selling them risky and even useless mortgages in large bundles, knowing full well the dangers posed by those investment instruments; it was JPMorgan that did that, and Citigroup and Bank of America, among others, all of which are now ponying up tens of billions of dollars in settlements with the Justice Department for their questionable, unethical and in many cases blatantly fraudulent activities—although, as we’ll explore shortly, these fines represent very little in the larger scheme of things, and essentially amount to a slap on the wrist.317

The tendency toward recklessness and risk-taking that was central to the banking crisis stems directly from the value systems and psychology of those who make their livings as investment bankers. And according to recent studies, such persons are actually more likely to engage in reckless and risky behavior than even certified psychopaths. In one study, Swiss researchers tested stock traders on measures of cooperation and egotism, using computer simulations and standard intelligence tests, ultimately finding that the traders “behaved more egotistically and were more willing to take risks than a group of psychopaths who took the same test.” Particularly disturbing was the observed tendency of the investment bankers to deliberately seek to damage their opponents in the experiment. Rather than simply trying to outperform others on their own merits, the traders seemed especially interested in harming others in order to get ahead. As one member of the research team put it, it was as if the stockbrokers discovered that their neighbor had the same car as they did, “and they took after it with a baseball bat so they could look better themselves.”318

When it comes to the defective value systems of the nation’s wealthy economic minority, there are few better examples than that provided by the investment bank Goldman Sachs. The firm, which received more in bailout funds and government subsidies than any other investment bank, used millions in taxpayer money to pay top executives, even as the bank’s actions had helped bring the economy to the brink of utter collapse. From 2009 to 2011, after receiving bailout funds from the government, Goldman Sachs paid its senior officials nearly $50 billion in bonuses. And this they did despite a history of unethical, destructive activity responsible for the suffering and even death of millions, through the deliberate manipulation of food prices. As Chris Hedges has noted:

Goldman Sachs’ commodities index is the most heavily traded in the world. Goldman Sachs hoards rice, wheat, corn, sugar and livestock and jacks up commodity prices around the globe so that poor families can no longer afford basic staples and literally starve. Goldman Sachs is able to carry out its malfeasance at home and in global markets because it has former officials filtered throughout the government and lavishly funds compliant politicians—including Barack Obama, who received $1 million from employees at Goldman Sachs in 2008 when he ran for president. These politicians, in return, permit Goldman Sachs to ignore security laws that under a functioning judiciary system would see the firm indicted for felony fraud. Or, as in the case of Bill Clinton, these politicians pass laws such as the 2000 Commodity Futures Modernization Act that effectively removed all oversight and outside control over the speculation in commodities, one of the major reasons food prices have soared. In 2008 and again in 2010 prices for crops such as rice, wheat and corn doubled and even tripled, making life precarious for hundreds of millions of people. And it was all done so a few corporate oligarchs, the 1 percent, could make personal fortunes in the tens and hundreds of millions of dollars. Despite a damning 650-page Senate subcommittee investigation report, no individual at Goldman Sachs has been indicted, although the report accuses Goldman of defrauding its clients.319

But the manipulation of food prices is only part of Goldman Sachs’s pathological culture. Throughout the economic run-up to the Great Recession, and in moves that helped contribute directly to it, Goldman was misleading investors about the value of the investments they peddled, making them sound like guaranteed money makers even as they were actively betting against their own recommendations with still other investments—all to ensure they could make money either way. No matter what happened to the persons whom they tricked into buying securities they knew were junk, Goldman covered their assets (or more to the point, their asses), with little concern for the effect of their actions. When the dust settled, it was the taxpayers who got stuck with the bill.

That all of this reckless and irresponsible investment behavior was neither the creation of poor people nor the outgrowth of so-called underclass cultural pathology should be apparent. These tools of economic manipulation were the product of wealthy and highly educated individuals and the institutions for which they worked. If anything, they are the effluent of affluence, or more precisely, indicative of a culture of “predatory affluence,” within which people seek to make money without actually working, without having to create anything of lasting value, and without having to worry about the impact of their actions on others. They are the result of a kind of rapacious and ravenous greed of which the poor cannot even conceive. A poor person’s greed might lead them to steal $200 worth of goods from a store, or your purse, or your iPhone. Though unfortunate, and wrongheaded, and unethical, it isn’t likely to have a dramatic impact on the larger society. And of course, if apprehended, the perpetrator of such robbery, whose greed (or even desperation in some cases) led them to steal your stuff, will likely go to jail. But the greed of the rich, which leaves them unsatisfied with six-figure salaries and hungry for hundreds of millions, or even billions, in loot, causes them to figure out ways to game the entire economy, to play fast and loose with other people’s livelihoods, to inflate the demand for risky mortgages so that they can make money off the misfortune of others, and if the economy collapses, oh well. They still made a killing on the deal, and in all likelihood, none of them will go to jail—even the people who created investment instruments they knew would likely bring immense loss to others. Fraud and deceit on that level are considered “too big to jail,” and the institutions that perpetrated all that risk and fraud “too big to fail.” And so, despite their inverted value systems, their “short-term orientation,” and their sociopathic disregard for the well-being of others (all of which we are told characterize the underclass, but really fit the wealthy Wall Street speculators far better), they remain unpunished and available for the veneration of fellow aristocrats who game the system while acting as if they are making vital contributions to our society and the world.

Even rich people who are repeatedly caught committing crimes often get off with only minor punishments. A 2011 New York Times analysis uncovered more than fifty cases over the preceding fifteen years in which Wall Street bankers violated anti-fraud laws. Once caught, the violators promised never to break the laws again, and then proceeded to do so over and over with virtually no consequence. In many of the most prominent cases, top executives at the fraudulent firms got huge raises even after the misdeeds had been uncovered, and the fines imposed in most instances only amounted to between one and five percent of revenues—hardly sufficient to deter future financial crimes.320

Frankly, even when the rich make their money by perfectly legal means, there are still valid questions to be asked as to the ethics of their operations. Just as conservatives condemn the ethics of the poor for supposedly relying on government aid (which is entirely legal), so too should it be acceptable to challenge the ethics of companies and individuals at the top of the nation’s economic pyramid—especially considering how many of them make their money by less than laudable means. So consider, for instance, the operations of the Blackstone Group: an investment firm based in New York, which over the past few years has bought up tens of thousands of foreclosed properties and is now the largest single landlord in the nation, even renting out those properties in some cases to the very individuals who owned them previously.321 To hear the experts tell it, Blackstone is one of the “hottest investments” on Wall Street, and as such, it is making plenty of money. But are its operations ethical?

To make that determination it helps to think about what foreclosure meant to the more than seven million families that lost their homes during the Great Recession. For those who found themselves unable to make their mortgage payments, the American Dream came crashing down around them. Many were families that had never been able to own a home until relatively recently, but thanks to the proliferation of subprime loan instruments, which allowed people in the banking business to make mega-profits off inflated interest rates, they suddenly could. Lenders were offering loans to people who they knew would likely have difficulty making payments, but it didn’t matter. If the borrowers defaulted, the lenders could always reclaim the property and sell it again, and ultimately the risk was low: most subprime mortgages were being repackaged in large bundles and sold to wealthy investors in the form of mortgage-backed securities. If some of the loans went bad, it would be the investors who lost their money, not the banks themselves. So there was very little incentive for lenders to worry about whether the loans were too risky for the borrowers. And the borrowers, not understanding the finer points of things like “adjustable-rate mortgages” (which start out small and affordable but then balloon after a few years), got caught in a system intended to make short-term profits without much regard for the effect on them over time.

So into the breach of foreclosure came a number of investors seeking to snap up foreclosed properties. At first, this process of buying up blighted and vacant housing likely helped stabilize the housing market and was a net plus for the communities in which so many foreclosures had occurred, with mostly small investors picking up a handful of properties each. Now, several years later, buying up foreclosed homes has become less a way to stabilize communities and more a strategy for making bucketloads of money. By snapping up so many properties, big investors like Blackstone reduce the pool of moderately priced housing options for families to purchase and begin building equity. Because investment groups like Blackstone have so much cash on hand with which to buy up single-family homes for rental, individuals who might be able to buy foreclosed houses at auction for a good price are inevitably outbid, and end up renting at a higher cost than a fair mortgage would run them. The net result is fewer homeowners, less equity built up among the working class, and less affordable housing for moderate-income individuals, while wealthy investors make ten percent profits annually on each rental unit. And, of course, as Blackstone and others bundle all these rental properties into packages for wealthy investors, the risk to the economy grows. Just as mortgage-backed securities brought down the economy when too many people couldn’t pay their notes due to inflated interest rates, so too could rental-backed securities create problems if tenants can’t keep up with their rents. In an economy that isn’t producing rising wages for most workers—and certainly not the kind of workers who normally rent—that risk is quite real.

So how do we assess the ethics of such a practice? Is it ethical to make money off the pain of others who lost their homes? And to engage in a practice that ultimately increases the cost of living to moderate-income persons and reduces the ability of such persons to buy their own homes? Is moneymaking at all costs, and as quickly as possible, a value we wish to promote? Or should such a value and those who adhere to it be questioned? Ironically, one of the principal critiques of the so-called “culture of poverty” has long been that those trapped in this supposed culture have a “short-term orientation” and don’t plan for the future sufficiently; yet with groups like Blackstone, the very same short-term thinking—make money now, and lots of it, without regard for the risk that such actions might be introducing into the economy—is seen as normal, legitimate, even laudable in the eyes of the wealthy minority. By ratifying such practices and allowing them to proceed with very little regulation or oversight, we begin down a road similar to the one that has already caused so much pain for so many people.

Beyond individual examples like Blackstone, or the phenomenon of widespread unethical financial activity in the United States, there is reason to believe that the larger culture of affluence and great wealth itself poses a significant risk to the society we share. A recent analysis of seven separate studies found that the wealthy actually behave less ethically than the poor: they are more likely to break driving laws, more likely to exhibit unethical decision-making tendencies, to take valued goods from others, to lie in negotiations, to cheat so as to increase their chances of winning a prize, and to openly endorse unethical behavior to get ahead at work. According to the studies, the unethical tendencies of the upper class stem mostly from their more favorable attitudes toward greed when compared to those in lower-income groups.322 Likewise, four additional studies have recently found that lower-income persons are more generous than the wealthy, more trusting and more likely to help someone in need. The research finds that people who are categorized as poor and working class are more likely to act in pro-social ways because of their greater commitment to egalitarian values and greater levels of compassion.323 As explained by those who have studied the link between wealth and unethical behavior:

The answer may have something to do with how wealth and abundance give us a sense of freedom and independence from others. The less we have to rely on others, the less we may care about their feelings. This leads us towards being more self-focused. Another reason has to do with our attitudes towards greed. Like Gordon Gekko [in the fiction film Wall Street], upper-class people may be more likely to endorse the idea that “greed is good.” [Researcher Paul K.] Piff and his colleagues found that wealthier people are more likely to agree with statements that greed is justified, beneficial, and morally defensible. These attitudes ended up predicting participants’ likelihood of engaging in unethical behavior.324

Researchers who have explored the connection between wealth, power and cold-hearted, even cruel behavior have uniformly found it to be strong. In experimental settings, they have been able to induce feelings of power among subjects that lead to a substantially increased tendency to engage in self-aggrandizing, callous and cruel behaviors toward others. According to the research, wealth and power produce a kind of implicit, if not explicit, narcissism:

Even thoughts of being wealthy can create a feeling of increased entitlement—you start to feel superior to everyone else and thus more deserving. . . . Wealthier people were more likely to agree with statements like, “I honestly feel I’m just more deserving than other people. . . . ” This had straightforward and clearly measurable effects on behavior. . . . For example, when told that they would have their photograph taken, well-off people were more likely to rush to the mirror to check themselves out and adjust their appearance. Asked to draw symbols, like circles, to represent how they saw themselves and others, more affluent people drew much larger circles for themselves and smaller ones for the rest of humankind. If you think of yourself as larger than life, larger and more important than other people, it is hardly surprising that your behavior would become oriented towards getting what you think you deserve.325

Perhaps this is why polling data indicate the wealthy are so much less sympathetic to the lives and struggles of hard-working but still poor Americans. So whereas the general public says by an overwhelming margin (about four to one) that the minimum wage should be high enough to ensure that no family with a full-time worker at that wage remains poor, only forty percent of the wealthy agree.326 In this regard, the values of the rich are clearly at odds with those of the larger society. Interestingly, even though this fact renders the values of the wealthy pathological by definition—since pathology refers to something in an abnormal state—one rarely if ever hears discussion of the rich as a pathological “overclass” that manifests dysfunctional and abnormal values.

According to still more research, when people experience power their brains become less sensitive to others. As Canadian psychologists Michael Inzlicht and Sukhvinder Obhi note:

The human brain can be exquisitely attuned to other people, thanks in part to its so-called mirror system. The mirror system is composed of a network of brain regions that become active both when you perform an action . . . and when you observe someone else who performs the same action. . . . Our brains appear to be able to intimately resonate with others’ actions, and this process may allow us not only to understand what they are doing, but also, in some sense, to experience it ourselves—i.e., to empathize.

In our study, we induced a set of participants to temporarily feel varying levels of power by asking them to write a brief essay about a moment in their lives. Some wrote about a time when they felt powerful . . . others wrote about a time when they felt powerless . . . Next, the participants watched a video of a human hand repeatedly squeezing a rubber ball. While they watched, we assessed the degree of motor excitation occurring in the brain—a measure that is widely used to infer activation of the mirror system . . . by the application of transcranial magnetic stimulation and the measurement of electrical muscle activation in the subject’s hand. We sought to determine the degree to which the participants’ brains became active during the observation of rubber ball squeezing, relative to a period in which they observed no action.

We found that for those participants who were induced to experience feelings of power, their brains showed virtually no resonance with the actions of others; conversely, for those participants who were induced to experience feelings of powerlessness, their brains resonated quite a bit. In short, the brains of powerful people did not mirror the actions of other people. And when we analyzed the text of the participants’ essays, using established techniques for coding and measuring themes, we found that the more power that people expressed, the less their brains resonated. Power, it appears, changes how the brain itself responds to others.327

Additional research in two different countries has found that when individuals are placed in experimental settings and are led to believe they have just beaten someone else in a particular task (although in truth there was no competitor), they behave more aggressively toward the imaginary “other” than when they are led to believe they have lost the competition. Researchers suggest the studies demonstrate that winning makes one more aggressive, reducing empathy toward those one has defeated.328 This too could have implications for how those in a hyper-competitive economy act upon “winning” the various competitions of everyday life: by beating someone out for a job, or by making more money than one’s fellow citizens, for instance.

A lack of empathy flows from a culture of cruelty and predatory affluence rather than anything over which the poor and unemployed have control. Unethical behavior makes perfect sense in a culture where getting ahead at all costs is the supreme value. As David Callahan explains in his book The Cheating Culture:

In today’s competitive economy, where success and job security can’t be taken for granted, it’s increasingly tempting to leave your ethics at home every morning. Students are cheating more now that getting a good education is a matter of economic life and death. Lawyers are overbilling as they’ve been pushed to bring in more money for the firm and as it’s gotten harder to make partner. Doctors are accepting bribes from drug makers as HMOs have squeezed their incomes. . . . A CEO will inflate earnings reports to please Wall Street—and increase the value of his stock options by $50 million.329

And it’s not just corporate executives; even merely wealthy parents game the system on behalf of their kids. Affluent parents often pay tutors not only to help their children in certain subjects, but even to write papers for them; and it’s entirely the norm, it seems, for rich parents to pay psychologists thousands of dollars for a “learning disability” diagnosis for their kids, so those children can get extra time on standardized tests like the SAT. Of the 30,000 test takers who are granted disability status each year, the overwhelming majority are wealthy; meanwhile, low-income kids who might actually have learning disabilities, but not the money to pay for the diagnosis, are expected to play by the normal rules, and as a result, they are put at a disadvantage on a test that already favors the rich in the first place, if only because of the quality of their K–12 schooling.330

Occasionally, members of the economic minority will themselves acknowledge the way in which great wealth can distort one’s value system. Former hedge fund manager Sam Polk took to the pages of the New York Times in early 2014 to note the way his earnings—which he now could readily acknowledge had been unrelated to anything socially productive—had changed him, turning him into a wealth junkie, much the way a drug hooks an addict.

I wanted a billion dollars. It’s staggering to think that in the course of five years, I’d gone from being thrilled at my first bonus—$40,000—to being disappointed when, my second year at the hedge fund, I was paid “only” $1.5 million.

As Polk put it, his greed overtook any moral qualms he had about misleading investors and ruining people’s financial lives.

Not only was I not helping to fix any problems in the world, but I was profiting from them. During the market crash in 2008, I’d made a ton of money by shorting the derivatives of risky companies. As the world crumbled, I profited. I’d seen the crash coming, but instead of trying to help the people it would hurt the most—people who didn’t have a million dollars in the bank—I’d made money off it.331

Although Polk ultimately “got clean” by walking away from the hedge-fund world and the riches that came from it, and now is engaged in a number of truly inspiring projects intended to empower persons in marginalized communities, for every reformed money-junkie there are several others who are still ensnared in a culture of predatory affluence, manipulating financial instruments for their personal gain. It is a mindset that is at once entirely psychopathic and yet normalized within the system of capitalism to which Americans are wedded. Ultimately, folks like Polk were only able to become addicted to outrageous fortune in the first place because the society in which they live allows such grotesque profits to flow to those whose economic activity is so corrosive of the greater good. In short, it is the pathological values of policymakers, and of the economic aristocrats who call the tunes to which those policymakers so eagerly dance, that are to blame for the Sam Polks of the world. We create them systemically, through the antisocial, money-obsessed ideologies we teach to the people of the nation every day. Waste, disposability, selfish materialism, celebrity superficiality and the never-ending quest for more, more, more seem to have won out over the advance of citizenship, public interest, community building and the common good.

It is a line of work that does virtually nothing of value for communities or the world, unlike other, far less well-paying professions, as Polk himself notes in the Times piece:

Yes, I was sharp, good with numbers. I had marketable talents. But in the end I didn’t really do anything. I was a derivatives trader, and it occurred to me the world would hardly change at all if credit derivatives ceased to exist. Not so nurse practitioners.332

In some ways, that the wealthy turn out to be moral and ethical reprobates should hardly surprise us. To a large extent, dishonesty and predation are the values inculcated by the nation’s most elite finishing schools for bankers and others who are trained to siphon all they can out of the system. At Harvard Business School, for instance, students are told, “Speak with conviction. Even if you believe something only 55 percent, say it as if you believe it 100 percent.”333 Lying is not only something that rogues do to make an extra buck; rather, it is virtually built into the process of enormous money-making. According to the evidence, top executives are fully aware of and endorse that reality. One recent survey of five hundred top executives in the U.S. and the UK found that one in four said they knew of ethical and legal wrongdoing in the workplace, and the same number agreed that success in the financial services sector may actually require conduct that is unethical or illegal. One in seven of the executives said they would commit insider trading if they believed they could get away with it, and nearly one in three said that their compensation plans created incentives to violate the law or their own ethical standards.334

Jim Cramer, formerly a hedge fund manager and now a major television personality who gives investment advice to millions of people who hang on his every word (even though his investment advice is notoriously mediocre),335 has made the thinking very clear: “What’s important when you are in that hedge-fund mode is to not do anything remotely truthful because the truth is so against your view, that it’s important to create a new truth, to develop a fiction.”336

You can probably imagine the reaction if a poor person were to describe the importance of creative dishonesty so as to procure food stamp benefits or disability payments. Conservatives would point to them as proof positive of the dysfunctional and destructive values bred within the so-called underclass. But when rich white men like Jim Cramer encourage deceit as a way of life, so as to make billions of dollars, they are praised as genius investors worthy of significant tax concessions. While the nation is treated to a never-ending stream of warnings about the culture of poverty and the dysfunctional underclass pathologies of the struggling, the much more significant and destructive pathologies and inverted value systems of the rich go uninterrogated.

With Justice for None: The Real World Implications of a Culture of Cruelty

It’s important to note, however, that the culture of cruelty and the assorted rhetorical devices used to maintain and further it are far from mere academic matters: there are real-world consequences to the kind of callousness displayed toward the poor and those in need, just as there are policy consequences to the veneration of the predatory financial minority and the myths propagated to defend its excess wealth. As mentioned previously, such thinking stokes support for cuts in social safety-net programs and provides rhetorical ammunition for those who seek to limit the availability of unemployment insurance. But it does more than this.

The culture of affluence and cruelty contributes to the kind of obeisance to corporations that leads directly to death and suffering around the globe. Consider, for instance, pharmaceutical companies that manufacture drugs meant to treat HIV/AIDS. On the one hand, we know that several drugs developed for those with HIV have extended life in the United States for hundreds of thousands of people. That’s the good news. But intellectual property laws protect these pharmaceuticals from competition by generic drug makers, and so when South Africa passed a law that would have allowed South African drug companies to make generic versions of the same drugs, U.S. lawmakers and trade representatives cried foul. Even though international law allows and even calls for such actions to be permitted in cases of national emergency (which the AIDS crisis in South Africa surely was), lawmakers objected, deferring to corporations and their supposed property rights over and above the needs of desperately ill people to receive medicine at an affordable price. Although GlaxoSmithKline (the company that makes the main anti-AIDS drug, AZT) reached an agreement to allow a South African manufacturer to make the drug, it required the generic version to be sold only in that nation and insisted on a thirty percent royalty on all sales. Interestingly, Glaxo believes it deserves credit and money for AZT, even though researchers at the Michigan Cancer Foundation and Duke University, working under government grants from the National Cancer Institute, actually discovered it. Because of trade policies that prioritize the intellectual property rights of American companies, millions of people around the world suffer, priced out of the market for needed medicine.337

The culture of cruelty facilitates a callous disregard for the suffering of Americans as well. After the passage of the Affordable Care Act (ACA, commonly referred to as “Obamacare”), conservatives repeatedly pledged to torpedo the law, either by repealing it in Congress or by seeking to have it deemed unconstitutional by the courts. Although the Supreme Court upheld the law’s constitutionality, it also stipulated that states could not be compelled, as the law had called for, to expand their Medicaid rolls to persons who normally would not have qualified for benefits under the program. Relieved of the obligation to cover more patients under Medicaid, twenty-five states, mostly led by conservative Republicans, have refused to expand their Medicaid rolls, leaving five million low-income workers without access to affordable care.338 Because several of these states set absurdly stringent Medicaid eligibility levels—Texas and Louisiana won’t cover a family of three making more than $5,000 in annual income—even people living at only a bit more than a fourth of the poverty line will be “too rich” to qualify for Medicaid.339

The fact that opposition to the ACA is largely about cruelty and a callous disregard for the poor and working class should be obvious. Rush Limbaugh, for instance, has openly derided the centerpiece of the law—the idea that insurance companies should not be allowed to deny coverage to those with pre-existing conditions—by calling such a requirement nothing more than “welfare” and “nonsense.”340 In other words, to leading conservatives, if you have chronic asthma, or a history of cancer, or high blood pressure, or a thyroid condition, insurers should be allowed to reject you entirely for coverage, and to give coverage to anyone in such a condition is to make them welfare recipients. This is modern fiscal conservatism in a nutshell: the right of corporations to make money and make decisions that kill people is more sacrosanct than the right of American families to survive or receive health care that might help them live healthy and productive lives.

And in what seems like a direct mirror of the Dickensian thinking that led off this chapter, there has been a resurrection not only of hateful rhetoric toward the poor and struggling, but also of the mechanisms of punishment for poverty that were so prevalent in Dickens’s time and were endorsed by the likes of Ebenezer Scrooge. Debtors’ prisons, although technically illegal, seem to be making a comeback. In several states, poor people are being incarcerated for failing to pay various fines and fees (for traffic tickets or other low-level offenses) under laws governing contempt of court, which allows officials to avoid the appearance of locking people up for poverty, even though that is ultimately what the practice amounts to.

Consider the case of Kristy and Timothy Fugatt. In 2010, police in Childersburg, Alabama, ticketed them for driving with expired tags. The fine came to $296 in all, with an additional $198 for Kristy because her license had also expired. Because they were unable to pay, they were put on probation, which was overseen by a private company called Judicial Correction Services, a firm that charges $45 per month to each probationer it handles. Once the Fugatts fell behind on their payments for the initial violations—in large part because their infant child was hospitalized with a rare brain disease, and caring for him made it difficult to hold down steady employment341—they were charged additional fees and threatened with incarceration. In 2012, a police officer arrested them, threatened them with a Taser, and told them that their children would be taken away and placed in state custody. They gained release only after relatives came to the jail and paid off their outstanding debt.342

Although locking up indigent defendants for failure to pay fees and fines makes no sense economically—it costs more to jail people for noncompliance than the value of the debt those people have failed to pay343—the practice seems to be growing in popularity.344 Even worse, in Arkansas, persons who are late on their rent can actually be incarcerated. Rather than going through the civil courts when a resident falls behind or refuses to pay, a landlord there can issue an eviction notice and pursue the matter as a criminal violation. Even a tenant who is only one day late with rent can be evicted on ten days’ notice, and if the tenant has not vacated within those ten days, the landlord can have him or her arrested.345

Meanwhile, as the poor are incarcerated for minor offenses or for failure to pay fines and fees to the courts after receiving traffic tickets, the rich manage to avoid jail or prison time, even when their crimes are far more serious. Despite the persistent fraud at the heart of Wall Street’s activities for much of the first decade of the 2000s, fraud that ultimately cratered the economy for the rest of us, no investment bankers have gone to jail for their misdeeds. At most, their companies pay fines, which can then be written off on their next tax returns.

In the case of JPMorgan Chase, the deal cut with the government is a perfect example of how the state soft-pedals white-collar crime on a gargantuan scale, even as it furiously prosecutes low-level street criminals. Of the $9 billion ultimately paid to the government by Chase, only about $2 billion was defined in the settlement as a fine or penalty for wrongdoing; this means the remaining $7 billion could be written off on the company’s taxes the following year. Then, because the settlement terms gave Chase immunity from further civil liability, the firm’s stock shot up by six percent on news of the deal, adding roughly $12 billion of value to the company’s shares and essentially making the settlement a money-maker for a firm that had defrauded investors by selling mortgage-backed securities it knew full well were junk. Chase further insulated itself from the cost of its actions by laying off 7,500 low-level employees, which allowed it to offer generous raises to upper management, including a seventy-four percent raise for CEO Jamie Dimon, bringing his overall compensation package to nearly $20 million.346

Author and journalist Matt Taibbi explains the magnitude of the problem, and makes clear how economic privilege insulates the nation’s financial aristocracy from the kinds of punishment that would surely await average Americans guilty of even a fraction of the illegality engaged in by the banking class:

Not a single executive who ran the companies that cooked up and cashed in on the phony financial boom—an industrywide scam that involved the mass sale of mismarked, fraudulent mortgage-backed securities—has ever been convicted. Their names by now are familiar to even the most casual Middle American news consumer: companies like AIG, Goldman Sachs, Lehman Brothers, JP Morgan Chase, Bank of America and Morgan Stanley. Most of these firms were directly involved in elaborate fraud and theft. Lehman Brothers hid billions in loans from its investors. Bank of America lied about billions in bonuses. Goldman Sachs failed to tell clients how it put together the born-to-lose toxic mortgage deals it was selling. What’s more, many of these companies had corporate chieftains whose actions cost investors billions—from AIG derivatives chief Joe Cassano, who assured investors they would not lose even “one dollar” just months before his unit imploded, to the $263 million in compensation that former Lehman chief Dick “The Gorilla” Fuld conveniently failed to disclose. Yet not one of them has faced time behind bars.347

Even when a bank such as HSBC engaged in money laundering for drug cartels and the Iranian government, and agreed to pay a nearly $2 billion fine for those actions, it remained untouched by criminal prosecution, for fear that an indictment would collapse the bank and set off a chain reaction that could destroy the economy.348 Taibbi contrasted the kid-glove approach taken with HSBC to the routine prosecution of low-level drug users in an April 2014 interview with Amy Goodman of Democracy Now!:

This idea that some companies are too big to jail, it makes some sense in the abstract. . . . If you have a company . . . that employs tens (of thousands) or maybe even 100,000 people, you may not want to criminally charge that company willy-nilly and wreck the company and cause lots of people to lose their jobs. But . . . there’s no reason you can’t proceed against individuals in those companies. . . . In the case of a company like HSBC, which admitted to laundering $850 million for a pair of Central and South American drug cartels, somebody has to go to jail in that case. If you’re going to put people in jail for having a joint in their pocket or for slinging dime bags on the corner in a city street, you cannot let people who laundered $800 million for the worst drug offenders in the world walk. . . . In that case, they paid a fine; they paid a $1.9 billion fine. And some of the executives had to defer their bonuses for a period of five years—not give them up, defer them . . . and nobody did a single day in jail in that case [but] somebody at the bottom, he’s a consumer of the illegal narcotics business, and he’s going to jail, and then you have these people who are at the very top of the illegal narcotics business, and they’re getting a complete walk.349

Despite the economic calamities wrought by bankers and financial workers who clearly have no problem destroying the economic security of millions of American families, virtually no one among the nation’s political leadership has been willing to advocate serious punishment, let alone jail time, for their actions. President Obama and his Justice Department have been utterly unwilling to punish financial crimes with any degree of seriousness, or even to speak forcefully about the criminality of Wall Street. Unlike FDR, who openly challenged the economic aristocracy from which he hailed, modern politicians rarely say anything remotely as brave. Consider these words from Franklin Roosevelt, spoken in October 1936, and ask how often such straightforward sentiments are to be heard from elected officials in the twenty-first century:

We know now that Government by organized money is just as dangerous as Government by organized mob. Never before in all our history have these forces been so united against one candidate as they stand today. They are unanimous in their hate for me—and I welcome their hatred. I should like to have it said of my first Administration that in it the forces of selfishness and of lust for power met their match. I should like to have it said of my second Administration that in it these forces met their master.350

In the modern era, few people appear brave enough to so openly challenge the financial class, much less to welcome the hatred and anger of the wealthy as Roosevelt did. Today’s politicians are far too dependent on the campaign contributions of such persons to speak that kind of truth.

Meanwhile, as America seems incapable of arresting and prosecuting bankers whose actions very nearly destroyed the world economy, some among us have no problem with the thought of further criminalizing poor people whose only crime is asking for money. Since 2001, the nation has lost about thirteen percent of its low-income housing, thereby contributing to increased homelessness, yet restrictions on loitering, begging or resting in public have proliferated. Eighteen percent of all American cities ban sleeping in public, and a little more than four in ten have even made it illegal to sleep in one’s vehicle.351 Recently, the police chief in San Antonio (where panhandling has been essentially banned since 2011) suggested that giving money to someone who begs for it should also be illegal, and should be something for which the charitable could be ticketed.352 Because giving money to homeless people is apparently a far more serious offense than ripping off investors and homeowners to the tune of hundreds of billions, even trillions, of dollars.

Further demonstrating the way that valorization of the rich and the business class skews the dispensation of justice in America, consider the epidemic problem of wage theft and the nation’s pathetic response to it. Wage theft refers to a number of practices that result in business owners keeping money for themselves that has been earned by their employees. Examples include not paying for overtime work, paying less than minimum wage, cheating workers out of tips, or paying workers less than the prevailing wage required on union-negotiated contracts. According to a recent study by the Economic Policy Institute, wage theft of this sort costs workers billions of dollars each year—potentially as much as $50 billion annually—and amounts to transferring money from the hands of employees to the hands of business owners, thereby furthering income inequality. Considering that most wage theft affects low-wage employees who already struggle financially, siphoning off even small amounts from individual workers not only adds up to a huge windfall for bosses, but also can seriously impair the ability of these workers to support themselves and their families.353

Far from being a minor concern, the amount lost to wage theft dwarfs, according to FBI figures, the amount stolen in all robberies, burglaries, larcenies and motor vehicle thefts combined. Even if we only consider the money recovered by employees whose bosses stole wages from them and who discovered the violation, filed a complaint or hired a private attorney—obviously only a small portion of those from whom wages were stolen—the amount would be almost three times the total amount of money and property taken in all bank, residential, convenience store, gas station and street robberies combined. Yet whereas those who rob a convenience store or break into your house and steal jewelry, cash or electronics face serious criminal penalties, felony records and possible jail time, employers who steal from their workers need not worry about being subjected to such an indignity. There are only eleven hundred investigators in the Department of Labor capable of looking into the problem of wage theft, and the penalties, even for deliberate and repeated violations of the law, are hardly onerous: a maximum of $1,100 for failing to pay minimum wage or required overtime, for instance.

But it’s not only for workplace-related wrongdoing that the wealthy are let off easy: even rather standard crimes manage to go unpunished if one has enough money. In the last few years there have been a number of cases indicating that inequality is not just an economic matter, but a matter of unequal justice as well. In 2013, a sixteen-year-old boy from one of the nation’s wealthiest communities received probation after driving drunk and killing four people.354 Ethan Couch, who according to a psychologist called by his defense team suffers from “affluenza” (in other words, too much privilege and not enough accountability), received yet more privilege and was relieved of accountability by the judge who sentenced him.

Joseph Goodman, a wealthy business owner in Washington State, led police on a high-speed chase while drunk, and despite having a blood alcohol level twice the legal limit (and despite the fact that it was his seventh DUI), he was given a year of work release, requiring him to spend nights and weekends in jail but allowing him to go to work during the day. In February 2014, he was even allowed to travel to New York for the Super Bowl. Why such leniency? Because, according to the judge, to jail him outright would harm his business and employees.355 It’s the same logic that led a Colorado prosecutor not to seek felony charges against Martin Joel Erzinger after he ran over a bicyclist and fled the scene in 2010. Erzinger, a wealth manager at Morgan Stanley Smith Barney, was considered too important to jail. In the words of District Attorney Mark Hurlbert, “Felony convictions have some pretty serious job implications for someone in Mr. Erzinger’s profession.” Because Erzinger oversees more than $1 billion in assets for his rich clients, the D.A. feared that serious punishment would harm the interests of those “ultra-high net worth” individuals, and so he sought only misdemeanor charges that would not carry jail time. Even though Erzinger left the critically injured cyclist for dead and failed to report the incident, prosecutors thought it best to go easy on him.356

The courts are especially lenient on those who are heirs to large fortunes. In the past year, heirs to the S.C. Johnson & Son and DuPont fortunes managed to get off lightly for serious offenses in a way that no poor defendant in their position could have. Billionaire Samuel Curtis Johnson III confessed to sexually assaulting his stepdaughter on numerous occasions but was given only four months in prison. His attorney argued, and the judge apparently agreed, that hard time should be reserved for “maximum defendants” rather than wealthy scions like Johnson.357 As for the DuPont heir, Robert Richards IV, although convicted of raping his daughter, he avoided jail time and was sentenced only to sex offender treatment because, as the judge put it, he would “not fare well” in prison.358 Apparently, if you’re wealthy and white, prison is too harsh for you no matter your crimes, while for Americans who are poor (and especially Americans of color), incarceration is still the preferred option. Such is a justice system in a culture of cruelty, operating under the affluence of a small, self-valorizing minority that is given permission to prey upon the citizenry. Clearly, when modern-day Scrooges ask, “Are there no prisons? No workhouses?” they are only inquiring as to their availability for the poor and struggling.