4
Interpretations of Poverty in the Conservative Ascendance

AFTER THE MID-1970S, PROGRESS against poverty stalled. The 1973 oil crisis ushered in an era of growing inequality interrupted only briefly by the years of prosperity during the 1990s. Productivity increased, but, for the first time in American history, its gains were not shared by ordinary workers, whose real incomes declined even as the wealth of the rich soared. Poverty concentrated as never before in inner city districts scarred by chronic joblessness and racial segregation. America led western democracies in the proportion of its children living in poverty. It led the world in rates of incarceration. Trade union membership plummeted under an assault by big business abetted by the federal government. Policy responded by allowing the real value of the minimum wage, welfare benefits, and other social protections to erode. The dominant interpretation of America’s troubles blamed the War on Poverty and Great Society and constructed a rationale for responding to misery by retrenching on social spending. A bipartisan consensus emerged for solving the nation’s social and economic problems through a war on dependence, the devolution of authority, and the redesign of public policy along market models.

Urban Transformation

The years after the mid-1970s witnessed a confrontation between massive urban structural transformation and rightward-moving social policy that registered in a reconfigured and intensified American poverty in the nation’s cities. It is no easy task to define an American city in the early twenty-first century. Fast-growing cities in the post-war Sun Belt differ dramatically from the old cities of the Northeast and Midwest, as any drive through, for example, Los Angeles and Philadelphia makes clear. Nonetheless, all the nation’s central cities and their surrounding metropolitan areas experienced transformations of economy, demography, and space that resulted in urban forms without precedent in history. These transformations hold profound implications for poverty as both fact and idea, and they underscore the need to understand poverty as a problem of place as well as persons. A long tradition of social criticism—from nineteenth-century advocates of slum clearance through the “Chicago school” of the 1920s to the most cutting-edge urban theory of the twenty-first century (discussed in Chapter 5)—presents poverty as a problem of place. In one version, which has dominated discussions, conditions in places—most notably, substandard housing—produce, reinforce, or augment poverty. In an alternate version, poverty is a product of place itself, reproduced independent of the individuals who pass through it. Both versions help explain the link between poverty and the multisided transformation of metropolitan America.

The first transformation was economic: the death of the great industrial city that flourished from the late nineteenth century until the end of World War II. The decimation of manufacturing evident in Rust Belt cities resulted from both the growth of foreign industries, notably electronics and automobiles, and the corporate search for cheaper labor. Cities with economic sectors other than manufacturing (such as banking, commerce, medicine, government, and education) withstood deindustrialization most successfully. Those with no alternatives collapsed, while others struggled with mixed success. Some cities, such as Las Vegas, built economies on entertainment, hospitality, and retirement. With manufacturing withered, anchor institutions, “eds and meds,” increasingly sustained the economies of cities lucky enough to house them; they became, in fact, the principal employers. In the late twentieth century, in the nation’s twenty largest cities, “eds and meds” provided almost 35 percent of jobs.1 As services replaced manufacturing everywhere, office towers emerged as the late twentieth century’s urban factories. Services include a huge array of activities and jobs, from the production of financial services to restaurants, from high-paid professional work to unskilled jobs delivering pizza or cleaning offices. Reflecting this division, economic inequality within cities increased, accentuating both wealth and poverty.

The second kind of urban transformation was demographic. First was the migration of African Americans and white southerners to northern, midwestern, and western cities. Between World War I and 1970, about seven million African Americans moved north. The results, of course, transformed the cities into which they moved. Between 1940 and 1970, for example, San Francisco’s black population multiplied twenty-five times and Chicago’s grew five times. The movement of whites out of central cities to suburbs played counterpoint. Between 1950 and 1970, the population of American cities increased by ten million people while the suburbs exploded with eighty-five million.2

The idea that the white exodus to the suburbs represented “flight” from blacks oversimplifies a process with other roots as well. A shortage of housing; urban congestion; mass-produced suburban homes made affordable with low interest, long-term, federally insured loans; and a new highway system all pulled Americans out of central cities to suburbs. At the same time, through “blockbusting” tactics, unscrupulous real estate brokers fanned racial fears, which accelerated out-migration. In the North and Midwest, the number of departing whites exceeded the incoming African Americans, resulting in population loss and the return of swaths of inner cities to empty, weed-filled lots that replaced working-class housing and factories—a process captured by the great photographer Camilo Jose Vergara with the label “green ghetto.” By contrast, population in Sun Belt cities such as Los Angeles moved in the opposite direction. Between 1957 and 1990, the combination of economic opportunity, a warm climate, annexation, and in-migration boosted the Sun Belt’s urban population from 8.5 to 23 million.3

A massive new immigration also changed the nation and its cities. As a result of the nationality-based quotas enacted in the 1920s, the Great Depression, and World War II, immigration to the United States plummeted. The foreign-born population reached its nadir in 1970. The lifting of the quotas in 1965 began to reverse immigration’s decline. Immigrants, however, now arrived from new sources, primarily Latin America and Asia. More immigrants entered the United States in the 1990s than during any other decade in its history. These new immigrants fueled population growth in both cities and suburbs. Unlike the immigrants of the early twentieth century, they often bypassed central cities to move directly to suburbs and spread out across the nation. In 1910, for example, 84 percent of the foreign born in metropolitan Philadelphia lived in the central city. By 2006 the proportion had dropped to 35 percent. New immigrants have spread beyond the older gateway states to the Midwest and South, areas from which prior to 1990 immigrants largely were absent.4 Thanks to labor market networks in agriculture, construction, landscaping, and domestic service, Hispanics spread out of central cities and across the nation faster than any other ethnic group in American history. This new immigration has proved essential to labor market growth and urban revitalization. Again in metropolitan Philadelphia, between 2000 and 2006, the foreign born accounted for 75 percent of labor force growth.
A New York City research report “concluded that immigrant entrepreneurs have become an increasingly powerful economic engine for New York City … foreign-born entrepreneurs are starting a greater share of new businesses than native-born residents, stimulating growth in sectors from food manufacturing to health care, creating loads of new jobs and transforming once-sleepy neighborhoods into thriving commercial centers.” Similar reports came in from around the nation from small as well as large cities and from suburbs.5

Suburbanization became the first major force in the spatial transformation of urban America. Although suburbanization extends well back in American history, it exploded after World War II as population, retail, industry, services, and entertainment all suburbanized. In the 1950s, suburbs grew ten times as fast as central cities. Even though the Supreme Court had outlawed officially mandated racial segregation in 1917 and racial exclusions in real estate deeds in 1948, suburbs found ways to use zoning and informal pressures to remain largely white until late in the twentieth century, when African Americans began to suburbanize.6 Even in suburbs, however, they clustered in segregated towns and neighborhoods. Suburbs, it should be stressed, never were as uniform as their image. In the post-war era, they came closer than ever before to the popular meaning of “suburb” as a bedroom community for families with children. But that meaning had shattered completely by the end of the twentieth century, as a variety of suburban types populated metropolitan landscapes, rendering distinctions between city and suburb increasingly obsolete. The collapse of the distinction emerged especially in older inner ring suburbs where the loss of industry, racial transformation, immigration, and white out-migration registered in shrinking tax bases, eroding infrastructure, and increased poverty.7

Gentrification and a new domestic landscape furthered the spatial transformation of urban America. Gentrification may be defined as the rehabilitation of working-class housing for use by a wealthier class. Outside of select neighborhoods, gentrification by itself could not reverse the economic and population decline of cities, but it did transform center city neighborhoods with renovated architecture and new amenities demanded by young white professionals and empty-nesters who had moved in. At the same time, it often displaced existing residents, adding to a crisis of affordable housing that helped fuel homelessness and other hardships.

The new domestic landscape resulted from the revolutionary rebalancing of family types that accelerated after 1970. In 1900 married couples with children made up 55 percent of all households, single-mother families 28 percent, empty-nesters 6 percent, and nonfamily households (mainly young people living together) 10 percent, with a small residue living in other arrangements. By 2000 the shift was astonishing. Married-couple households now made up only 25 percent of all households, single-mother families 30 percent, empty-nesters 16 percent, and nonfamily households 25 percent. (The small increase in single-mother families masked a huge change. Earlier in the century they were mostly widows; by century’s end they were primarily never married, divorced, or separated.) What is stunning is how after 1970 these trends characterized suburbs as well as central cities, eroding distinctions between them. Between 1970 and 2000, for example, the proportion of suburban census tracts where married couples with children comprised more than half of all households plummeted from 59 percent to 12 percent, and in central cities from 12 percent to 3 percent. In the same years, the proportion of suburban census tracts where single mothers composed at least 25 percent of households jumped an astonishing 440 percent—from 5 percent to 27 percent—while in central cities it grew from 32 percent to 59 percent. The share of census tracts with at least 30 percent nonfamily households leaped from 8 to 35 percent in suburbs and from 28 to 57 percent in cities. These changes took place across America, in Sun Belt as well as Rust Belt. Truly, a new domestic landscape eroding distinctions between city and suburb had emerged within metropolitan America. Its consequences were immense. The rise in single-mother families living in poverty shaped new districts of concentrated poverty and fueled the rise in suburban poverty.
Immigration brought young, working-class families to many cities and sparked revitalization in neighborhoods largely untouched by the growth and change brought about by gentrification.8

Racial segregation also transformed urban space. The first important point about urban racial segregation is that it was much lower early in the twentieth century than late. In 1930 the neighborhood in which the average African American lived was 31.7 percent black; in 1970 it was 73.5 percent. No ethnic group in American history ever experienced comparable segregation. Sociologists Douglas Massey and Nancy Denton, with good reason, described the situation as “American apartheid.” In sixteen metropolitan areas in 1980, one of three African Americans lived in areas so segregated along multiple dimensions that Massey and Denton labeled them “hypersegregation.” Even affluent African Americans were more likely to live near poor African Americans than affluent whites were to live near poor whites. Racial segregation, argued Massey and Denton, by itself produced poverty.9 Areas of concentrated poverty, in turn, existed largely outside of markets—any semblance of functioning housing markets had dissolved, financial and retail services had decamped, jobs in the regular market had disappeared.10 Concentrated poverty and chronic joblessness went hand in hand. Public infrastructure and institutions decayed, leaving these districts epicenters of homelessness, crime, and despair. Even though segregation declined slightly in the 1990s, at the end of the century the average African American lived in a neighborhood 51 percent black, and many thousands lived in districts marked by a toxic combination of poverty and racial concentration. This progress reversed in the first decade of the twenty-first century. “After declining in the 1990s,” reported a Brookings Institution study, “the population in extreme-poverty neighborhoods—where at least 40 percent of individuals lived below the poverty line—rose by one-third from 2000 to 2005–09.”11

Despite continued African American segregation, a “new regime of residential segregation” began to appear in American cities, according to Massey and his colleagues. The new immigration did not increase ethnic segregation; measures of immigrant segregation remained “low to moderate” while black segregation declined modestly. However, as racial segregation declined, economic segregation increased, separating the poor from the affluent and the college educated from high school graduates. Spatial isolation marked people “at the top and bottom of the socioeconomic scale.” The growth of economic inequality joined increased economic segregation to further transform urban space. America, wrote three noted urban scholars, “is breaking down into economically homogeneous enclaves.” This rise in economic segregation afflicted suburbs as well as inner cities, notably sharpening distinctions between old inner ring suburbs and more well-to-do suburbs and exurbs. Early in the twenty-first century, as many poor people lived in suburbs as in cities, and poverty was growing faster in suburbs.12

In the post-war decades, urban redevelopment also fueled urban spatial transformation. Urban renewal focused on downtown land use, clearing out working-class housing, small businesses, and other unprofitable uses, and replacing them with high-rise office buildings, anchor institutions, and expensive residences. The 1949 Housing Act kicked off the process by facilitating city governments’ aspirations to assemble large tracts of land through eminent domain and sell them cheaply to developers. The Act authorized 810,000 units of housing to re-house displaced residents; by 1960, only 320,000 had been constructed. These new units of public housing remained by and large confined to racially segregated districts and never were sufficient in number to meet existing needs. “Between 1956 and 1972,” report Peter Dreier and his colleagues, experts in urban policy, “urban renewal and urban freeway construction displaced an estimated 3.8 million persons from their homes” but rehoused only a small fraction. The costs of urban renewal to the social fabric of cities and the well-being of their residents were huge. Urban renewal “certainly changed the skyline of some big cities by subsidizing the construction of large office buildings that housed corporate headquarters, law firms, and other corporate activities” but at the price of destroying far more “low-cost housing than it built” and failing “to stem the movement of people and businesses to suburbs or to improve the economic and living conditions of inner-city neighborhoods. On the contrary, it destabilized many of them, promoting chaotic racial transition and flight.”13

Neither the War on Poverty nor Great Society slowed or reversed the impact of urban redevelopment and racial segregation on the nation’s cities. President John F. Kennedy finally honored a campaign pledge in 1962 with a federal regulation prohibiting discrimination in federally supported housing—an action that “turned out to be more symbolic than real” on account of weak enforcement.14 In the 1968 Fair Housing Act, President Lyndon Johnson extended the ban on discrimination, and the practices that produced it, to the private housing market. Unfortunately, weak enforcement mechanisms left it, too, inadequate to the task throughout the 1970s and 1980s.15

For the most part, the War on Poverty and Great Society rested on an understanding of poverty as a problem of persons, or, in the case of community action, of power, but less often of place. Opportunity-based programs addressed the deficiencies of individuals, not the pathologies of the places in which they lived. This hobbled their capacity from the outset. The conservatives who seized on the persistence of poverty to underscore and exaggerate the limits of the poverty war and Great Society retained this individual-centered understanding of poverty as they developed a critique of past efforts and a program for the future, neither of which was adequate to the task at hand.

America’s slide into deep urban racial segregation, concentrated poverty, deindustrialization, physical decay, and near-bankruptcy coincided with the manifest failures of public policy, notably urban renewal and the government’s efforts to wage war on poverty. No matter that the story as popularly told was riddled with distortions and omissions. This narrative of catastrophic decline and public incompetence produced the trope of the “urban crisis,” which, in turn, handed conservatives a gift: a ready-made tale—a living example—to use as evidence for the bundle of ideas they had been nurturing for decades and which emerged triumphant by the late 1970s.

The Conservative Ascendance

The growth of urban poverty did not rekindle compassion or renew the faltering energy of the Great Society. Instead, a war on welfare accompanied the conservative revival of the 1980s. City governments, teetering on the edge of bankruptcy, cut social services; state governments trimmed welfare rolls with more restrictive rules for General Assistance (state outdoor relief); and the federal government attacked social programs. As President Ronald Reagan famously remarked, government was the problem, not the solution. These actions reduced the availability of help from every level of government during the years when profound structural transformations in American society increased poverty and its attendant hardships.16

Several sources fed the conservative restoration symbolized by Ronald Reagan’s election as president in 1980. Business interests, unable to compete in an increasingly international market, wanted to lower wages by reducing the influence of unions and cutting social programs that not only raised taxes but offered an alternative to poorly paid jobs. The energy crisis of 1973 ushered in an era of stagflation in which public psychology shifted away from its relatively relaxed attitude toward the expansion of social welfare. Increasingly worried about downward mobility and their children’s future, many Americans returned to an older psychology of scarcity. As they examined the sources of their distress, looking for both villains and ways to cut public spending, ordinary Americans and their elected representatives focused on welfare and its beneficiaries, deflecting attention from the declining profits and returns on investments that, since the mid-1970s, should have alerted them to the end of unlimited growth and abundance.17

Desegregation and affirmative action fueled resentments. Many whites protested court-ordered busing as a remedy for racial segregation in education, and they objected to civil rights laws, housing subsidies, and public assistance support for blacks who wanted to move into their neighborhoods while they struggled to pay their own mortgages and grocery bills. White workers often believed they lost jobs and promotions to less qualified blacks. Government programs associated with Democrats and liberal politics became the villains in these interpretations, driving blue-collar workers decisively to the right and displacing anger away from the source of their deteriorating economic conditions onto government, minorities, and the undeserving poor.

Suburbanization, the increased influence of the South on electoral politics, and the politicization of conservative Protestantism also fueled the conservative ascendance. “Suburbia,” political commentator Kevin Phillips asserted, “did not take kindly to rent subsidies, school balance schemes, growing Negro migration or rising welfare costs.… The great majority of middle-class suburbanites opposed racial or welfare innovation.” Together, the Sun Belt and suburbs, after 1970 the home to a majority of voters, constituted the demographic base of the new conservatism, assuring the rightward movement of politics among Democrats as well as Republicans and reinforcing hostility toward public social programs that served the poor—especially those who were black or Hispanic. The “middle class” became the lodestone of American politics, the poor its third rail.

Prior to the 1970s, conservative Christians (a term encompassing evangelicals and fundamentalists) largely distrusted electoral politics and avoided political involvement. This stance reversed in the 1970s when conservative Christians entered politics to protect their families and stem the moral corruption of the nation. Among the objects of their attack was welfare, which they believed weakened families by encouraging out-of-wedlock births, sex outside of marriage, and the ability of men to escape the responsibilities of fatherhood. Conservative Christians composed a powerful political force, about a third of the white electorate in the South and a little more than a tenth in the North. By the 1990s they constituted the largest and most powerful grassroots movement in American politics. In the 1994 elections, for the first time a majority of evangelicals identified themselves as Republicans. Although the inspiration for the Christian Right grew out of social and moral issues, it forged links with free-market conservatives. Fiscal conservatism appealed to conservative Christians whose “economic fortunes depend more on keeping tax rates low by reducing government spending than on social welfare programs that poor fundamentalists might desire,” asserted sociologists Robert Wuthnow and Matthew P. Lawson. The conservative politics that resulted fused opposition to government social programs and permissive legislation and court decisions (abortion, school prayer, gay civil rights, the Equal Rights Amendment, teaching evolution) with “support of economic policies favorable to the middle-class”—a powerful combination crucial for constructing the electoral and financial base of conservative politics.18

Two financial sources bankrolled the rightward movement of American politics. Political action committees mobilized cash contributions from grassroots supporters while conservative foundations, corporations, and wealthy individuals supported individual candidates, organized opposition to public programs, and developed a network of think tanks—including the American Enterprise Institute, the Heritage Foundation, and the libertarian Cato Institute—designed to counter liberalism, disseminate conservative ideas, and promote conservative public policy. Within a year of its founding in 1973, the Heritage Foundation had received grants from eighty-seven corporations and six or seven other major foundations. From 1992 to 1994 alone, twelve conservative foundations holding assets worth $1.1 billion awarded grants totaling $300 million. In 1995 the top five conservative foundations enjoyed revenues of $77 million compared to only $18.6 million for “their eight political equivalents on the left.”19

As well as producing ideas, conservative think tanks marketed them aggressively. Historian James Smith writes that “marketing and promotion” did “more to change the think tanks’ definition of their role (and the public’s perception of them)” than did anything else. Their conservative funders paid “meticulous attention to the entire ‘knowledge production process,’” represented as a “conveyor belt” extending from “academic research to marketing and mobilization, from scholars to activists.” Their “sophisticated and effective outreach strategies” included policy papers, media appearances, advertising campaigns, op-ed articles, and direct mail. In 1989 the Heritage Foundation spent 36 percent of its budget on marketing and 15 percent on fundraising. At the same time, wealthy donors countered the liberal politics of most leading social scientists with “lavish amounts of support on scholars willing to orient their research” toward conservative outcomes and a “grow-your-own approach” that funded “law students, student editors, and campus leaders with scholarships, leadership training, and law and economics classes aimed at ensuring the next generation of academic leaders has an even more conservative cast than the current one.”20

Conservative politics fused three strands: economic, social, and nationalist. The economic strand stressed free markets and minimal government regulation. The social strand emphasized the protection of families and the restoration of social order and private morality. Where the state intervened in the right to pray or in religiously sanctioned gender relations, it opposed federal legislation and the intrusion of the courts. Where the state sanctioned or encouraged family breakdown and immoral behavior, as in abortion or welfare, it favored authoritarian public policies. Militant anti-communism composed the core of conservatism’s nationalist strand, fusing the other two in opposition to a common enemy. It favored heavy public spending on the military and focused on both the external enemy—the Soviet Union—and the internal foe—anyone or anything promoting a socialist takeover of America. With the collapse of the Soviet Union, the bond holding together the social and economic strands of conservatism weakened, until it was restored by a new enemy: militant Islam embodied in Iraq and Iran and in the Taliban and Al Qaeda.

Conservatives triumphed intellectually in the 1980s because they offered ordinary Americans a convincing narrative that explained their manifold worries. In this narrative, welfare, the undeserving poor, and the cities they inhabited became centerpieces of an explanation for economic stagnation and moral decay. Welfare was an easy target, first because its rolls and expense had swollen so greatly in the preceding several years and, second, because so many of its clients were the quintessential undeserving poor—unmarried black women. Welfare, it appeared, encouraged young black women to have children out of wedlock; discouraged them from marrying; and, along with generous unemployment and disability insurance, fostered indolence and a reluctance to work. Clearly, it appeared, however praiseworthy the intentions, the impact of the War on Poverty and the Great Society had been perverse. By destroying families, diffusing immorality, pushing taxes unendurably high, maintaining crippling wage levels, lowering productivity, and destroying cities, they had worsened the very problems they set out to solve.

Even though these arguments were wrong, liberals failed to produce a convincing counter-narrative that wove together a fresh defense of the welfare state from new definitions of rights and entitlements, emergent conceptions of distributive justice, ethnographic data about poor people, and revised historical and political interpretations of the welfare state. This inability to synthesize the elements needed to construct a new narrative and compelling case for the extension of the welfare state was one price paid for the capture of poverty by economists and the new profession of public policy analysis. It resulted, as well, from a lack of empathy: an inability to forge a plausible and sympathetic response to the intuitive and interconnected problems troubling ordinary Americans: stagflation; declining opportunity; increased taxes and welfare spending; crime and violence on the streets; and the alleged erosion of families and moral standards.

Conservatives Confront Welfare and Poverty

The conservative criticism of federal anti-poverty programs updated the oldest and most coherent tradition in the political economy of welfare. In An End to Poverty?, the historian Gareth Stedman Jones excavates the origins of this tradition. The “moment of convergence between the late Enlightenment and the ideals of a republican and democratic revolution,” writes Stedman Jones, “was a fundamental historical turning point. However brief its appearance, however vigorously it was thereafter repressed, it marks the beginning of all modern thought about poverty.” The “first practicable proposals to end poverty,” found in the writings of Condorcet and Thomas Paine, “date back to the 1790s, and were a direct product of the American and French revolutions.” In their aftermath, attacks on the institutions of state and church in Britain and France provoked a fierce reaction fueled by Paine’s wild popularity in Britain. “The effort to thwart this revolutionary subversion of beliefs demanded the mobilization of unprecedented numbers of the population and engaged the energies of every organ of church and state in every locality.” The result “stamped upon the still protean features of political economy … a deeply anti-utopian cast of mind, transforming future enquiry in the area into a gloomy and tirelessly repeated catechism.” The “ambition to combat poverty,” writes Stedman Jones, “was henceforward conceived as a bleakly individual battle against the temptations of the flesh.”21

The most enduring tradition in the political economy of poverty in the United States as well as Britain is a product of this history. The poor constitute the unfortunate casualties of a dynamic, competitive economy in which they fail to grasp or hold onto the levers of opportunity. The widowed, the sick, and a few others remain exceptions, but for the most part the poor are losers, too incompetent or ill-disciplined to reap the bounty of increased productivity. Aiding them with charity or relief only interferes with the natural working of markets, retards growth, and, in the end, does more harm than good. From the social Darwinists of the nineteenth century through the work of contemporary political economists on the Right, this idea, dressed often with quantitative sophistication and theoretical skill, has retained an amazing purchase on popular thought and on politics as well.22

The modern conservative assault on the welfare state, which echoed this ancient interpretation, began in the 1970s with an attempt to deny that poverty remained a major problem. In 1978 Martin Anderson, who had been a domestic policy advisor to Richard Nixon, argued that poverty was no longer a serious problem in America. His book, Welfare: The Political Economy of Welfare Reform in the United States, attacked the concept of a guaranteed income (he had staunchly opposed the Family Assistance Plan from within the Nixon administration) and tried to show that the combination of in-kind benefits (food stamps, Medicare, housing) with public assistance and social insurance had eliminated all but residual pockets of poverty. He recommended a scaled-back, more efficiently administered version of the existing welfare state, whose political economy, he believed, left it impervious to fundamental reform.23

As he reflected on the claim of Anderson and others that poverty remained only a small, residual problem, Michael Harrington wryly observed: the “most astounding conservative discovery of the 1970s” was that poverty had “disappeared and no one noticed.” Within only a few years, he continued, “the statistical abolition of poverty had turned into an academic cottage industry in the United States.” The Reagan administration welcomed this new industry’s product as scientific support for its proposed reduction of social benefits, and the media publicized the good news.24

Almost all academic and political attention, Harrington observed, focused on possible ways the poor had been overcounted—largely through the failure to include in-kind benefits in the definition of poverty. In fact, official poverty statistics regularly undercounted the poor in two ways. First, manipulations of the official poverty line, as we have seen, excluded several million people from the ranks of the poor. Second, the Census Bureau count did not include undocumented workers, most of whom do not earn enough to escape poverty. Harrington estimated that in 1984 as many as thirty million more people—roughly double the Census Bureau’s count—could be labeled poor by the original official standards.25 Contrary to Anderson and other proponents of the poverty reduction thesis, poverty did not trend downward. After the mid-1970s, the real value of public assistance decreased, and it represented an ever-smaller proportion of median income. Although the cash value of Medicaid grew, its rise reflected the increased cost of health care, not a wider or improved delivery of services. In fact, poverty rates had started to climb. Reading the poverty reduction literature, one writer observed, it seemed as though a social problem had disappeared “like magic.” Nonetheless, the growth and persistence of poverty mocked accounts of its disappearance, and soon even conservatives could no longer base policy on the assumption that in-kind benefits had combined with public assistance to eliminate want.

By the early 1980s, the impact of in-kind benefits on poverty-level incomes was beside the point. Whatever statisticians might conclude, their arguments seemed distracting quibbles beside the mounting evidence of hunger, homelessness, and destitution. Because conservatives could not redefine poverty out of existence, they needed a fresh set of reasons for cutting social benefits. In 1981 a best-selling Book-of-the-Month Club selection, Wealth and Poverty by George Gilder, gave the new administration the intellectual ammunition it needed to justify an ambitious attempt to cut social spending on the poor and reduce taxes on the rich.

Wealth and Poverty received lavish praise from Jack Kemp, David Stockman, Barron’s, and The New York Times and became, according to one reviewer, the “Bible of the Reagan administration.”26 In 1984, as Gilder’s influence waned, Charles Murray’s more sober and conventional Losing Ground provided conservatives with an allegedly authoritative argument against direct government spending to combat the undeniable growth of poverty. In the same years, those who needed a more sophisticated philosophic justification for reducing the role of government could turn to Anarchy, State, and Utopia, by Harvard philosopher Robert Nozick.27

More a moralist than a social scientist, Gilder exalted capitalism as he mounted the barricades to defend it against its enemies, which included redistributive taxation, the welfare state, and feminism. As he rummaged through intellectual history, choosing bits of conservative anthropology, economics, and theology, Gilder played on the anti-intellectualism never far from the surface of American culture. Although he often drew on their conclusions for support, social scientists emerged as the most dangerous foes—muddleheaded, arrogant, self-aggrandizing technocrats whose narrow, amoral approach to policy had very nearly destroyed America.

Above all, Wealth and Poverty was a paean to capitalism. According to Gilder, the essence of capitalism is altruism, not self-interest. “Capitalism begins with giving. Not from greed, avarice, or even self-love can one expect the rewards of commerce but from a spirit closely akin to altruism, a regard for the needs of others, a benevolent, outgoing, and courageous temper of mind.” Capitalism takes the universal “gift impulse” and transforms it into a “disciplined process of creative investment based on a continuing analysis of the needs of others.” Not surprisingly, Gilder’s hero is the small entrepreneur, the daring risk-taker, agent of change, foundation of Schumpeter’s “creative destruction.”28

Gilder celebrated both great wealth and inequality because they embody not only the just rewards of success, but more important, the leaven for raising the living standards of all, including the poor. Poverty results from indolence, cynicism, and the demoralizing impact of public policy. “The only dependable route from poverty,” asserted Gilder, “is always work, family, and faith. The first principle is that in order to move up, the poor must not only work, they must work harder than the classes above them.… But the current poor, white even more than black, are refusing to work hard.” The demoralization of the poor was the consequence of a perverse welfare system, which eroded “work and family” and thus kept “poor people poor.”29

Gilder’s second principle of upward mobility is the maintenance of monogamous marriage. Married men, “spurred by the claims of family,” channel their “otherwise disruptive male aggressions” into providing for wives and children. The increase in female-headed families therefore perpetuates the poverty of women and children and unleashes the primitive impulses of men. “The key to lower-class life in contemporary America,” he asserted, was that “unrelated individuals” had become so “numerous and conspicuous” that they set the tone for the entire community. Neither “matriarchy” nor race constituted the core problem. Instead, it was “familial anarchy among the concentrated poor of the inner city, in which flamboyant and impulsive youths rather than responsible men provide the themes of aspiration.”30

Wealth and Poverty is riddled with inconsistencies and contradictions. Gilder’s glorification of great wealth sits uneasily beside his heroic portrait of small entrepreneurs or his attack on the bailout of the Chrysler Corporation. Nor was Gilder’s equation of capitalism with disinterested public love consistent with his stress on sober self-interest as a guide to how tax policy and economic incentives actually work. Nonetheless, his relentless assault on any public policy that retarded the individual pursuit of wealth did not swerve as he ranged across taxation, environmental regulation, affirmative action, and welfare. Most of his arguments were not new. His concrete criticisms of welfare, for instance, restated the classic arguments against the dole, which, as always, were couched in the best interests of the poor.31 But two themes set Gilder’s attack on welfare policy apart. One was the harshness of his assault on affirmative action. The other was his belief in the biological basis of sex roles. Affirmative action, he maintained, had aggravated the demoralizing effects of welfare by perpetuating “false theories of discrimination and spurious claims of racism and sexism as the dominant forces in the lives of the poor.” The fact of the matter was that “it would seem genuinely difficult to sustain the idea that America is still oppressive and discriminatory.” As for gender, based on his reading of anthropology, Gilder asserted that “female sexuality, as it evolved over the millennia, is psychologically rooted in the bearing and nurturing of children.” Civilization therefore depends on “the submission of the short-term sexuality of the young to the extended maternal horizons of women.” Welfare destroys constructive male values by appropriating the role of provider from husbands and fathers and giving it to the state. As a result, men are “cuckolded by the compassionate state.”32

Gilder played fast and loose with his sources and often relied on proof by haphazard anecdote. Overwhelming evidence refuted most of his claims about poverty and welfare, for instance. However, whether the data supported his theories did not matter all that much. For Gilder was primarily a moralist and theologian who rested his case on faith and courage in the face of a wild, unpredictable universe. More to the point, Gilder, more than careful and responsible social scientists, spoke to the interlaced economic, personal, and moral anxieties that fueled conservatism’s triumph in the era of Ronald Reagan.33

Gilder’s paean to capitalism attacked the social and economic policies of the War on Poverty and Great Society, but it did not engage John Rawls’s philosophic defense of redistributive government or the concept of distributive justice on which it rested. Instead, the major challenge to Rawls came from his Harvard colleague Robert Nozick. Nozick’s Anarchy, State, and Utopia (1974) and Rawls’s A Theory of Justice, noted one reviewer in a judgment from which few would dissent, were the “two most important books in political ethics since World War II.” Together, observed another reviewer, Rawls and Nozick were “inaugurating a needed renaissance in political philosophy.” Nozick, who also found a more popular audience, attracted a growing number of followers. Indeed, Anarchy, State, and Utopia, observed The New York Times Book Review, was “welcomed by American business journals as a ringing defense of private enterprise and a devastating critique of the welfare state.”34

It was ironic that conservatives praised Nozick, for he did not consider himself one of them. He intended Anarchy, State, and Utopia to give comfort to no political party and identified himself most closely with the libertarian position. Indeed, his argument runs directly counter to the moral authoritarian strand within contemporary conservatism. Nonetheless, readers often appropriate books for purposes other than those their author intended. Given Nozick’s summary statement of his thesis, little mystery exists about the attraction of Anarchy, State, and Utopia for the political Right:

Our main conclusions about the state are that a minimal state, limited to the narrow functions of protection against force, theft, fraud, enforcement of contracts, and so on, is justified; that any more extensive state will violate persons’ rights not to be forced to do certain things, and is unjustified; and that the minimal state is inspiring as well as right. Two noteworthy implications are that the state may not use its coercive apparatus for the purpose of getting some citizens to aid others, or in order to prohibit activities to people for their own good or protection.35

Anarchy, State, and Utopia rests on the assumption that “individuals are ends and not merely means; they may not be sacrificed or used for the achieving of other ends without their consent. Individuals are inviolable.” Two major arguments follow from this radical individualist premise. The first defends the existence of the state with a hypothetical account of its origins. The second attempts to show why arguments in favor of extending the scope of the state are wrong. A final brief section delineates a libertarian utopia whose possibility, for Nozick, makes the minimal state inspiring as well as just.36

Like Rawls, Nozick began with a state of nature, though its inhabitants did not make decisions behind a Rawlsian “veil of ignorance.” Instead, they shrewdly confronted dangers by creating protective associations that, over time and without prior intent, merged into a monopoly with the essential characteristics of a state. Because it arose from a process that did not violate individual rights, the monopoly or minimal state was both necessary and legitimate. Nonetheless, with one important exception, any extensions of its scope impermissibly violated individual rights. All major theories that attempted to legitimate these extensions were, for Nozick, fatally flawed.

Nozick concentrated most on Rawls and Marxism. Beyond their individual failings as theories, both shared the weakness of almost all theories of distributive justice: They were “end-state” theories, in that they advocated some optimal distribution of resources and evaluated societies on the basis of how closely they approximated it. They showed little concern, however, with how distribution decisions were reached, especially with the inescapable conclusion that they were attainable only through the violation of inviolable individual rights. Nozick proposed, to the contrary, to evaluate distributions according to three criteria: “the principle of acquisition of holdings, the principle of transfer of holdings, and the principle of rectification of violations of the first two principles.” Individual holdings acquired and transferred through morally permissible means are entitlements. Individuals deserve them; the state may not take them away. “Taxation of earnings from labor is on a par with forced labor.” The state may not appropriate the wealth of one individual for the benefit of another. It has no moral right to coerce any person to share resources. It has no obligation to assist the poor through the public purse, nor may it intervene to prohibit behavior that does not violate the inviolable rights of others.37

Through the principle of rectification of violations, Nozick provided a back door for an activist, redistributive state: “Although to introduce socialism as the punishment for our sins would be to go too far,” he observed, “past injustices might be so great as to make necessary in the short run a more extensive state in order to rectify them.” In fact, distribution patterns might be taken as “rough rules of thumb” for identifying the result of historic injustices, and “a rough rule of thumb for rectifying injustice” might be to “organize society so as to maximize the position of whatever group ends up least well-off in society.” Nozick therefore did not rule out ending up with the same practical politics as Rawls, even though he would reach them by an entirely different route.38

That route entailed a radical and curious disjunction of method. His account of the origins of the state rested on a wholly hypothetical state of nature, whose lack of concrete historical foundation he vigorously defended. Yet he criticized most theories of distributive justice for their ahistorical basis. As end-state theories, they remained unconcerned with how societies reached desired distributions and were thus insensitive to violations of individual rights. Only through historical accounts of the acquisition and transfer of holdings, he countered, may individuals’ entitlements to their possessions be sanctioned as legitimate or condemned as illegitimate.

Despite his stress on historical process, Nozick offered no evidence that contemporary distributions of wealth were outcomes of just processes of acquisition and transfer. Nor did he provide any but the most general guide for assessing them. Indeed, using his principles, few historians would have difficulty reaching a conclusion opposite to that which he implied—namely, the entitlement of contemporary Americans to the undisturbed enjoyment of all their wealth. For evidence of fraud, collusion, violence, and the violation of individual rights abounds in the nation’s past.

To Nozick, property was wholly a matter of things. He entered the debate about distributive justice among philosophers and political theorists but ignored its counterpart among legal scholars, thereby avoiding questions about the definition of property such as those raised by Charles Reich. Could he agree that property represents a relationship sanctioned by, and not antecedent to, the state? How would acknowledgment of changing forms of property affect his argument?

However Nozick intended his arguments to be used, they lent themselves easily to the retrenchment of social benefits and the exaltation of greed fashionable in the early 1980s. It is a greatly oversimplified, distorted, and vulgar, but nonetheless comprehensible, step from Nozick’s dazzling scholarship to the assertion by David Stockman, Reagan’s director of the Office of Management and Budget, that no one is entitled to claim any social benefits from government. It is an even less precipitous step to Charles Murray’s attack on the welfare state.

In Losing Ground (1984), Charles Murray quoted Robert Nozick only once. His chapter on the purposes of social welfare (“What Do We Want to Accomplish?”) started with Nozick’s observation that “The legitimacy of altering social institutions to achieve greater equality of material condition is, though often assumed, rarely argued for.” Like Nozick and Gilder, Murray is not an egalitarian. His slogan is, “Billions for equal opportunity, not one cent for equal outcome.” The legitimacy of social inequality underpinned his attack on social welfare, just as it did Gilder’s defense of wealth and Nozick’s concept of entitlement.39 Together, Gilder and Murray provided the perfect social theories for an age of expanding inequality.

Another assumption, not wholly consistent with the first, lurked just beneath the surface of Murray’s argument. The first assumption justified inequality with equal opportunity. The second assumed a harsh world of limited possibilities in which reward mirrored merit. “The tangible incentives that any society can realistically hold out to the poor youth of average abilities and average industriousness are mostly penalties, mostly disincentives.” With public support stripped away, as Murray wanted, most people could look forward only to hard work and limited gains. Social policy, therefore, must emphasize the stick rather than the carrot.40

Murray’s contention that social welfare harmed the poor updated an old position in the endless debates about poor laws, and his stance on the classification of poor people also echoed ancient arguments. “Some people,” he wrote, “are better than others. They deserve more of society’s rewards, of which money is only one small part.” Despite centuries of failed attempts to draw the line between the deserving and undeserving poor, for Murray the distinction between them emerged clearly enough to serve as the basis of social policy.

One reason for the spectacular success and influence of Losing Ground was Murray’s concentration on the core preoccupations within poverty discourse. Another was his style. Murray wrote clearly and in the manner of a social scientist. Losing Ground bristles with graphs and quantitative data. It has none of the bizarre flights of fancy or overt misogyny of Gilder’s work. A third was its marketing by the Manhattan Institute, which funded Murray to write it. Murray’s success illustrates the role of big money in the marketplace of ideas. William Hammett, president of the conservative Manhattan Institute, read a pamphlet Murray had written and invited him to the institute, where he was supported for the two years during which he wrote Losing Ground.41 Hammett invested in the production and in the promotion of Murray’s book. He spent about $15,000 to send more than 700 free copies to influential politicians, academics, and journalists, and he paid for a public relations specialist, Joan Kennedy Taylor, to manage the “Murray campaign.” Taylor aggressively booked Murray on TV shows and the lecture circuit; arranged conferences with editors and academics; and contacted newspapers and magazines. The institute even organized a seminar on Losing Ground with intellectuals and journalists influential in policy circles. Participants were paid honoraria of $500 to $1,500 and housed at an expensive New York hotel. As one observer commented, “the quality of Murray’s intellectual goods” was not the only reason for his success.42

Murray’s argument fit the Reagan agenda perfectly. At precisely the appropriate moment, it provided what appeared to be an authoritative rationale for reducing social benefits and dismantling affirmative action. Nearly every reviewer commented on Murray’s influence. In March 1985, policy expert Robert Greenstein observed: “Congress will soon engage in bitter battles over where to cut the federal budget, and Losing Ground is already being used as ammunition by those who would direct more reductions at programs for the poor.” Murray’s name, pointed out the prominent sociologist Christopher Jencks, “has been invoked repeatedly in Washington’s current debates over the budget—not because he has provided new evidence of the effects of particular government programs, but because he is widely presumed to have proven that federal social policy as a whole made the poor worse off over the past twenty years.” Losing Ground, others pointed out, was the Reagan administration’s new bible.43

The core of Murray’s thesis may be restated in the form of several propositions:

• Despite massively swollen spending on social welfare after 1965, the incidence of both poverty and antisocial behavior increased.

• Neither the growth of poverty nor antisocial behavior resulted from economic conditions, which were improving.

• Black unemployment increased during the period because young blacks voluntarily withdrew from the labor market.

• Female-headed black families increased because young men and women saw less reason to marry.

• Labor market and family behavior (also criminal behavior) reflected rational short-term responses to economic incentives.

• These incentives were the perverse result of federal social policy after 1965.

All these propositions, as one commentator after another showed, were wrong. For one thing, welfare did not cause the rise in black out-of-wedlock births, and Murray had the facts about incentives backward. Welfare benefits, in constant dollars, fell steeply after 1972, during the same period in which Murray claimed their generosity acted as a perverse incentive. (Murray nowhere mentioned this large decline in AFDC benefits.) The number of black children supported by AFDC declined by 5 percent from 1972 to 1980. No correlations existed between state-level benefits and the size of AFDC rolls. Out-of-wedlock births also rose sharply among women who did not receive welfare.

Similarly, Murray confused the relation between the growth of the economy and poverty. Poverty increased because the economy worsened after 1973. The Gross National Product, on which Murray relied, was an inadequate measure of either opportunity or individual well-being. Real wages declined, productivity dropped, inflation soared, and unemployment increased in part because the economy did not grow fast enough to absorb the large number of entering workers.

Christopher Jencks’s reworking of Census Bureau statistics showed that the share of the population living below the official poverty line was almost twice as high in 1965 as in 1980, and almost three times as high in 1950 as in 1980. The reduction in poverty appeared remarkable when set against the unemployment rate, which doubled between 1968 and 1980.44 As for the rationality of behavior, the relative advantage of work versus welfare increased during the 1970s, in contrast to Murray’s claim that it decreased. Murray’s assertion rested on the hypothetical example of a couple, Harold and Phyllis, who must choose whether or not to marry when Phyllis becomes pregnant. By 1980, claimed Murray, it made less economic sense for them to marry than ever before. As Robert Greenstein showed, Murray’s argument was flat wrong. First, it was based not on the nation but on Pennsylvania, where welfare benefits grew twice as fast during the 1970s as in the country as a whole. It also miscalculated income by incorrectly assuming that the family would have lost food stamp benefits had Harold worked. (Murray, however, included food stamps in calculating the family income.)

With accurate computation, work at a minimum-wage job was more profitable than welfare throughout most of the country; in the South, minimum-wage jobs often paid twice as much. Murray failed to provide a 1980 budget for Harold and Phyllis. Had he done so, it would have shown that the value of all welfare benefits packaged together had dropped by 20 percent during the 1970s. Conversely, after 1975 the Earned Income Tax Credit increased the advantages of working. As Greenstein pointed out: “in 1980—even in Pennsylvania—Harold and Phyllis would have one-third more income if Harold worked than if he remained unemployed and Phyllis collected welfare.”45

Murray distorted or ignored the accomplishments of social programs. He did not recognize the decline in poverty among the elderly, increased access to medical care and legal assistance, the drop in infant mortality rates, or the near abolition of hunger prior to the Reagan administration’s policies. He did not observe the irony that without federal affirmative action programs and other anti-discrimination measures, the black economic progress that he had lauded could not have occurred.

Murray was also mostly wrong concerning the history of poverty and social welfare in America, including social policy since 1965. Only because he told the story in a contextual vacuum was he able to argue that the federal government stumbled into a set of misguided policies that worsened the condition of the poor, which had already started to “improve” before massive public intervention. For instance, he pointed to rising black unemployment in the 1960s but did not connect it to the mechanization of southern agriculture in the 1950s that drove so many from the land and toward northern cities. His book remained innocent of any discussion of the transformations within American cities described in this chapter. Murray had nothing to say about the role of shifting occupational structures and spatial patterns in promoting poverty. Only by adding these omissions to his neglect of declining real wages, rising unemployment, and faulty economic and social history was Murray able to assert the unmediated and demoralizing impact of federal social policy on the poor during a period of growing prosperity and opportunity.

Because social policy is usually either futile or perverse, Murray recommended draconian cuts: the elimination of virtually all social benefits except Social Security (the reasons for whose stay of execution he did not explain) and reconstituted, limited unemployment insurance. In his view, only by cutting the cord that bound them to the government could federal policy truly help the poor. However, even the Reagan administration could not persuade Congress to dismantle the welfare state. Sophisticated conservatives in the 1980s still accepted the inevitability of big government in modern America. Their problem was to make it work for their ends and to set it on a plausible theoretical and moral base. This was the task begun by Lawrence Mead as he helped launch a new stage in the conservative ascendance.

By the mid-1980s, few conservatives still urged dismantling the welfare state. One reason was the intractable nature of poverty, especially among minorities in inner cities. As homelessness and children’s poverty became national issues, only the most stubborn conservative could argue that cutting social benefits would improve the condition of poor people by prodding them toward independence. A second reason was moral. Conservatives objected not only to government intervention in the economy, liberal foreign policy, and decreased military spending, but to trends they believed threatened family life and violated moral values. As the Reagan revolution failed to check abortion, divorce, out-of-wedlock pregnancy, and drug use, social conservatives reasserted the importance of authority in public life. Because only government had the power to prohibit or enforce behavior, the future of conservatism necessitated its reconciliation with the state. In social policy, the first major book to justify big government in conservative terms was Lawrence Mead’s Beyond Entitlement: The Social Obligations of Citizenship, published in 1986.

Mead did not quote Adam Smith; his preferred philosophers were Hobbes, Burke, and Tocqueville. His concern was society, not the individual, and he worried more about order than liberty. His target was permissive social policy, and his solution, enforced work obligations for the poor.

“My question,” wrote Mead, “is why federal programs since 1960 have coped so poorly with the various social problems that have come to afflict American society.” Although his question echoed Murray, Mead’s answer was different. The major problem with the welfare state, he claimed, was “its permissiveness, not its size.” By permissiveness, Mead meant that federal programs “award their benefits essentially as entitlements, expecting next to nothing from the beneficiaries in return.” These permissive federal social programs resulted partly from the structure of American government and partly from an intellectually flabby liberalism grounded in sociological explanations of poverty that denied the importance of authority and obligation.46

Mead believed that “functioning” in American society had declined during the previous two decades. By functioning, he meant competence as reflected in the proportion of the population on welfare, the unemployment rate, the amount of serious crime, and SAT scores. Americans, he concluded, not only were rejecting their social obligations to one another; they were losing their ability to cope with the ordinary tasks of everyday life. The fault, Mead made clear, did not lie with social structure or economic conditions. Its source was individual will conditioned by government programs that “shield their clients from the threats and rewards that stem from private society—particularly from the market place.” Instead of “blaming people as they deviate,” government must persuade them to “blame themselves.” As for the poor, Mead asserted, “the main barrier to acceptance is no longer unfair social structures, but their own difficulties in coping.”47

With Gilder, Nozick, and Murray, Mead shared a dark view of human nature. Gilder’s unbridled male aggression, Nozick’s warlike state of nature, Murray’s natural indolence and amorality, and Mead’s inability to resist the snares of permissiveness all circumscribed the limits of reform and mandated public coercion. Gilder, Murray, and Mead assumed that many people would always have to work hard at badly paid, dull jobs they detest. Workplace reform, high wages, and the constructive use of automation to increase leisure and decrease alienation played no role in their visions of the future. Instead, they believed public policy should help Americans adapt to their gloomy prospects by lowering their expectations. Gilder, Murray, and Mead therefore rejected equality of condition as a dangerous and illusory social goal. The American definition of equality, asserted Mead, did not rest on income or status. Rather, equality meant “the enjoyment of equal citizenship, meaning the same rights and obligations as others.” Mead defended his definition of equality by expedience rather than on constitutional or philosophical grounds: “The great virtue of equal citizenship as a social goal is that it is much more widely achievable than status.”48

Mead assumed that anyone who wanted a job could find one. Deindustrialization and structural unemployment played no larger a role in his argument than in Murray’s. “Unemployment has more to do with functioning problems of the jobless themselves than with economic conditions.” The lack of child care, for example, did not excuse unemployment among women AFDC beneficiaries. A “lack of government child care,” he claimed, “seems seldom to be a barrier; most prefer to arrange care with friends or relatives.” Others remained unemployed because they were unwilling to relocate, accept or remain at unpleasant and badly paid jobs, or commute more than twenty miles to work. The point was simple: “disadvantaged workers are unlikely to labor regularly unless they are required to as a condition of support of society.”49

The quality and material rewards of work remained irrelevant for Mead. “There are good grounds to think,” he asserted, “that work at least in ‘dirty,’ low-wage jobs, can no longer be left solely to the initiative of those who labor.” For them, “employment must become a duty, enforced by public authority, rather than an expression of self interest.” Low-wage work “apparently must be mandated,” he wrote, “just as a draft has sometimes been necessary to staff the military.” Government “need not make the desired behavior worthwhile to people. It simply threatens punishment.” What is more, the refusal to work was a grave act against the state. “Nonwork,” asserted Mead, “is a political act” that underlines the “need for authority … In an open political system rebellious actions, even if not overtly political, tend to provoke countervailing forces.” With plenty of jobs available, continued unemployment reflected more than indolence; it was subversion.50

The primary responsibility of government is not to raise living standards, increase personal satisfaction, or even to facilitate markets. “Government is really a mechanism by which people force themselves to serve and obey each other in necessary ways.” Obedience necessitates the enforcement of shared values. “Federal policymakers must start to ask how programs can affirm the norms for functioning on which social order depends.” Because social order demanded the public creation of norms, government “must take over the socializing role.”51

Mead used the condition of blacks to show how permissive social policy had backfired. Before the civil rights movement, he claimed, black society was more “coherent” than after; “at least racism did not exempt blacks from normal social demands as recent federal policy has done.” With no supporting evidence, Mead asserted that the lack of accountability built into federal social programs was “among the reasons why nonwork, crime, family breakup, and other problems are much commoner among recipients [of government benefits] than Americans generally.” The remedy is an “authoritative social policy” that enforces social obligations. Mead called for an enhanced, intrusive state to recapture social policy from the soft, muddled liberal intellectuals whose influence had moved government away from the values and desires of the vast majority of Americans.52

Mead nonetheless remained ambivalent about the scope of the state. He would assign it a key role in socialization but deny it one in employment. He redefined employment in public works projects as just another form of dependence. Great Society training programs offered “what amounted to welfare through the allowances and other benefits.” Indeed, by effectively relaxing the work obligation for many men, “the employment programs probably increased joblessness rather than reducing it.”53 The word “probably” was the key: Mead had no hard evidence for his speculation because none existed. Similarly, his brief, inaccurate, and derogatory comments on the community action programs ignored the grassroots origins of the civil rights movement, its role as a catalyst of the War on Poverty, and the pivotal ideological and administrative position of community action. Instead, Mead treated the War on Poverty as a conspiracy by the elite to advance its own power and position by trapping the disadvantaged in a web of dependence.54

Mead read the history of American political reform from 1900 to 1965 as a progressive tradition directed toward “the elimination of barriers to competent citizens.” These reforms assumed the competence of ordinary Americans, who made “good use of new opportunities with little further help from government.” Because Mead assumed these movements met their goals, he believed no major structural barriers to advancement now blocked the path toward prosperity or social justice.55 Mead’s reading of history distorted the past to support his interpretation of the present. A more accurate way to read the same events is this: Minorities and working people, no matter how competent, could not—and cannot—reduce discrimination, improve wages and working conditions, or escape periodic unemployment without the intervention of an active state operating on their behalf to legitimate and protect collective bargaining, set the minimum wage and hours of work, provide a social safety net, and enforce civil rights, among other crucial functions.

Mead’s account of the past also missed the authoritarian strand in the history of American social reform. Since the early nineteenth century, reformers had tried to regulate behavior and use government as an agent of socialization through, for example, compulsory public education, the temperance movement, and breaking up poor families. This history is important because it illustrates that the intrusive, authoritarian moments in the history of American reform usually failed to meet their goals. Prohibition provoked law-breaking, adulterated whiskey, and violence. Compulsory education did remove some poor children from the streets, but, despite its advocates’ promises, it had little impact on crime, poverty, or public morality. Child protection agencies did not stop child abuse, and within two decades family breakup had been discredited as an object of social policy. Juvenile courts failed to stem delinquency and disappointed their founders. Welfare regulations did not change the sexual behavior of poor women.56 Given this history, Mead’s stress on authority as the foundation of social policy appeared neither novel nor promising.

Mead was right about one key piece of history: throughout the nation’s past, Americans have held the work ethic sacred, and they have defined the undeserving poor by its absence. Whether alcoholics, tramps, unwed mothers, or young black men, the undeserving poor have remained outside the regular labor market by reason of their own personal deficiencies, not because of the difficulty of finding work. On examination, this harsh implication of work’s deification always has collapsed. In Mead’s case the claim that labor demand exceeded supply constituted the empirical centerpiece of his argument. But his evidence was unconvincing.57

Even if forced to concede a job shortage, Mead almost certainly would have staged only a tactical retreat. For his case had a moral rather than an empirical core. It rested, that is, on his concept of citizenship and its obligations—one of the core concerns in the history of poverty and welfare. To Mead, citizenship demanded the successful discharge of social and political obligations. “The capacities to learn, work, support one’s family, and respect the right of others,” wrote Mead, “amount to a set of social obligations alongside the political ones [such as voting, paying taxes, serving in the military].” He defined a “civic society” as one in which “people are competent in all these senses, as citizens and as workers.” In the social realm, government programs defined social expectations, as did the Constitution in the political realm. As a result, the structure of program benefits and requirements constituted “an operational definition of citizenship.” Except in the narrow legal definition, citizenship is not an entitlement of birth; it must be earned daily through competent and responsible behavior.58 As Mead used the term, “competence” became the badge of the deserving poor. Low SAT scores, unemployment, criminal convictions, and welfare dependence became interchangeable signals of incompetence—hallmarks of the undeserving poor whom Mead wrote out of citizenship.

Mead tried to be clear about what poor people owe other Americans. However, obligation implies mutual responsibilities, and Mead failed to ask what we, in our organized capacity as government or philanthropy, owe in return. Is it merely survival, or something more generous? Are obligations graduated? Should longer and harder work bring more benefits? There can be no legal or moral justification for asking people in need to sign a contract whose terms remain undisclosed. Unless it provides the prerequisites of competence, society (another of Mead’s ill-defined abstractions) lacks a moral title to obligation. Potential citizens should expect the resources essential for learning, work, and family life. These include adequate schools, affordable housing, reasonably priced child care, first-class health care, and decent jobs. In America, poor people can count on none of these.59

All the poor may know is that they are obliged to work. Why is less clear. In places, Mead implied that work is necessary for self-esteem and mental health. Any work is preferable to dependent idleness. More often, Mead assigned work a different purpose: oiling the gears of productivity. Whether individuals like their work is beside the point. Society needs their labor, and the needs of society always trump the preferences of individuals. If necessary, Mead would subsidize the wages of poorly paid workers rather than force their employers to pay them more. Mead did not explain why he was willing to underwrite private profits with public subsidies. In fact, the harder one pushes, the more Mead’s concept of social obligation collapses into a new strategy for preserving a pool of cheap, docile labor, an updated version of “regulating the poor,” one of welfare’s historic functions.60 As such it is a euphemism. For without mutuality, obligation becomes coercion.61

Beyond Entitlement may have failed as moral philosophy and social science, but it succeeded as politics. Mead tapped the widespread hostility toward the dependent poor that underlay the ferocious assault on welfare, as embodied in AFDC, which culminated in the 1996 “welfare reform” legislation. As he read political trends in his 1992 book, The New Politics of Poverty: The Non-Working Poor in America, a new politics of dependency had replaced the old politics of class.62 His argument had five parts:

1. “Nonwork” is the major cause of a new form of poverty. Young African American men and single mothers constitute the overwhelming number of the new poor.

2. Neither economic trends, racism, segregation, inadequate child care, nor other tangible obstacles explain the emergence of the new poverty.

3. Instead, the new poverty’s roots lie in psychology, culture, and human nature.

4. As a response to the new poverty, a new politics of dependence based on “social and personal” issues has emerged to replace the old redistributive politics of class.

5. At its most constructive, the new politics of dependence realizes that only public policy that utilizes the authority of the state to enforce acceptable behavior can alleviate the new poverty.

Mead was correct about the link between nonwork and poverty in the 1990s, although with the further erosion of wages, intensifying inequality, and the Great Recession, his focus on nonworking African American mothers and young African American men seems increasingly anachronistic. His dismissal of “tangible obstacles” in favor of “psychology, culture, and human nature” is contradicted by virtually all credible historical and social science research and suffused with a racial animus, which Chapter 5 will explore in its discussion of African American poverty and progress. But his identification of a new politics of dependency based on social and personal issues was right on the money, and his recognition that the new politics led straight to an authoritarian role for the state as enforcer of acceptable behavior found confirmation in the Republican Party’s 1994 Contract with America and then in 1996 in the successful and ultimately bipartisan campaign to abolish the entitlement to welfare by replacing AFDC with Temporary Assistance for Needy Families.

The Liberal Retreat from Inequality

Liberals failed to block the bipartisan assault on the welfare state. No liberal supporters of vigorous anti-poverty programs seized public attention with the force of George Gilder, Charles Murray, or Lawrence Mead. In fact, either implicitly or explicitly, the relatively few scholars defending the expansion of social welfare ceded vital terrain to the conservatives.

That terrain was equality. Most liberals rejected greater equality as the ground on which to attack poverty or defend the welfare state. Instead, they stressed either the immorality of deprivation, the threat to community, or a combination of the two. Some argued that severe deprivation violated moral and even constitutional obligations. Others contended that poverty inhibited participation in civic life and eroded the basis of community. As they formulated their case, however, nearly all liberal writers on poverty and welfare criticized exclusive reliance on market models as the basis for social policy. (The major exceptions to this general neglect of inequality were the work of Ronald Dworkin, whose writings defended greater equality as the goal of liberal social and political policy, and Amartya Sen, who offered a technically and philosophically sophisticated defense of “needs” rather than “desert” as the metric of inequality for distributional judgments and a definition of poverty as “capability deprivation.”)63

Writing in the Harvard Law Review in 1969, Frank Michelman first developed his case for the constitutional basis of welfare in the Fourteenth Amendment. The “judicial ‘equality’ explosion of recent times,” he observed, “has been largely ignited by reawakened sensitivity, not to equality, but to a quite different sort of value or claim which might better be called ‘minimum welfare.’” Welfare’s purpose, he was very clear, was not the promotion of economic equality, but “minimum protection against social hazard.” The “injury” resulting from poverty, he claimed, “consists more essentially of deprivation than of discrimination,” and “the cure accordingly lies more in provision than in equalization.”

A decade later, Michelman added provision of the conditions for political participation to his brief on behalf of the constitutional foundation of welfare rights. Welfare rights, he asserted, are “part of constitutionally guaranteed democratic representation.” Poverty, he stressed, not only disadvantaged individuals politically; it identified them as members of a group whose interests, despite their numbers, were “systematically subordinated” in the formation of political coalitions and the routine exercise of political influence. The relation between poverty and political deprivation remained especially severe for blacks. For them, meeting their “basic welfare interests” remained crucial to eliminating “vestiges of slavery from the system of democratic representation.”64

In contrast to Michelman, the contributors to Democracy and the Welfare State, edited by Amy Gutmann, based their arguments more on considerations of community and civic participation than on deprivation or vulnerability. “The primary focus of many of the papers in this volume,” wrote Gutmann, “is not individual virtue, equality, or self-realization, but democratic citizenship. The pivotal questions are: What social institutions are necessary to encourage and protect citizenship? What rights do citizens have, and what duties are required of them?” Emphasis on the responsibilities of citizens, however, is a slippery idea that can lead toward a harsh and authoritarian state—to Lawrence Mead—unless it is accompanied by an inclusive definition of citizenship and an appreciation of the conditions that make possible the full exercise of citizenship. A definition of citizenship that rests on obligations and contributions, as the political theorist T. H. Marshall recognized, runs the risk of marginalizing those who do not work in the regular labor market, and creating second-class citizens. At the same time, poverty and deprivation undermine democracy by eroding the capacity to participate fully in civic life.65 “Unless everybody can live a life free of elementary fears,” warned political theorist Ralf Dahrendorf, “constitutional rights can be empty promises and worse, a cynical pretense of liberties that in fact stabilize privilege.” A welfare state, as many of its theorists have recognized, is a precondition for modern democracy.66

In his contribution to Democracy and the Welfare State, J. Donald Moon responded to Gutmann’s questions by linking criticism of exclusive reliance on market models—the major trend in social policy—to the basis of civic participation. “[The] justification for organizing economic life through the market,” observed Moon, rests on “a conception of the individual as agent, capable of choice and deliberation, and entitled to certain rights and to be treated with respect.” Consequently, the “justification of the market” weakens when its normal operation “deprives some people—through no fault of their own—of the very means of survival, not to mention the possibility of maintaining their well-being and dignity.” Poverty’s significance extends beyond suffering to “an undeserved exile from society.” Moon found “something deeply and undeniably unjust about a social order that necessarily frustrates fulfillment of the promises it makes.”67

For Michael Walzer as well, poverty violates the basis of community. His argument derived from his concept of complex equality, developed in Spheres of Justice, the first major theoretical work on distributive justice to follow Rawls and Nozick. To Walzer, the primary enemy of justice is domination rather than the inegalitarian distribution of goods. Justice is “the opposite of tyranny,” and the recognition of complex equality the guarantor of democracy. Complex equality assumed the division of goods into multiple distributive spheres, each guided by its own rules, each relatively autonomous. “Every social good or set of goods,” he wrote, “constitutes, as it were, a distributive sphere within which only certain criteria and arrangements are appropriate.” Protecting the relative autonomy of spheres requires constant policing of their boundaries.

The greatest danger of violation usually comes from the market sphere. Powerful men and women most often use the resources accrued there to invade other spheres; “market power,” which tends to “overspill the boundaries,” turns into a form of tyranny, “distorting distributions in other spheres.” Only democracy has the capacity to protect the autonomy of spheres. “Once we have located ownership, expertise, religious knowledge, and so on in their proper places and established their autonomy, there is no alternative to democracy in the political sphere,” wrote Walzer.

Because citizenship is active and participatory, public policy, according to Walzer, should have as one goal empowerment, or widespread “participation in communal activities, the concrete realization of membership.” Membership, he contended, is “the primary social good that we distribute to one another,” and the “denial of membership is always the first of a long train of abuses.” Walzer’s emphasis on membership raises two important questions. What rights do individuals possess as members of communities, and what circumstances deprive them of full membership?68

Poverty and prolonged unemployment deprive people of membership because they represent a “kind of economic exile, a punishment that we are loath to say that anyone deserves.” Poverty creates exiles by stripping people of self-respect, which requires some substantial connection to the group; it thereby dilutes the meaning of citizenship, and turns neighbors into strangers.69

For Walzer, the public response to poverty should reflect three principles that together demand an extensive welfare state. Political communities should meet the needs of their members as they are collectively understood, distribute goods in proportion to need, and honor the “underlying equality of membership.” By their arrogance and the dependence they breed, public relief programs too often adopt the worst practices of private charity. “The old patterns survive; the poor are still deferential, passive, and humble, while public officials take on the arrogance of their private predecessors.” Public programs, therefore, should “aim at setting up the poor on their own” through “rehabilitation, retraining, subsidizing small businesses, and so on.” Because participation is so central to citizenship, the participation of the poor in the life of the community should not await the abolition of poverty; “rather, the struggle against poverty (and against every other sort of neediness) is one of those activities in which many citizens, poor and not so poor and well-to-do alike, ought to participate.”70

Walzer’s inclusive model of community and participatory definition of citizenship rest on a presumption of human dignity. Only an irreducible commitment to individual human worth justifies his horror of domination and emphasis on self-respect. Similar assumptions underpin the most comprehensive explanation of the injustice of poverty written in the 1980s—the 1986 Economic Justice for All, the Catholic bishops’ pastoral letter on the US economy.

Poverty, asserted the pastoral letter, “is not merely the lack of financial resources. It entails a profound kind of deprivation, a denial of full participation in the economic, social, and political life of society and an inability to influence decisions that affect one’s life.” For the bishops, as for Walzer, poverty represents a violation of community, a deprivation of citizenship, and an essential powerlessness that “assaults not only one’s pocketbook but also one’s fundamental human dignity.”71

The bishops’ definition of poverty reflected their search for a language of inclusion capable of appealing to non-Catholic audiences and broadly shared American social values. Their “option for the poor,” they stressed, should “not mean pitting one group against another, but rather, strengthening the whole community by assisting those who are most vulnerable.” Their reluctance to use divisive or provocative language, however, did not prevent the bishops from voicing unambiguous outrage at the persistence of poverty in America. “That so many people are poor in a nation as rich as ours,” they wrote, “is a social and moral scandal that we cannot ignore.”72

The letter’s brief against poverty was moral, its starting point human dignity, which, “realized in community with others and with the whole of God’s creation, is the norm against which every social institution should be measured.” Human dignity derives from the creation of humans in God’s image. Because it comes from God, it inheres in everyone, independent of “nationality, race, sex, economic status, or any accomplishment.” Dignity manifests itself “in the ability to reason and understand”; in “freedom to shape their own lives and the life of their communities, and in the capacity for love and friendship.”73

The recommendations in the pastoral letter rested on important assumptions about human rights, the conditions of human dignity, the nature of social obligations, and the quality of work. They assumed, first, that human rights encompass economic as well as civil and political rights. In this, they reflected the most controversial premise of contemporary poverty law, now generally rejected by federal courts but resurgent within the human rights community (as discussed in Chapter 5). They also assumed the social basis of human dignity, which cannot be realized apart from community. By implication, therefore, they rejected the individualism inherent in classical liberalism, advocated by free-market conservatives or libertarians such as Nozick.

Commitment to community leads to an emphasis on social obligations, which the bishops, unlike Mead, based on reciprocity. “Social justice implies that persons have an obligation to be active and productive participants in the life of society and that society has a duty to enable them to participate in this way.” Forcing people to work at unrewarding, deadening, or degrading work, as Mead would permit, clearly violates both human dignity and the reciprocity essential to the realization of community, because social justice requires organizing “economic and social institutions so that people can contribute to society in ways that respect their freedom and the dignity of their labor.” Work is necessary for human fulfillment, but it “should enable the working person to become ‘more a human being,’ more capable of acting intelligently, freely, and in ways that lead to self-realization.” (Ronald Dworkin made a similar point. “Treating people as equals requires a more active conception of membership. If people are asked to sacrifice for their community, they must be offered some reason why the community which benefits from that sacrifice is their community.”)74

Although the letter considered inequality in contemporary America too severe, its main goals remained the realization of community and the protection of human dignity. Indeed, in keeping with Catholic teaching, the bishops not only accepted but celebrated “the private ownership of productive property.” At the same time, the Church’s teaching rejected the notion that a free market “automatically” produces justice. Instead, the bishops, like Walzer, argued that there are some goods that money cannot buy. Markets, they asserted, are “limited by fundamental human rights. Some things are never to be bought and sold. This conviction has prompted positive steps to modify the operation of the market when it harms vulnerable members of society.”

Nor did they prefer government as the primary agent of social justice. Rather, they advanced the principle of subsidiarity, that “in order to protect basic justice, government should undertake only those initiatives which exceed the capacity of individuals or private groups acting independently.” Subsidiarity, much like Walzer’s separate spheres, protects freedom through “institutional pluralism” and links individuals to society through “mediating structures” composed of “small- and intermediate-sized communities or institutions.” Subsidiarity also implies the diffusion of moral responsibility for the poor through society and all its institutions.75

Despite its anchor in Catholic theology, the pastoral letter embodied most of the major themes in the liberal attempt to reconstruct the intellectual basis of the welfare state. Like a variety of secular sources, it grounded its advocacy of expanded social benefits more in deprivation (or vulnerability) and the conditions of community than in inequality; incorporated both public and private action in the quest for social justice; relied on institutional pluralism to protect liberty; and, though it assumed the legitimacy of private property, resisted the intrusion of the market beyond its appropriate sphere. Nonetheless, even Economic Justice for All reflected the retreat from equality that underpinned liberal writing on poverty and welfare.

The retreat from equality appeared to make good strategic sense in an era when “liberal” had become a pejorative label. But events proved this avenue a dead end. It led, first, away from a confrontation with the economic inequality spreading like wildfire through American society and exacerbating the problem of poverty. And, second, the reconstructed defense of the welfare state utterly failed to rehabilitate the idea of welfare or to penetrate public policy, which pivoted around a bipartisan attempt to redesign the American welfare state on completely different principles.

Ending Welfare

In the 1980s public policy coalesced around three major goals. “The first was the war to end dependence—not only the dependence of young unmarried mothers on welfare, but all forms of dependence on public and private support and on the paternalism of employers. The second was to devolve authority, that is, to transfer power from the federal government to the states, from state to counties, and from the public to the private sector. The third was the application of market models to social policy. Everywhere the market … triumphed as the template for a redesigned welfare state.”76 The design of “welfare reform” in 1996 reflected the interweaving of these three goals. It also reflected the triumph of Mead’s call to replace entitlement with obligation based on work and to make full citizenship depend on participation in the regular labor market. The irony is that Mead’s vision was implemented by Democratic president Bill Clinton.

The Welfare Reform Bill of 1996 represented the endpoint of a long, largely bipartisan effort to tie welfare to work. Known since the 1960s as workfare, this was in fact a very old idea. Since the workhouses of the eighteenth century, welfare reformers, to use the modern term, had tried unsuccessfully to make claimants work for benefits. “Twenty years ago,” wrote Jamie Peck in his 2001 Workfare States, “the issue of ‘workfare,’ which at the time signified a particular type of U.S. work program requiring participants to ‘work off’ their welfare checks, was pretty much a marginal concern for anything other than a specialist audience.… a rather perverse preoccupation of a small but influential cadre of intellectuals and social visionaries on the U.S. right.” By the 1990s, the picture had changed. “While the keyword workfare retains many of its pejorative connotations, its various generics such as ‘welfare-to-work,’ ‘labor-force attachment,’ ‘active-benefit systems,’ and ‘work-first welfare reform’ now trip off the tongues of politicians and policymakers across the political spectrum.”77 Peck’s survey of The New York Times, Washington Post, and Wall Street Journal found more references to workfare in 1995 alone than in the entire period from 1971 to 1980.78 Workfare, Peck argues, represented more than a policy innovation designed to punish poor people, frighten them away from welfare, or lower the cost of public assistance. Rather, it played a central role in the “attempt to restructure the ‘boundary institutions’ of the labor market”—to manage the transition away from what he calls a “welfarist” regime to a post-welfare regime marked by flexible labor markets and contingent workers where the former “discourses of needs, decency, compassion, and entitlement have been discredited” and replaced by new, “reworked discourses of work, responsibility, self-sufficiency and empowerment.”79

In 1967, with the Work Incentive Program (WIN), popularly known as workfare, the federal government, using a mix of sanctions and incentives, tried to revive the idea that employable welfare recipients should work for their benefits. Like all earlier programs, WIN failed—in the program’s first twenty months, only 10.6 percent of the 1.6 million cases referred for work were considered employable and the AFDC rolls continued to grow. WIN was caught in its own contradictions and the divergent priorities of its sponsors, who offered varied definitions of the problem welfare reform was supposed to solve. Was workfare to be the first phase of a broad attack on poverty? Was it a long-term strategy for reducing the cost of AFDC and enforcing a universal obligation to work? Was it a strategy designed primarily to restore family structure? Was it a means of solving the labor force problems of the low-wage service sector? How many of these purposes could workfare serve simultaneously? Were they consistent with each other? These were questions scarcely debated in the heady days when workfare appeared to be the hitherto elusive means with which to reform America’s welfare system.

Workfare represented a new social policy synthesis that rejected the Great Society’s emphasis on compassion, empowerment, and entitlement. Instead, its key concept revealed an increased belief by both liberals and conservatives that welfare recipients should earn their benefits through work and good behavior. In the 1970s workfare was defined narrowly to mean that people should work off their welfare grants—that is, welfare recipients should be required to work, even in make-work jobs, in exchange for receiving their benefits. However, this punitive conception of workfare failed in the few places where it was tried.80

In its 1981 budget act, Congress allowed states to test “new employment approaches to welfare reform,” officially called CWEP (Community Work Experience Programs). This, claimed social policy expert Richard Nathan, stimulated “new-style workfare”:

New-style workfare embodies both the caring commitment of liberals and the themes identified with conservative writers such as Charles Murray, George Gilder, and Lawrence Mead. It involves a strong commitment to reducing welfare dependency on the premise that dependency is bad for people, that it undermines their motivation to support themselves, and isolates and stigmatizes welfare recipients in a way that over a long period feeds into and accentuates the underclass mindset and conditions.81

As Fred Block and John Noakes argue, the two other events that moved new-style workfare high on the national agenda were its bipartisan support by the National Governors Association in 1985 and the introduction of welfare reform bills in both the Senate and House in 1987. New-style workfare, they contend, proved especially appealing to congressional Democrats, who could use it to show their leadership in forging a bipartisan solution uniting compassion with efficiency and thereby solving a heretofore intractable problem.82 New-style workfare consisted of obligational state programs encompassing a variety of employment and training services and activities: job search, job training, education programs, and also community work experience. By the mid-1980s, more than two-thirds of the states, asserted Nathan, had experimented with new-style workfare, and an intensive study of eight of them by MDRC, the policy evaluation firm, showed “promising,” if not “large and dramatic,” effects on increased earnings and reduced welfare dependency.83 Workfare advocates pounced on the results as justification for further reform tying welfare to work. However, a hard look at the data by Block and Noakes raised serious doubts about the outcome of new-style workfare. At best, they found, the programs helped state governments save some money by churning their rolls. They did not move participants into permanent self-sufficiency or even temporarily lift them out of poverty.84 Nonetheless, the MDRC results helped fuel the passage of the flawed Family Support Act of 1988, which commanded strong bipartisan support, passing the House 347 to 53 and the Senate 96 to 1.

Hailed in the national press, the Family Support Act in reality offered little hope of reforming welfare. It made unrealistic assumptions about the availability of good jobs for AFDC clients. In fact, few jobs open to them paid enough to lift a family out of poverty; the jobs they could find were often unstable, offered no benefits, and lacked prospects for upward mobility. Nor did working off welfare payments open up routes to unsubsidized jobs and independence. Workfare carried a stigma that made jobs in the regular labor market harder to land. The Family Support Act did enhance child support mechanisms by forcing women to identify the fathers of their children as a condition of support, and it required employers to withhold child support payments from absent fathers’ paychecks. Nonetheless, the rise in out-of-wedlock births outpaced the increase in collections from unmarried fathers. The Act also extended AFDC to two-parent families in several states, helped a small number of clients leave AFDC, and encouraged state experimentation with welfare reform. Severely underfunded, however, the Family Support Act did not live up to its heady promise and as a practical source of welfare reform died a quiet death. Its greatest achievement was paving the way for the harsher, more punitive version of welfare reform of 1996.85

In 1992 Bill Clinton ran for president with a pledge to “end welfare as we know it.” As a slogan, it was sufficiently ambiguous to avoid a major confrontation with his supporters on the political left. After all, everyone hated welfare. Liberals found it mean-spirited, demeaning, and inadequate. Conservatives, following Murray and Mead, believed it eroded the work ethic, fostered dependency, and rewarded the undeserving poor. The Heritage Foundation’s Robert Rector and William Lauber added to the sense of urgency around welfare “reform” by manufacturing a crisis of cost in their preposterous but widely cited America’s Failed $5.4 Trillion War on Poverty.86

In his perceptive Why Americans Hate Welfare, political scientist Martin Gilens found the most important component of Americans’ hostility to welfare to be the “widespread belief that most welfare recipients would rather sit home and collect benefits than work hard to support themselves.”87 Working mothers, now the majority, resented the free ride provided by welfare, which excused beneficiaries from juggling the burdens of home, work, and child care that confronted them every day. Welfare was also coded black. In 1994, 63 percent of AFDC recipients nationwide—a proportion much higher in anti-welfare southern states and some older cities—were African American. In Alabama it was 75 percent and in Washington, DC, 97 percent.88 Its racial hue helped drive a wedge between AFDC supporters and the white working poor.

As it happened, none of the arguments advanced against welfare by conservatives found support in empirical data. AFDC was not an expensive program—indeed, its real secret lay in its cheapness. It was hard to imagine a less expensive way to keep millions of nonworking people alive. Nor was it the source of the rise in out-of-wedlock births or family instability. In fact, AFDC rolls increased as both the real value of benefits and the number of children born to AFDC beneficiaries declined. As with Murray and Mead, conservative critics ignored or downplayed the factors that forced women onto welfare rolls, where most—contrary to common belief—remained only for short periods. A lack of jobs, declining wages, parental poverty, poor schools, the influence of neighborhood, racial and gender discrimination—none of these played any role in the conservative assault on welfare. In truth, it was the structure of AFDC that caused most of the program’s weaknesses. Rules for eligibility undermined attempts to build modest savings or keep a car reliable enough to drive to work. The Reagan administration had virtually eliminated rules allowing welfare beneficiaries to keep some of the money they earned through work, thereby removing incentives to supplement AFDC benefits. Without subsidies for child care, employment often remained impossible, and, perhaps worst of all, exchanging AFDC for low-wage work in most cases meant giving up medical insurance. Faced with these irrational impediments built into AFDC, many governors sought and won waivers from the federal requirements, and welfare rolls started to decline even before the draconian 1996 legislation.

On August 22, 1996, President Clinton signed the Personal Responsibility and Work Opportunity Reconciliation Act after a long, tortuous legislative struggle. It had passed the House by a vote of 256 to 170 and the Senate by 74 to 24. In one poll, 82 percent of Americans approved.89 Its passage signaled the triumph of Mead’s “new politics of poverty.” The new legislation replaced AFDC with Temporary Assistance for Needy Families (TANF), a time-limited program. TANF provided states with two block grants, one giving “cash and other benefits to help needy families support their children while simultaneously requiring families to make verifiable efforts to leave welfare for work and to avoid births outside marriage.” Lifetime benefits were limited to a maximum of five years, although states could set lower limits. The second block grant combined four major child care programs for low-income families. Under the new law, legal immigrants lost benefits—they were dropped from Supplemental Security Income and food stamps, and barred from most means-tested programs. In its philosophy and provisions, the new legislation embodied the three goals driving the redesign of social policy. It was, of course, a frontal assault on dependency. At the same time, it devolved significant authority to state governments: setting goals while allowing state and local governments to choose the means of implementation became a new hallmark of federal policy. And with the new bill, market models suffused the goals, administration, and philosophy of welfare. Market logic drove the abolition of the entitlement to public assistance. Entitlement, which contradicted the market imperative, swiftly became one of the most negative terms in the public policy lexicon. Even more, the new legislation reoriented welfare around the transition to work in the private labor market, and it left states free to contract with private providers to administer its provisions.
For-profit corporations, which seized the opportunity, decided whether American citizens would receive funds essential for their survival. Nina Bernstein of The New York Times reported that Lockheed Martin, “the $30 billion giant of the weapons industry,” was bidding against Electronic Data Systems (Ross Perot’s $12.3 billion information technology company) and Andersen Consulting to administer the $563 million Texas welfare program. Lockheed planned “to market even more comprehensive welfare contracts to states and counties in what is potentially a new multibillion-dollar industry to overhaul and run welfare programs.”90

The new legislation offended some high-level Clinton administration officials, who resigned in protest. One of them, poverty expert Peter Edelman, called it “the worst thing Bill Clinton has done,” and offered a stinging critique predicting that in five years, when the first time limits were reached, many beneficiaries would “fall into the abyss all at once.” The Congressional Budget Office found the legislation badly underfunded, without enough money to reach its goals. Others worried that TANF would prove unable to cope with a serious recession—a prediction fulfilled in the recession that began in 2008. Critics on the political Left, who had found themselves in the ironic position of defending the AFDC program that for years they had excoriated, were outraged and projected dire consequences. At first, the bill’s critics appeared wrong. Welfare rolls dropped farther and faster than anyone had expected. Both Republicans and Democrats touted the legislation as a huge success. A more careful look at the data—at the reasons why the rolls went down and at the consequences of the legislation for poor people—presents a far more ambiguous and unsettling picture. But for ideas about poverty, the triumphalist interpretation had an unexpected consequence. It transformed the uniform image of single mothers of color: no longer lazy and dependent, many of them became plucky moms trying hard to make it on their own with minimal help from the state—an image that proved temporary when social scientists and popular commentators in the early twenty-first century rediscovered family pathology and poor single mothers as culprits in the growth and persistence of urban poverty (as discussed in Chapter 5).91

The conservative narrative of poverty and welfare triumphed with the 1996 Welfare Bill. Poverty and welfare had become so intertwined in public debate that it was perhaps easy to forget that, although related, they were separate issues and that fixing welfare did not touch the structural origins of poverty. In practical terms, the 1996 legislation put into place a new federal public assistance program that met the goals of Murray, Mead, the Heritage Foundation, and others who had been pounding on AFDC from the political Right. At the same time, it pulled in the center Left, which had been largely if not entirely won over by the main lines of the conservative story about welfare, poverty, and dependence. Even more, it set the terms for debate about poverty and welfare, narrowing the scope of discussions of poverty by the political Left largely to the question of whether TANF worked as advertised or harmed current and potential beneficiaries. The major question became whether women forced into paid employment earned enough to escape poverty. The first best-selling book on poverty in many years, social critic Barbara Ehrenreich’s 2001 Nickel and Dimed, which recounted the struggles of women trying to get by on low pay from miserable jobs, focused on this question. “Welfare reform” intensified attention on making work pay and on the working poor, who became the primary beneficiaries of poverty policy and public sympathy. After Ehrenreich, the next widely heralded book on poverty was Pulitzer Prize-winning journalist David Shipler’s 2004 The Working Poor.92

The fact that even a full-time job did not guarantee escape from poverty was a huge and growing problem, and one that gave the lie to conservative claims like Mead’s that virtually all poverty resulted from nonwork. But exclusive concentration on the working poor deflected attention from poor people who remained out of the regular labor market, largely abandoned by their presumptive political allies, and from large questions about the political economy of poverty. In fact, without realizing it, fixation on the working poor led even writers and policy officials sympathetic to poverty issues into the oldest trap in the framing of poverty: the reification of the distinction between the able-bodied and impotent poor, to use the language of the eighteenth and early nineteenth centuries, or the deserving and undeserving poor of poverty discourse from the second quarter of the nineteenth century through today. No one, to reiterate a fundamental point of this book, has ever been able to draw the lines between these categories with precision. One reason is that poverty is a fluid and usually temporary state. Social welfare scholar Mark Rank, in his powerful One Nation, Underprivileged: Why American Poverty Affects Us All, showed first that the majority of “Americans who encounter poverty experience a short-term spell of impoverishment, while only a small minority experience poverty for an extended period,” and, second, that “rather than being an event occurring among a small minority of the U.S. population, poverty is an experience that touches a clear majority of Americans at some point during their adult lifetimes.”93 Drawing a sharp line between the working and nonworking poor ignores the temporary, fluid, ubiquitous character of poverty, creating fictive distinctions that reinforce the politics of moral condemnation and neglect.

Women remaining on the TANF rolls became a shrinking residuum—the undeserving poor, to be sure—but not a serious public problem. By its victory, the new politics of poverty had dissolved the need for its own continued existence. Aside from the problem of the working poor, poverty and dependence slipped into the background, no longer problems worth political capital, slippery, potentially explosive issues best left unmentioned.