“You’re going to send me to the poorhouse!”
Most of us reference the poorhouse only reflexively today. But the poorhouse was once a very real and much feared institution. At their height, poorhouses appeared on postcards and in popular songs. Local societies scheduled tours for charity-minded citizens and common gawkers. Cities and towns across the country still include streets named for the poorhouses once sited on them. There are Poor Farm Roads in Bristol, Maine, and Natchez, Mississippi; County Home Roads in Marysville, Ohio, and Greenville, North Carolina; Poorhouse Roads in Winchester, Virginia, and San Mateo, California. Some have been renamed to obscure their past: Poor House Road in Virginia Beach is now called Prosperity Road.
The poorhouse in my hometown of Troy, New York, was built in 1821. While most of its residents were too ill, too old, or too young for physical labor, able-bodied inmates worked on a 152-acre farm and in a nearby stone quarry, earning the institution its name: the Rensselaer County House of Industry. John Van Ness Yates, charged by the state of New York with conducting a year-long inquiry into the “Relief and Settlement of the Poor” in 1824, used Troy’s example to argue that the state should build a poorhouse in every county. His plan succeeded: within a decade, 55 county poorhouses had been erected in New York.
Despite optimistic predictions that poorhouses would furnish relief “with economy and humanity,” the poorhouse was an institution that rightly inspired terror among poor and working-class people. In 1857, a legislative investigation found that the House of Industry confined the mentally ill to 4½-by-7-foot cells for as long as six months at a time. Because they had only straw to sleep on and no sanitary facilities, a mixture of straw and urine froze onto their bodies in the winter “and was removed only by thawing it off,” causing permanent disabilities.
“The general state of things described as existing at the Poor House, is bad, every way,” wrote the Troy Daily Whig in February 1857. “The contract system, by which the maintenance of the paupers is let out to the lowest bidder, is in a very great measure responsible.… The system itself is rotten through and through.” The county superintendent of the poor, Justin E. Gregory, won the contract for the House of Industry by promising to care for its paupers for $1 each per week. As part of the contract, he was granted unlimited use of their labor. The poorhouse farm produced $2,000 in revenue that year, selling vegetables grown by starving inmates.
In 1879, the New York Times reported on its front page that a “Poorhouse Ring” was selling the bodies of deceased residents of the House of Industry to county physicians for dissection. In 1885, an investigation into mismanagement uncovered the theft of $20,000 from the Rensselaer County poor department, forcing the resignation of Keeper of the Poorhouse Ira B. Ford. In 1896, his replacement, Calvin B. Dunham, committed suicide after his own financial improprieties were discovered.
In 1905, the New York State Board of Charities opened an investigation that uncovered rampant sexual abuse at the House of Industry. Nurse Ruth Schillinger testified that a male medical attendant, William Wilmot, regularly attempted to rape female patients. Inmates insisted that Mary Murphy, suffering from paralysis, had been assaulted by Wilmot. “They heard footsteps in the hall and they said it was Wilmot down there again,” Schillinger testified, “and I found the woman the next morning with her legs spread apart and she couldn’t move them herself because they were paralyzed.”1
In his defense, John Kittell, the keeper of the House of Industry and Wilmot’s boss, claimed that his management had saved the county “five to six thousand dollars per year” by reducing the cost of inmate care. Wilmot faced no charges; action to improve conditions was not taken until 1910. Troy’s poorhouse remained open until 1954.
While poorhouses have been physically demolished, their legacy remains alive and well in the automated decision-making systems that encage and entrap today’s poor. For all their high-tech polish, our modern systems of poverty management—automated decision-making, data mining, and predictive analytics—retain a remarkable kinship with the poorhouses of the past. Our new digital tools spring from punitive, moralistic views of poverty and create a system of high-tech containment and investigation. The digital poorhouse deters the poor from accessing public resources; polices their labor, spending, sexuality, and parenting; tries to predict their future behavior; and punishes and criminalizes those who do not comply with its dictates. In the process, it creates ever-finer moral distinctions between the “deserving” and “undeserving” poor, categorizations that rationalize our national failure to care for one another.
This chapter chronicles how we got here: how the brick-and-mortar poorhouse morphed into its data-based descendants. Our national journey from the county poorhouse of the nineteenth century to the digital poorhouse today reveals a remarkably durable debate between those who wish to eliminate and alleviate poverty and those who blame, imprison, and punish the poor.
* * *
America’s first poorhouse was built in Boston in 1662, but it wasn’t until the 1820s that imprisoning the indigent in public institutions became the nation’s primary method of regulating poverty. The impetus was the catastrophic economic depression of 1819. After a period of extravagant financial speculation following the War of 1812, the Second Bank of the United States nearly collapsed. Businesses failed, agricultural prices dropped, wages fell as much as 80 percent, and property values plummeted. Half a million Americans were out of work—about a quarter of the free adult male population. But political commentators worried less about the suffering of the poor than they did about the rise of “pauperism,” or dependence on public benefits. Of particular concern was outdoor relief: food, fuel, medical care, clothing, and other basic necessities given to the poor outside of the confines of public institutions.
A number of states commissioned reports about the “pauper problem.” In Massachusetts, Josiah Quincy III, scion of a wealthy and influential Unitarian family, was appointed to the task. Quincy genuinely wanted to alleviate suffering, but he believed that poverty was a result of bad personal habits, not economic shocks. He resolved the contradiction by suggesting that there were two classes of paupers. The impotent poor, he wrote in 1821, were “wholly incapable of work, through old age, infancy, sickness or corporeal debility,” while the able poor were just shirking.2
For Quincy, the pauper problem was caused by outdoor relief itself: aid was distributed without distinguishing between the impotent and the able. He suspected that indiscriminate giving destroyed the industry and thriftiness of the “labouring class of society” and created a permanently dependent class of paupers. His solution was to deny “all supply from public provision, except on condition of admission into the public institution [of the poorhouse].”3
It was an argument that proved alluring for elites. At least 77 poorhouses were built in Ohio, 79 in Texas, and 61 in Virginia. By 1860, Massachusetts had 219 poorhouses, one for every 5,600 residents, and Josiah Quincy was enjoying his retirement after a long and rewarding career in politics.
From the beginning, the poorhouse served irreconcilable purposes that led to terrible suffering and spiraling costs. On the one hand, the poorhouse was a semi-voluntary institution providing care for the elderly, the frail, the sick, the disabled, orphans, and the mentally ill. On the other, its harsh conditions were meant to discourage the working poor from seeking aid. The mandate to deter the poor drastically undercut the institution’s ability to provide care.
Inmates were required to swear a pauper’s oath stripping them of whatever basic civil rights they enjoyed (if they were white and male). Inmates could not vote, marry, or hold office. Families were separated because reformers of the time believed that poor children could be redeemed through contact with wealthy families. Children were taken from their parents and bound out as apprentices or domestics, or sent away on orphan trains as free labor for pioneer farms.
Poorhouses provided a multitude of opportunities for personal profit for those who ran them. Part of the keeper’s compensation came in the form of unlimited use of the poorhouse grounds and the labor of inmates. Many of the institution’s daily operations could thus be turned into side businesses: the keeper could force poorhouse residents to grow extra food for sale, take in extra laundry and mending for profit, or hire inmates out as domestics or farmworkers.
While some poorhouses were relatively benign, the majority were overcrowded, ill-ventilated, filthy, insufferably hot in the summer and deathly cold in the winter. Health care and sanitation were inadequate and inmates lacked basic provisions like water, bedding, and clothing.
Though administrators often cut corners to save money, poorhouses also proved costly. The efficiencies of scale promised by poorhouse proponents required an able-bodied workforce, but the mandate to deter the able poor virtually guaranteed that most inmates were unable to work. By 1856 about a quarter of poorhouse residents in New York were children. Another quarter were mentally ill, blind, deaf, or developmentally delayed. Most of the rest were elderly, ill, physically disabled, or poor mothers recovering from childbirth.
Despite their horrid conditions, poorhouses—largely through their failings—succeeded in offering internees a sense of community. Inmates worked together, endured neglect and abuse, nursed the sick, watched each other’s children, ate together, and slept in crowded common rooms. Many used poorhouses cyclically, relying on them between growing seasons or during labor market downturns.
Poorhouses were among the nation’s first integrated public institutions. In his 1899 book, The Philadelphia Negro, W.E.B. Du Bois reported that African Americans were overrepresented in the city’s poorhouses because they were refused outdoor relief by exclusively white overseers of the poor. Residents described as Black, Negro, colored, mulatto, Chinese, and Mexican are common in poorhouse logbooks from Connecticut to California. The racial and ethnic integration of the poorhouse was a sore spot for white, native-born elites. As historian Michael Katz reports, “In 1855, a New York critic complained that the ‘poor of all classes and colors, all ages and habits, partake of a common fare, a common table, and a common dormitory.’”4
Poorhouses were neither debtors’ prisons nor slavery. Those arrested for vagrancy, drunkenness, illicit sex, or begging could be forcibly confined in them. But for many, entry was technically voluntary. The poorhouse was a home of last resort for children whose families could not afford to keep them, travelers who fell on hard times, the aged and friendless, deserted and widowed, single mothers, the ill and the handicapped, freed slaves, immigrants, and others living on the margins of the economy. Though most poorhouse stays lasted less than a month, elderly and disabled inmates often stayed for decades. Death rates at some institutions neared 30 percent annually.5
Poorhouse proponents reasoned that the institution could provide care while instilling moral values of thrift and industry. The reality was that the poorhouse was an institution for producing fear, even for hastening death. As social work historian Walter Trattner has written, elite Americans of the time “believed that poverty could, and should, be obliterated—in part, by allowing the poor to perish.” Nineteenth-century social philosopher Nathaniel Ware wrote, for example, “Humanity aside, it would be to the best interest of society to kill all such drones.”6
* * *
Despite their cruelty and high cost, county poorhouses were the nation’s primary mode of poverty management until they were overwhelmed by the Panic of 1873. A postwar economic boom collapsed under the weight of Gilded Age corruption. Rampant speculation led to a run of bank failures, and financial panic resulted in another catastrophic depression. Rail construction fell by a third, nearly half of the industrial furnaces in the country closed, and hundreds of thousands of laborers were thrown out of work. Wages dropped, real estate markets tumbled, and foreclosures and evictions followed. Local governments and ordinary individuals responded by creating soup kitchens, establishing free lodging houses, and distributing cash, food, clothing, and coal.
The Great Railroad Strike of 1877 began when workers for the Baltimore & Ohio Railroad learned that their wages would be cut yet again—to half their 1873 levels—while the railroad’s shareholders took home a 10 percent dividend. Railroad workers stepped off their trains, decoupled engines, and refused to let freight traffic through their yards. As historian Michael Bellesiles recounts in 1877: America’s Year of Living Violently, when police and militia were sent in with bayonets and Gatling guns to break the strikes, miners and canal workers rose up in support. Half a million workers—roustabouts and barge captains, miners and smelters, factory linemen and cannery workers—eventually walked off the job in the first national strike in US history.
Bellesiles reports that in Chicago the Czechs and the Irish, traditionally ethnic adversaries, cheered each other. In Martinsburg, West Virginia, white and Black railroad workers shut down the train yard together. The working families of Hornellsville, New York, soaped the rails of the Erie railroad track. As strikebreaking trains attempted to ascend a hill, they lost traction and slid back into town.
The depression also affected Germany, Austria-Hungary, and Britain. In response, European governments introduced the modern welfare state. But in America, middle-class commentators stoked fears of class warfare and a “great Communist wave.”7 As they had following the 1819 Panic, white economic elites responded to the growing militancy of poor and working-class people by attacking welfare. They asked: How can legitimate need be tested in a communal lodging house? How can one enforce work and provide free soup at the same time? In response, a new kind of social reform—the scientific charity movement—began an all-out attack on public poor relief.
Scientific charity argued for more rigorous, data-driven methods to separate the deserving poor from the undeserving. In-depth investigation was a mechanism of moral classification and social control. Each poor family became a “case” to be solved; in its early years, the Charity Organization Society even used city police officers to investigate applications for relief. Casework was born.
Caseworkers assumed that the poor were not reliable witnesses. They confirmed their stories with police, neighbors, local shopkeepers, clergy, schoolteachers, nurses, and other aid societies. As Mary Richmond wrote in Social Diagnosis, her 1917 textbook on casework procedures, “the reliability of the evidence on which [caseworkers] base their decisions should be no less rigidly scrutinized than is that of legal evidence by opposing counsel.”8 Scientific charity treated the poor as criminal defendants by default.
Scientific charity workers advised in-depth investigation of applications for relief because they believed that there was a hereditary division between deserving and undeserving poor whites. Providing aid to the unworthy poor would simply allow them to survive and reproduce their genetically inferior stock. For middle-class reformers of the period, like scientific social worker Frederic Almy, social diagnosis was necessary because “weeds should not have the same culture as flowers.”9
The movement’s focus on heredity was influenced by the incredibly popular eugenics movement. The British strain of eugenics, originated by Sir Francis Galton, encouraged planned breeding of elites for their “noble qualities.” But in America, eugenics practitioners quickly turned their attention to eliminating what they saw as negative characteristics of the poor: low intelligence, criminality, and unrestricted sexuality.
Eugenics created the first database of the poor. From a Carnegie Institution–funded laboratory in Cold Spring Harbor, New York, and state eugenics records offices stretching from Vermont to California, social scientists fanned out across the United States to gather information about poor people’s sex lives, intelligence, habits, and behavior. They filled out lengthy questionnaires, took photographs, inked fingerprints, measured heads, counted children, plotted family trees, and filled logbooks with descriptions like “imbecile,” “feeble-minded,” “harlot,” and “dependent.”
Eugenics was an important component of the wave of white supremacy that swept the nation in the 1880s. Jim Crow rules were institutionalized and restrictive immigration laws were passed to protect the white race from “outside threats.” Eugenics was intended to cleanse the race from within by shining a clinical spotlight on what Dr. Albert Priddy called the “shiftless, ignorant, and worthless class of anti-social whites of the South.” Both eugenics and scientific charity amassed hundreds of thousands of family case studies in what George Buzelle, general secretary of the Brooklyn Bureau of Charities, characterized as an effort to “arrange all the human family according to intellect, development, merit, and demerit, each with a label ready for indexing and filing away.”10
The movement blended elite anxieties about white poverty with fears of increased immigration and racist beliefs that African Americans were innately inferior. Popular manifestations of eugenics theory reproduced and fed these distinctions: African Americans were utterly cast out, northern European–descended wealthy whites occupied the pinnacle of the eugenic hierarchy, and everyone in between was suspect. Fitter family contests at state fair eugenics exhibits always had alabaster-skinned winners. The economically struggling hordes represented as drains on the public treasury were often racialized: “degenerate” genetic lines always had darker skin, lower brows, and broader features.
Widespread reproductive restrictions were perhaps the inevitable destination for scientific charity and eugenics. In the Buck v. Bell case that legalized involuntary sterilization, Supreme Court Justice Oliver Wendell Holmes famously wrote, “It is better for all the world if, instead of waiting to execute degenerate offspring for crime or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes.”11 Though the practice fell out of favor in light of Nazi atrocities during World War II, eugenics resulted in more than 60,000 compulsory sterilizations of poor and working-class people in the United States.
Unlike the intermittently integrated poorhouses, scientific charity considered African American poverty a separate issue from white poverty, and, according to social historian Mark Peel, “more or less deliberately ignored what late nineteenth-century Americans called the ‘Negro Problem.’”12 Thus, the movement offered paltry resources to a small number of “deserving” white poor. They used investigative techniques and cutting-edge technology to discourage everyone else from seeking aid. If all else failed, scientific charity turned to institutionalization: those who weren’t morally pure enough for their charity or strong enough to support themselves were sent to the poorhouse.
The scientific charity movement relied on a slew of new inventions: the caseworker, the relief investigation, the eugenics record, the data clearinghouse. It drew on what lawyers, academics, and doctors believed to be the most empirically sophisticated science of its time. Scientific charity staked a claim to evidence-based practice in order to distinguish itself from what its proponents saw as the soft-headed emotional, or corruption-laden political, approaches to poor relief of the past. But the movement’s high-tech tools and scientific rationales were actually systems for disempowering poor and working-class people, denying their human rights, and violating their autonomy. If the poorhouse was a machine that diverted the poor and working class from public resources, scientific charity was a technique for producing plausible deniability among elites.
* * *
Like the poorhouse before it, scientific charity ruled poor relief for two generations. But even this powerful movement could not survive the Great Depression. At its depths, an estimated 13–15 million American workers were out of work, with unemployment nearing 25 percent nationwide and topping 60 percent in some cities. Families who had been solidly middle class before the crash sought public relief for the first time. The always-fuzzy line between the deserving and the undeserving poor was swept away in the face of the nationwide crisis.
As the Great Depression gained steam in 1930 and 1931, scientific charity was stretched beyond its limits. Bread lines burgeoned, evicted families crowded into shared apartments and municipal lodging houses, and local emergency relief programs broke down in the face of overwhelming need. Poor and working people protested deteriorating conditions and rallied together to help one another.
Thousands of unemployed workers organized to loot food stores; miners carried off and distributed bootlegged coal. There were bread lines, soup lines, cabbage lines. As Frances Fox Piven and Richard Cloward report in Regulating the Poor, local aid agencies were harassed by protestors who picketed, shouted, and refused to leave until relief agencies released money and goods to waiting crowds. Rent strikers resisted foreclosures and evictions and reversed gas and electric shutoffs. In 1932, 43,000 “Bonus Army” marchers camped near the US Capitol in vacant lots and on the banks of the Potomac River.
Franklin D. Roosevelt rode to the presidency on this wave of citizen unrest. He launched a massive return to outdoor relief: the Federal Emergency Relief Administration (FERA), a program that distributed commodities and cash to families in need. His administration also created new federal employment programs, such as the Civilian Conservation Corps (CCC) and the Civil Works Administration (CWA), which put unemployed people to work in infrastructure improvement projects, construction of public facilities, government administration, health care, education, and the arts.
The New Deal reversed the trend toward private charity, and by early 1934 federal programs such as FERA, CCC, and CWA were assisting 28 million people with work or home relief. The programs were able to do so much for so many so quickly because of sufficient public funding—FERA alone eventually expended four billion dollars—and because they abandoned the in-depth investigations pioneered by scientific charity caseworkers.
As during the depressions of 1819 and 1873, critics blamed relief programs for creating dependence on public assistance. Roosevelt himself had serious misgivings about putting the federal government in the business of providing direct relief. He quickly capitulated to middle-class backlash, shuttering FERA, the program that provided cash and commodities, and replacing it with the Works Progress Administration (WPA). Against the protests of some in the Roosevelt camp who called for the creation of a federal department of welfare, the administration shifted its focus from distributing resources to encouraging work.
New Deal legislation undoubtedly saved thousands of lives and prevented destitution for millions. New labor laws led to a flourishing of unions and built a strong white middle class. The Social Security Act of 1935 established the principle of cash payments in cases of unemployment, old age, or loss of a family breadwinner, and it did so as a matter of right, not on the basis of individual moral character. But the New Deal also created racial, gender, and class divisions that continue to produce inequities in our society today.
Roosevelt’s administration capitulated to white supremacy in ways that still bear bitter fruit. The Civilian Conservation Corps capped Black participation in federally supported work relief at 10 percent of available jobs, though African Americans experienced 80 percent unemployment in northern cities. The National Housing Act of 1934 redoubled the burden on Black neighborhoods by promoting residential segregation and encouraging mortgage redlining. The Wagner Act granted workers the right to organize, but allowed segregated trade unions. Most importantly, in response to threats that southern states would not support the Social Security Act, both agricultural and domestic workers were explicitly excluded from its employment protections. The “southern compromise” left the great majority of African American workers—and a not-insignificant number of poor white tenant farmers, sharecroppers, and domestics—with no minimum wage, unemployment protection, old-age insurance, or right to collective bargaining.
New Deal programs also enshrined the male breadwinner as the primary vehicle for economic support of women and families. Federal protections were tied to wages, union membership, unemployment insurance, and pensions. But by incentivizing long-term wage-earning and full-time, year-round work, the protections privileged men’s employment patterns over women’s. Another signature program of the New Deal, Aid to Dependent Children (ADC, called Aid to Families with Dependent Children, or AFDC, after 1962), was structured to support a tiny number of widows with children after the death of a male wage earner. Women’s economic security was thus tied securely to their roles as wives, mothers, or widows, guaranteeing their continued economic dependence.
The design of New Deal relief policies reestablished the divide between the able and the impotent poor. But it flipped Josiah Quincy’s script. The able poor were still white male wage workers thrown into temporary unemployment. But, reversing the preceding hundred years of poverty policy, they were suddenly considered the deserving poor and offered federal aid to reenter the workforce. The impotent poor were still those who faced long-term challenges to steady employment: racial discrimination, single parenthood, disability, or chronic illness. But they were suddenly characterized as undeserving, and only reluctantly offered stingy, punitive, temporary relief.
Excluded workers, single mothers, the elderly poor, the ill, and the disabled were forced to rely on what welfare historian Premilla Nadasen calls “mop-up” public assistance programs.13 The distinctions between the unemployed and the poor, men’s poverty and women’s poverty, northern white male industrial laborers and everyone else created a two-tiered welfare state: social insurance versus public assistance.
Public assistance programs were less generous because benefit levels were set by states and municipalities, not the federal government. They were more punitive because local and state welfare authorities wrote eligibility rules and had financial incentive to keep enrollments low. They were more intrusive because income limits and means-testing rationalized all manner of surveillance and policing of applicants and beneficiaries.
In distinguishing between social insurance and public assistance, New Deal Democrats planted the seeds of today’s economic inequality, capitulated to white supremacy, sowed conflict between the poor and the working class, and devalued women’s work. By abandoning the idea of a universal benefits program, Roosevelt resurrected scientific charity’s investigation, policing, and diversion. But rather than being directed at a broad spectrum of the poor and working class, these techniques were selectively applied to a new target group that was just emerging. They would come to be known as “welfare mothers.”
* * *
Though the Social Security Act created several public assistance programs, it is ADC, the most controversial of the mop-up programs, that has become synonymous with “welfare.” If not for its eventual role as the focal point for a massively successful political movement of poor women, ADC/AFDC would be a historical footnote. For its first 35 years, the program was aimed narrowly at middle-class white widows. Very few families applied, and about half of those who did were turned away.
State and county rules excluded huge numbers of eligible recipients, especially women of color. “Employable mother” rules excluded domestics and farmworkers, whose wage labor was considered by legislators more important than caring for their children. “Suitable home” rules excluded never-married mothers, the divorced and abandoned, lesbians, and other women considered sexually immoral by welfare departments. “Substitute father” rules made any man in a relationship with a woman on public assistance financially responsible for her children. Residence restrictions denied benefits to anyone who moved across state lines. Welfare required that poor people trade their rights—to bodily integrity, safe work environments, mobility, political participation, privacy, and self-determination—for meager aid for their families.
Discriminatory eligibility rules gave caseworkers broad latitude to investigate clients’ relationships, dig into all aspects of their lives, and even raid their homes. In 1958 police and welfare workers in Sweet Home, a small white working-class community in Oregon, planned a series of collaborative raids, all taking place between midnight and 4:30 a.m. In 1963, caseworkers in Alameda County, California, invaded the homes of 700 welfare recipients on one cold January night, rousting mothers and children from their beds in an attempt to uncover unreported paramours. Victims complained that raiders failed to identify themselves, used unnecessarily abusive language, and “even broke down doors when denied admittance,” reported Howard Kennedy in the Los Angeles Times. The NAACP charged that the Alameda raids were “conducted mainly against Negro and Mexican-American ANC [Aid to Needy Children] recipients and that discrimination may be involved.”14
The return to scientific charity–type investigation of ADC/AFDC recipients was a reaction to changing migration patterns and civil rights activism, which were shifting the racial composition of the program. Fleeing white supremacist terrorism and sharecropper evictions in the South, more than three million African Americans moved to northern cities between 1940 and 1960. Many found safer housing, better jobs, and more dignity and freedom. But discrimination in employment, housing, and education resulted in much higher unemployment rates for nonwhites, and migrants reached out to public assistance to help support their families.
At the same time, the civil rights movement articulated a moral right to equal public accommodation and political participation for African Americans. The argument that supported the integration of public schools and the expansion of the vote was easily extended to the integration of public assistance. Mothers for Adequate Welfare, an early welfare rights group, was formed after several of its members attended the 1963 March on Washington for Jobs and Freedom. According to historian Premilla Nadasen, they were inspired by the march to fight back against the daily indignities and discrimination they suffered as Black welfare mothers and returned home to Boston eager to start a food distribution program.15 Across the country, local organizations joined to form a national movement that challenged the unjust status quo: at least half the people eligible for AFDC were not receiving it.
The welfare rights movement shared information about eligibility, helped fill out applications, staged sit-ins in welfare offices to challenge discriminatory practices, lobbied legislatures, crafted policies, and challenged all the assumptions that New Deal programs had left unquestioned. Most importantly, members of the movement insisted that motherwork is work. Though they supported any woman’s right to paid employment if she desired, welfare rights organizations actively resisted all programs requiring that single mothers of young children work outside the home.
The successes of the welfare rights movement were extraordinary. It birthed the 30,000-member National Welfare Rights Organization (NWRO). It won increased access to special grants to obtain furniture, school clothing, and other household items. It spearheaded a fight for a guaranteed minimum income available to all poor families regardless of marital status, race, or employment. Recognizing that the exclusion of Black women and single mothers from public assistance was unconstitutional, the movement also mounted legal challenges to reverse discriminatory eligibility rules.
A victory in King v. Smith (1968) overturned the “substitute father” rule and guaranteed basic rights of personal and sexual privacy. In Shapiro v. Thompson (1969), the Supreme Court agreed that residency rules were unconstitutional restrictions of a person’s right to mobility. Goldberg v. Kelly (1970) enshrined the principle that public assistance recipients have a right to due process, and that benefits cannot be terminated without a fair hearing. These legal victories established a truly revolutionary precedent: the poor should enjoy the same rights as the middle class.
AFDC became so embattled that President Richard Nixon proposed a guaranteed annual income program, the Family Assistance Plan (FAP), to replace it in 1969. The program would guarantee a minimum income of $1,600 a year for a family of four. It would provide benefits to two-parent families earning low wages, who were excluded from AFDC. It would do away with the 100 percent penalty on earned income, allowing welfare beneficiaries to retain the first $720 of their yearly earnings without reducing benefits.
But the minimum income Nixon proposed would have still kept a family of four well below the poverty line. The NWRO proposed a competing Adequate Income Act that set the base income for a family of four at $5,500. Nixon’s program also included built-in work requirements; this was a sticking point for single mothers with small children. Unpopular with both conservatives and progressives, the FAP failed, and pressure on AFDC continued to mount.
* * *
Emboldened by social movements, more families applied for public assistance; protected by legal victories, fewer were turned away. As eligibility limitations were struck down, AFDC expanded. The raw numbers are startling: there were 3.2 million recipients of AFDC in 1961 but almost 10 million in 1971. Federal spending on the program increased from $1 billion (in 1971 dollars) to $3.3 billion over the same decade. Most of the movement’s gains went to poor children. Only a quarter of poor children received support from AFDC in 1966; by 1973, the program reached more than four-fifths of them.
The members of the NWRO were mostly poor African American women, but the welfare rights movement had middle-class allies and saw interracial organizing as crucial to achieving its long-term goals. Reflecting their disproportionate vulnerability to poverty, African Americans accounted for roughly 50 percent of the AFDC rolls by 1967. But Johnnie Tillmon, first chairwoman of the NWRO, recognized that white welfare recipients were fellow sufferers and potential allies. As she explained in a 1971 interview, “We can’t afford racial separateness. I’m told by the poor white girls on welfare how they feel when they’re hungry, and I feel the same way when I’m hungry.”16
But if welfare rights activists envisioned integration and solidarity, opposition to the expansion of AFDC drummed up white middle-class animosity to turn back the movement’s successes. As backlash against welfare rights grew, news coverage of poverty became increasingly critical. “As news stories about the poor became less sympathetic,” writes political scientist Martin Gilens, “the images of poor blacks in the news swelled.”17 Stories about welfare fraud and abuse were most likely to contain images of Black faces. African American poverty decreased dramatically during the 1960s and the African American share of AFDC caseloads declined. But the percentage of African Americans represented in news magazine stories about poverty jumped from 27 to 72 percent between 1964 and 1967.
Hysteria about welfare costs, fraud, and inefficiency increased as the 1973 recession took hold. Driven by Ronald Reagan and other conservative politicians, a taxpayer revolt against AFDC challenged the notion that the poor should have the full complement of rights promised by the Constitution. But the welfare rights movement’s successes were enshrined in law, so exclusion from public assistance could no longer be accomplished through discriminatory eligibility rules.
Elected officials and state bureaucrats, caught between increasingly stringent legal protections and demands to contain public assistance spending, performed a political sleight of hand. They commissioned expansive new technologies that promised to save money by distributing aid more efficiently. In fact, these technological systems acted like walls, standing between poor people and their legal rights. In this moment, the digital poorhouse was born.
Computers gained ground in the early 1970s as neutral tools for shrinking public spending by increasing scrutiny and surveillance of welfare recipients. In 1943, Louisiana had been the first state to establish an “employable mother” rule that blocked most African American women from receiving ADC. Thirty-one years later, Louisiana became the first state to launch a computerized wage matching system. The program checked the self-reported income of welfare applicants against electronic files of employment agencies and unemployment compensation benefit data.
By the 1980s, computers collected, analyzed, stored, and shared an extraordinary amount of data on families receiving public assistance. The federal Department of Health, Education, and Welfare (HEW) shared welfare recipients’ names, social security numbers, birthdays, and other information with the Department of Defense, state governments, federal employers, civil and criminal courts, local welfare agencies, and the Department of Justice. New programs searched burgeoning case files for inconsistencies. Fraud detection systems were developed and launched. Databases were linked together to track recipient behavior and spending across different social programs. The conflict between the expanding legal rights of welfare recipients and weakened support for public assistance was resolved by a wave of high-tech tools.
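The mechanics of wage matching were straightforward in principle: compare what applicants reported about their income with what employers and agencies reported, and flag the mismatches for investigation. The sketch below is a minimal illustration of that kind of cross-check, not a description of any actual system; the record layout, case IDs, dollar figures, and tolerance threshold are all hypothetical.

```python
# Hypothetical sketch of a wage-matching cross-check in the spirit of the systems
# described above. All record formats, case IDs, dollar figures, and thresholds
# are invented for illustration; they do not describe any real program.

# Monthly income as self-reported on welfare applications, keyed by case ID.
self_reported = {
    "case-001": 310.00,
    "case-002": 0.00,
}

# Wages for the same period as reported by employers and unemployment agencies.
agency_reported = {
    "case-001": 305.00,
    "case-002": 412.50,
}

TOLERANCE = 25.00  # allowable gap in dollars before a case is flagged for review


def flag_discrepancies(self_reported, agency_reported, tolerance=TOLERANCE):
    """Return cases where agency-reported wages exceed self-reported income by
    more than the tolerance, i.e., the cases referred to a caseworker for review."""
    flagged = []
    for case_id, reported in self_reported.items():
        external = agency_reported.get(case_id, 0.00)
        if external - reported > tolerance:
            flagged.append((case_id, reported, external))
    return flagged


for case_id, reported, external in flag_discrepancies(self_reported, agency_reported):
    print(f"{case_id}: self-reported ${reported:.2f}, agency records show ${external:.2f}")
```

Run as a script, the sketch simply prints the cases whose self-reported and agency-recorded incomes diverge by more than the threshold.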
* * *
Because public assistance programs are federally funded and locally controlled, the uptake of welfare administration technology varied from state to state. But the route followed by New York provides an illuminating example. New York had the largest, most vocal welfare rights movement and the fastest expanding AFDC rolls in the country. By the late 1960s, one out of ten of the nation’s welfare recipients lived in New York City, and they had organized into somewhere between 60 and 80 local welfare rights groups.
The movement began a campaign of daily demonstrations throughout the city in spring 1968, including a three-day sit-in at welfare department headquarters that was ended only by mounted police. Influenced by such visible activism, caseworkers began to see their role as advocating for applicants rather than diverting them from aid. According to a 1973 RAND Institute report titled Protest by the Poor, Bronx and Brooklyn caseworkers threatened to strike unless the city’s Department of Social Services “cut red tape in order to process the flood of client demands.”18
In 1969, the state of New York petitioned to participate in the Nationwide Demonstration Project, a HEW effort to develop a “computer-based management information system for the administration of public welfare.” At the time, Republican governor Nelson Rockefeller was convinced that Nixon’s FAP would pass, and that the state’s welfare problems would be solved by a federal takeover of state and local welfare costs.
After the FAP failed to pass Congress in 1970, Rockefeller announced that New York had “no alternative but to continue to do its best to provide for the needs of its poor,” while calling the state’s current welfare system “outmoded” and a “tremendous burden.” A few months later, in a statement to the legislature, he laid out his growing concern that if welfare was not radically changed, it “will ultimately overload and break down our society” because “rather than encouraging human dignity, independence and individual responsibility, the system, as it is functioning, encourages permanent dependence on government.”19
Rockefeller announced a statewide welfare reform package that established one-year residency requirements and proposed a “voluntary resettlement plan” offering current welfare recipients transportation and a cash bonus if they agreed to move out of state. His proposed reforms required welfare recipients to take any available job or lose benefits, and removed caseworker discretion for deciding which recipients were “employable” and for determining the size of welfare grants. Rockefeller repealed minimum salary requirements for caseworkers, lowered educational qualifications for the job, and strengthened penalties against caseworkers “who improperly assist welfare recipients in obtaining eligibility or additional benefits.”
Rockefeller also established a new office, the Inspector General of Welfare Administration, and appointed his campaign fundraiser George F. Berlinger to lead it. In the office’s first annual report in February 1972, Berlinger charged that administrative mismanagement had allowed a “disease” of “cheats, frauds and abusers” to infect the city’s welfare rolls. “Major surgery is in order,” he wrote.
Berlinger proposed a central computerized registry for every welfare, Medicaid, and food stamp recipient in the state. Planners folded Rockefeller’s fixation with ending the welfare “gravy train” into the system’s design. The state hired Ross Perot’s Electronic Data Systems to create a digital tool to “reduce well documented ineligibility, mismanagement and fraud in welfare administration,” automate grant calculations and eligibility determinations, and “improve state supervision” of local decision-making.20 Design, development, and implementation of the resulting Welfare Management System (WMS) eventually cost $84.5 million.
The rapid increase in the welfare rolls in New York plateaued in the mid-1970s, as the WMS was brought on line. Then, the proportion of poor individuals receiving AFDC began to plummet. The pattern was repeated in state after state. A combination of restrictive new rules and high-tech tools reversed the gains of the welfare rights movement. In 1973, nearly half of the people living under the poverty line in the United States received AFDC. A decade later, after the new technologies of welfare administration were introduced, the proportion had dropped to 30 percent. Today, it is less than 10 percent.
The Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996 is often held responsible for the demise of welfare. The PRWORA replaced AFDC with Temporary Assistance for Needy Families (TANF) and enforced work outside the home at any cost. TANF limited lifetime eligibility for public assistance to 60 months with few exceptions, introduced strict work requirements, ended support for four-year college education, and put into effect a wide array of sanctions to penalize noncompliance.
Sanctions are imposed, for example, for being late to an appointment, missing a volunteer work assignment, not attending job training, not completing drug testing, not attending mental health counseling, or ignoring any other therapeutic or job-training activity prescribed by a caseworker. Each sanction can result in a time-limited or permanent loss of benefits.
It is true that the PRWORA achieved striking contractions in public assistance. Almost 8.5 million people were removed from the welfare rolls between 1996 and 2006. In 2014, fewer adults were being served by cash assistance than in 1962. In 1973, four of five poor children were receiving benefits from AFDC. Today, TANF serves fewer than one in five of them.
But the process of winnowing the rolls began long before Bill Clinton promised to “end welfare as we know it.” More aggressive investigation and increasingly precise tracking technologies provided raw material for apocryphal stories about widespread corruption and fraud. These stories birthed more punitive rules and draconian penalties, which in turn required an explosion of data-based technologies to monitor compliance. The 1996 federal reforms simply finished a process that began 20 years earlier, when the revolt against welfare rights birthed the digital poorhouse.
* * *
The advocates of automated and algorithmic approaches to public services often describe the new generation of digital tools as “disruptive.” They tell us that big data shakes up hidebound bureaucracies, stimulates innovative solutions, and increases transparency. But when we focus on programs specifically targeted at poor and working-class people, the new regime of data analytics is more evolution than revolution. It is simply an expansion and continuation of moralistic and punitive poverty management strategies that have been with us since the 1820s.
The story of the poorhouse and scientific charity demonstrates that poverty relief becomes more punitive and stigmatized during times of economic crisis. Poor and working-class people resist restrictions of their rights, dismantle discriminatory institutions, and join together for survival and mutual aid. But time and again they face middle-class backlash. Social assistance is recast as charity, mutual aid is reconstructed as dependency, and new techniques to turn back the progress of the poor proliferate.
A well-funded, widely supported, and wildly successful counter-movement to deny basic human rights to poor and working-class people has grown steadily since the 1970s. The movement manufactures and circulates misleading stories about the poor: that they are an undeserving, fraudulent, dependent, and immoral minority. Conservative critics of the welfare state continue to run a very effective propaganda campaign to convince Americans that the working class and the poor must battle each other in a zero-sum game over limited resources. More quietly, program administrators and data scientists push high-tech tools that promise to help more people, more humanely, while promoting efficiency, identifying fraud, and containing costs. The digital poorhouse is framed as a way to rationalize and streamline benefits, but the real goal is what it has always been: to profile, police, and punish the poor.