It is a warm April day in 2017, and I am walking to the public library to find pictures of the Los Angeles County Poor Farm, known today as Rancho Los Amigos. A middle-aged African American man in a pink baseball cap and a grimy hoodie stands on the sidewalk near the corner of 5th and South Grand. He moves as if buffeted by winds, arms swimming in front of him as he turns in tortured circles. He is keening: a high, surprisingly gentle sound, halfway between singing and sobbing, with no words. Dozens of people—white, Black, Latino, tourist and local, rich and poor—walk around him without even turning their heads. As we pass his swaying figure, we look away from each other, our mouths set in grim lines. No one stops to ask if he needs help.
In the United States, wealth and privation exist side by side. The contrast is particularly stark in downtown Los Angeles, where everyday urban professionals drink lattes and check their smartphones within arm’s reach of the utterly destitute. But the invisible membrane between those who struggle to meet their basic daily needs and those who do not exists in every American city, town, and village. I saw it in Muncie, Indiana, and in Munhall, Pennsylvania. I see it in my hometown.
Poverty in America is not invisible. We see it, and then we look away.
Our denial runs deep. It is the only way to explain a basic fact about the United States: in the world’s largest economy, the majority of us will experience poverty. According to Mark Rank’s groundbreaking life-course research, 51 percent of Americans will spend at least a year below the poverty line between the ages of 20 and 65. Two-thirds of them will access a means-tested public benefit: TANF, General Assistance, Supplemental Security Income, Housing Assistance, SNAP, or Medicaid.1 And yet we pretend that poverty is a puzzling aberration that happens only to a tiny minority of pathological people.
Our relationship to poverty in the United States has always been characterized by what sociologist Stanley Cohen calls “cultural denial.” Cultural denial is the process that allows us to know about cruelty, discrimination, and repression, but never openly acknowledge it. It is how we come to know what not to know. Cultural denial is not simply a personal or psychological attribute of individuals; it is a social process organized and supported by schooling, government, religion, media, and other institutions.
When we passed the anguished man near the Los Angeles Public Library and did not ask him if he needed help, it was because we have collectively convinced ourselves that there is nothing we can do for him. When we failed to meet each other’s eyes as we passed, we signaled that, deep down, we know better. We could not make eye contact because we were enacting a cultural ritual of not-seeing, a semiconscious renunciation of our responsibility to each other. Our guilt, kindled because we perceived suffering and yet did nothing about it, made us look away. That is what the denial of poverty does to us as a nation. We avoid not only the man on the corner, but each other.
Denial is exhausting and expensive. It is uncomfortable for individuals who must endure the cognitive dissonance required to both see and not-see reality. It contorts our physical geography, as we build infrastructure—suburbs, highways, private schools, and prisons—that allows the professional middle class to actively avoid sharing the lives of poor and working-class people. It weakens our social bonds as a political community; people who cannot meet each other’s eyes will find it very difficult to collectively govern.
Poverty in America is actively denied by the way we define it: as falling below an arbitrary income line at a single moment in time. The official poverty line makes poverty look like a regrettable anomaly that can be explained away by poor decisions, individual behavior, and cultural pathology. In fact, poverty is an often-temporary state experienced cyclically by a huge number of people from wildly different backgrounds displaying a nearly infinite range of behaviors.
Our public policy fixates on attributing blame for poverty rather than remedying its effects or abolishing its causes. The obsession with “personal responsibility” makes our social safety net conditional on being morally blameless. As political theorist Yascha Mounk argues in his 2017 book, The Age of Responsibility, our vast and expensive public service bureaucracy primarily functions to investigate whether individuals’ suffering might be their own fault.
Poverty is denied by the media and political commentators, who portray the poor as a pathologically dependent minority dangerous to professional middle-class society. This is true from both conservative and liberal perspectives: voices from the Right tend to decry the poor as parasitic while voices from the Left paternalistically hand-wring about the poor’s inability to exert agency in their own lives. The framing of poor people and communities as without hope or value is so profoundly limiting that most of us, even those who experience poverty directly, downplay or deny it in our life stories.
Our habits of denial are so vigorous that poverty is only acknowledged when poor and working-class people build grassroots movements that directly challenge the status quo through disruptive protest. As Frances Fox Piven and Richard Cloward famously pointed out in their classic texts Poor People’s Movements and Regulating the Poor, when poor people organize and fight for their rights and survival, they win. But the institutions of poverty management—the poorhouse, scientific charity, the public welfare system—are remarkably adaptable and durable. The push to divert, contain, police, and punish the poor persists, though the shape of the institutions that regulate poverty shifts over time.
For example, the Great Railroad Strike of 1877 dramatized not just the suffering of the poor but also their immense political power. Poor and working people’s activism terrified elites and won significant accommodations: a return to a poor-relief system focused on distributing cash and goods and a move away from institutionalization. But almost immediately, scientific charity rose to take its place. The techniques changed—scientific casework focused on investigation and policing rather than containing the poor in quasi-prisons—but the results were the same. Tens of thousands of people were denied access to public resources, families were torn apart, and the lives of the poor were scrutinized, controlled, and imperiled.
The pattern repeated during the Great Depression and again during the backlash against welfare rights in the 1970s. It is happening again now.
In short, when poor and working people in the United States become a politically viable force, relief institutions and their technologies of control shift to better facilitate cultural denial and to rationalize a brutal return to subservience. Relief institutions are machines for undermining the collective power of poor and working-class people, and for producing indifference in everyone else.
* * *
When we talk about the technologies that mediate our interactions with public agencies today, we tend to focus on their innovative qualities, the ways they break with convention. Their biggest fans call them “disruptors,” arguing that they shake up old relations of power, producing government that is more transparent, responsive, efficient, even inherently more democratic.
This myopic focus on what’s new leads us to miss the important ways that digital tools are embedded in old systems of power and privilege. While the automated eligibility system in Indiana, the coordinated entry system in Los Angeles, and the predictive risk model in Allegheny County may be cutting-edge, they are also part of a deep-rooted and disturbing history. The poorhouse preceded the Constitution as an American institution by 125 years. It is mere fantasy to think that a statistical model or a ranking algorithm will magically upend culture, policies, and institutions built over centuries.
Like the brick-and-mortar poorhouse, the digital poorhouse diverts the poor from public resources. Like scientific charity, it investigates, classifies, and criminalizes. Like the tools birthed during the backlash against welfare rights, it uses integrated databases to target, track, and punish.
In earlier chapters, I provided an on-the-ground view of how new high-tech tools are operating in social service programs across the country. It’s crucial to listen to those who are their primary targets; the stories they tell are different from those told from the perspective of administrators and analysts. Now, I will zoom out to give a bird’s-eye view of how these tools work together to create a shadow institution for regulating the poor.
Divert the poor from public resources: Indiana.
The digital poorhouse raises barriers for poor and working-class people attempting to access shared resources. In Indiana, the combination of eligibility automation and privatization achieved striking reductions in the welfare rolls. Cumbersome administrative processes and unreasonable expectations kept people from accessing the benefits they were entitled to and deserved. Brittle rules and poorly designed performance metrics meant that when mistakes were made, they were always interpreted as the fault of the applicant, not the state or the contractor. The assumption that automated decision-making tools were infallible meant that computerized decisions trumped safeguards intended to guarantee applicants procedural fairness. The result was a million benefit denials.
But unequivocal diversion can only ever have limited success. In Indiana, the visible and seemingly haphazard suffering caused by benefit denials stoked outrage, creating vigorous resistance. Those denied benefits told their stories. Advocates gathered their allies. Lawsuits were launched. And ordinary Hoosiers won … to a degree. While Governor Mitch Daniels canceled IBM’s contract and the FSSA launched the hybrid system, TANF receipt is still at a historic low in the state.
The eligibility experiment in Indiana collapsed because it failed to create a convincing story about “unworthiness.” The Daniels administration’s hostility to the poor was indiscriminate. The automation’s effects touched six-year-old girls, nuns, and grandmothers hospitalized for heart failure. Advocates argued that these were blameless victims, and the plan could not stand up against Hoosiers’ natural inclination toward charity and compassion.
While automated social exclusion is growing across the country, it has key weaknesses as a strategy of class-based oppression. So, when direct diversion fails, the digital poorhouse creates something more insidious: a moral narrative that criminalizes most of the poor while providing life-saving resources to a lucky handful.
Classify and criminalize the poor: Los Angeles.
Homeless service providers in Los Angeles County want to use resources efficiently, to collaborate more effectively, and, perhaps, to outsource the heartbreaking choice of who among 60,000 unhoused people should receive help.
According to its designers, the county’s coordinated entry system matches the greatest need to the most appropriate resource. But there is another way to see the ranking function of the coordinated entry system: as a cost-benefit analysis. It is cheaper to provide the most vulnerable, chronically unhoused with permanent supportive housing than it is to leave them to emergency rooms, mental health facilities, and prisons. It is cheaper to provide the least vulnerable unhoused with the small, time-limited investments of rapid re-housing than to let them become chronically homeless. This social sorting works out well for those at the top and the bottom of the rankings. But if, like Gary Boatwright, the cost of your survival exceeds potential taxpayer savings, your life is de-prioritized.
The data of unhoused Angelenos who receive no resources at all—21,500 people as of this writing—stay in the Homeless Management Information System for seven years. There are few safeguards to protect personal information, and the Los Angeles Police Department can access it without a warrant. This is a recipe for law enforcement fishing expeditions. The integration of policing and homeless services blurs the boundary between the maintenance of economic security and the investigation of crime, between poverty and criminality, tightening a net of constraint that tracks and traps the unhoused. This net requires data-based infrastructure to surround and systems of moral classification to sift.
The data collected by coordinated entry also creates a new story about homelessness in Los Angeles. This story can develop in one of two ways. In the optimistic version, more nuanced data helps the county, and the nation, face its cataclysmic failure to care for our unhoused neighbors. In the pessimistic version, the very act of classifying homeless individuals on a scale of vulnerability erodes public support for the unhoused as a group. It leaves professional middle-class people with the impression that those who are truly in need are getting help, and that those who fail to secure resources are fundamentally unmanageable or criminal.
When the digital poorhouse simply bars access to public benefits, as in Indiana, it is fairly easy to confront. But classification and criminalization work by including poor and working-class people in systems that limit their rights and deny their basic human needs. The digital poorhouse doesn’t just exclude, it sweeps millions of people into a system of control that compromises their humanity and their self-determination.
Predict the future behavior of the poor: Allegheny County.
Assessing tens of thousands of unhoused people in Los Angeles to produce a moral classification system is laborious and expensive. Prediction promises to produce hierarchies of worth and deservingness using statistics and existing data instead of engaging human beings with clinical methods. When diversion fails and classification is too costly, the digital poorhouse uses statistical methods to infer. Surveys such as Los Angeles’ VI-SPDAT ask what action a person has already taken. Predictive systems such as Allegheny County’s AFST speculate what action someone is likely to take in the future, based on behavioral patterns of similar people in the past.
Classification measures the behavior of individuals to group like with like. Prediction is aimed instead at networks. The AFST is run on every member of a household, not only on the parent or child reported to the hotline. Under the new regime of prediction, you are impacted not only by your own actions, but by the actions of your lovers, housemates, relatives, and neighbors.
Prediction, unlike classification, is intergenerational. Angel and Patrick’s actions will affect Harriette’s future AFST score. Their use of public resources drives Harriette’s score up. Patrick’s run-ins with CYF when Tabatha was a child will raise Harriette’s score as an adult. Angel and Patrick’s actions today may limit Harriette’s future, and her children’s future.
The impacts of predictive models are thus exponential. Because prediction relies on networks and spans generations, its harm has the potential to spread like a contagion, from the initial point of contact to relatives and friends, to friends’ networks, rushing through whole communities like a virus.
No poverty regulation system in history has concentrated so much effort on trying to guess how its targets might behave. This is because we, collectively, care less about the actual suffering of those living in poverty and more about the potential threat they might pose to others.
The AFST responds to a genuine and significant problem. Caregivers sometimes do terrible things to children, and it is appropriate for the state to step in to protect those who cannot protect themselves. But even the possibility of extraordinary harm cannot rationalize unchecked experimentation on the families of the poor. The professional middle class would never tolerate the AFST evaluating their parenting. That it is deployed against those who have no choice but to comply is discriminatory, undemocratic, and unforgivable.
In the nineteenth century, the growing desire for cadavers for medical school dissection led to a rash of grave-robbing and strict laws against the theft of bodies. Poorhouse burial grounds quickly became favorite targets for the now-illegal body trade. In response to escalating pressure from hospitals and doctors for cheaper cadavers, states passed legislation legalizing the black market in poor corpses: unclaimed bodies of poorhouse and prison inmates could be given to medical schools for dissection. What was unimaginable treatment for the bodies of the middle class was seen as a way that the poor could contribute to science.
Forensic anthropologists still routinely find skeletons in poorhouse burying grounds that show evidence of being tampered with: saw marks on femurs and pelvic bones, skulls with tops that lift off like lids.2 Yesterday, we experimented on the corpses of the poor; today, we tinker with their futures.
* * *
A dangerous form of magical thinking often accompanies new technological developments, a curious assurance that a revolution in our tools inevitably wipes the slate of the past clean. The metaphor of the digital poorhouse is meant to resist the erasure of history and context when we talk about technology and inequality.
The parallels between the county poorhouse and the digital poorhouse are striking. Both divert the poor from public benefits, contain their mobility, enforce work, split up families, lead to a loss of political rights, use the poor as experimental subjects, criminalize survival, construct suspect moral classifications, create ethical distance for the middle class, and reproduce racist and classist hierarchies of human value and worth.
However, there are ways that the analogy between high-tech tools in public services and the brick-and-mortar poorhouse falls short. Just as the county poorhouse was suited to the Industrial Revolution, and scientific charity was uniquely appropriate for the Progressive Era, the digital poorhouse is adapted to the particular circumstances of our time. The county poorhouse responded to middle-class fears about growing industrial unemployment: it kept discarded workers out of sight but nearby, in case their labor was needed. Scientific charity responded to native elites’ fear of immigrants, African Americans, and poor whites by creating a hierarchy of worth that controlled access to both resources and social inclusion.
Today, the digital poorhouse responds to what Barbara Ehrenreich has described as a “fear of falling” in the professional middle class. Desperate to preserve their status in the face of the collapse of the working class below them, the grotesque expansion of wealth above them, and the increasing demographic diversity of the country, Ehrenreich writes, the white professional middle class has largely abandoned ideals of justice, equity, and fairness.3 Until the election of Donald Trump, their increasing illiberalism was somewhat moderated in public. It was a kind of “dog whistle” cruelty: turning fire hoses on Black schoolchildren would not be tolerated, but the fatal encounters of Michael Brown, Freddie Gray, Natasha McKenna, Ezell Ford, and Sandra Bland with law enforcement wouldn’t be condemned. Involuntary sterilization of the poor was a nonstarter, but welfare reforms that punish, starve, and criminalize poor families were tacitly approved. The digital poorhouse is born of, and perfectly attuned to, this political moment.
While they are close kin, the differences between the poorhouse of yesterday and the digital poorhouse today are significant. Containment in the physical institution of a county poorhouse had the unintentional result of creating class solidarity across race, gender, and national origin. When we sit at a common table, we might see similarities in our experiences, even if we are forced to eat gruel. Surveillance and digital social sorting drive us apart as smaller and smaller microgroups are targeted for different kinds of aggression and control. When we inhabit an invisible poorhouse, we become more and more isolated, cut off from those around us, even if they share our suffering.
What else is new about the digital poorhouse?
The digital poorhouse is hard to understand. The software, algorithms, and models that power it are complex and often secret. Sometimes they are protected business processes, as in the case of the IBM and ACS software that denied needy Hoosiers access to cash benefits, food, and health care. Sometimes operational details of a high-tech tool are kept secret so its targets can’t game the algorithm. In Los Angeles, for example, a “Dos and Don’ts” document for workers in homeless services suggested: “Don’t give a client a copy of the VI-SPDAT. Don’t mention that people will receive a score. [W]e do not want to alert clients [and] render the tool useless.” Sometimes the results of a model are kept secret to protect its targets. Marc Cherna and Erin Dalton don’t want the AFST risk score to become a metric shared with judges or investigating caseworkers, subtly influencing their decision-making.
Nevertheless, transparency is crucial to democracy. Being denied a public service because you earn too much to qualify for a particular program can be frustrating and feel unfair. Being denied because you “failed to cooperate” sends another message altogether. Being denied benefits to which you know you are entitled and not being told why says, “You are worth so little that we will withhold life-saving support just because we feel like it.”
Openness in political decision-making matters. It is key to maintaining confidence in public institutions and to achieving fairness and due process.
The digital poorhouse is massively scalable. High-tech tools like automated decision-making systems, matching algorithms, and predictive risk models have the potential to spread very quickly. The ACS call centers in Indiana rejected welfare applications at a speed never before imaginable, partly because call center employees dispensed with the time-consuming human connections that public caseworkers maintained. The coordinated entry system went from a privately funded pilot project in a single neighborhood to the government-supported front door for all homeless services in Los Angeles County—and its 10 million residents—in less than four years. And while the AFST is being held to modest initial goals by a thoughtful human services administration, similar child abuse risk models are proliferating rapidly, from New York City to Los Angeles and Oklahoma to Oregon.
In the 1820s, supporters argued that there should be a poorhouse in every county in the United States. But it was expensive and time-consuming to build so many prisons for the poor. Though we still ended up with more than a thousand of them across the country, county poorhouses were difficult to scale. Eugenicist Harry Laughlin proposed ending poverty by involuntarily sterilizing the “lowest one-tenth” of the nation’s population, approximately 15 million people. But Laughlin’s science of racial cleansing only scaled in Nazi Germany, and his plan for widespread sterilization of the “unfit” fell out of favor after World War II.4
The digital poorhouse has much lower barriers to rapid expansion.
The digital poorhouse is persistent. Once they scale up, digital systems can be remarkably hard to decommission. Think, for example, about what might happen if the world learned about a gross violation of trust at a large data company like Google. For the sake of argument, say that the company was selling calendar data to an international syndicate of car thieves. There would be a widespread and immediate outcry that the policy is unfair, dangerous, and probably illegal. Users would rush to find other services for email, appointments, document storage, video conferencing, and web search.
But it would take some time for us to disentangle our electronic lives from the grasp of Google. You’d have to forward your Gmail to a new email account for a while, otherwise no one would be able to find you. A Google calendar might be the only one that works with your Android phone. Google’s infrastructure has been integrated into so many systems that it has an internal momentum that is hard to arrest.
Similarly, once you break caseworkers’ duties into discrete and interchangeable tasks, install a ranking algorithm and a Homeless Management Information System, or integrate all your public service information in a data warehouse, it is nearly impossible to reverse course. New hires encourage new sets of skills, attitudes, and competencies. Multimillion-dollar contracts give corporations interests to protect. A score that promises to predict the abuse of children quickly becomes impossible to ignore. Now that the AFST is launched, fear of the consequences of not using it will cement its central and permanent place in the system.
New technologies develop momentum as they are integrated into institutions. As they mature, they become increasingly difficult to challenge, redirect, or uproot.
The digital poorhouse is eternal. Data in the digital poorhouse will last a very, very long time. Obsolescence was built into the age of paper records, because their very physicality created constraints on their storage. The digital poorhouse promises, instead, an eternal record.
Past decisions that hurt others should have consequences. But being followed for life by a mental health diagnosis, an accusation of child neglect, or a criminal record diminishes life chances, limits autonomy, and damages self-determination. Additionally, retaining public service data ad infinitum intensifies the risk of inappropriate disclosure and data breaches. The eternal record is punishment and retribution, not justice.
Forty years ago, the French National Commission on Informatics and Liberties established the principle of a “right to be forgotten” within data systems. As David Flaherty reports in Protecting Privacy in Surveillance Societies, the commission believed that data should not be stored indefinitely in public systems by default. Instead, electronic information should be preserved only if it serves a necessary purpose, especially when it poses significant risk if disclosed.
The idea has provoked much resistance in the United States. But justice requires the possibility of redemption and the ability to start over. It requires that we find ways to encourage our data collection systems to forget. No one’s past should entirely delimit their future.
We all live in the digital poorhouse. We have all always lived in the world we built for the poor. We create a society that has no use for the disabled or the elderly, and then are cast aside when we are hurt or grow old. We measure human worth based only on the ability to earn a wage, and suffer in a world that undervalues care and community. We base our economy on exploiting the labor of racial and ethnic minorities, and watch lasting inequities snuff out human potential. We see the world as inevitably riven by bloody competition and are left unable to recognize the many ways we cooperate and lift each other up.
But only the poor lived in the common dorms of the county poorhouse. Only the poor were put under the diagnostic microscope of scientific charity. Today, we all live among the digital traps we have laid for the destitute.
* * *
Think of the digital poorhouse as an invisible spider web woven of fiber optic strands. Each strand functions as a microphone, a camera, a fingerprint scanner, a GPS tracker, an alarm trip wire, and a crystal ball. Some of the strands are sticky. They are interconnected, creating a network that moves petabytes of data. Our movements vibrate the web, disclosing our location and direction. Each of these filaments can be switched on or off. They reach back into history and forward into the future. They connect us in networks of association to those we know and love. As you go down the socioeconomic scale, the strands are woven more densely and more of them are switched on.
Together, we spun the digital poorhouse. We are all entangled in it. But many of us in the professional middle class only brush against it briefly, up where the holes in the web are wider and fewer of the strands are activated. We may have to pause a moment to extricate ourselves from its gummy grasp, but its impacts don’t linger.
When my family was red-flagged for a health-care fraud investigation, we only had to wrestle one strand at a time. We weren’t also tangled in threads emerging from the criminal justice system, Medicaid, and child protective services. We weren’t knotted up in the histories of our parents or the patterns of our neighbors. We challenged a single delicate strand of the digital poorhouse and we prevailed. If we survived our encounter, so can many of the people reading this book. So why should professional middle-class Americans care about an invisible network that mostly acts to criminalize the poor?
IT IS IN OUR SELF-INTEREST
At the most ignoble level, the professional middle class should care about the digital poorhouse because it is in our self-interest to do so. We may very well end up in the stickier, denser part of the web. As the working class hollows out and the economic ladder gets more crowded at the very top and bottom, the professional middle class becomes ever more likely to fall into poverty. Even if we don’t cross the official poverty line, we are likely to use a means-tested program for support at some point.
The programs we encounter will be shaped by the contempt we held for their initial targets: the chronically poor. We will endure invasive and complicated procedures meant to divert us from accessing public resources. Vast amounts of our data will be collected, mined, analyzed, and shared. Our worthiness, behavior, and network of associations will be investigated, our missteps criminalized. Once we fall into the stickier levels of the digital poorhouse, its web of threads will make it difficult for us to recover from the bad luck or poor choices that put us there.
Or, the system may come to us. The strands at the top of the web are only widely spaced and switched off for now. As Dorothy Allen, the mom in Troy, reminded me almost 20 years ago, technological tools tested on the poor will eventually be used on everyone. A national catastrophe or a political regime change might justify the deployment of the digital poorhouse’s full surveillance capability across the class spectrum. Because the digital poorhouse is networked, whole areas of professional middle-class life might suddenly be “switched on” for scrutiny. Because the digital poorhouse persists, a behavior that is perfectly legal today but becomes criminal in the future can be used to persecute retroactively.
AUTOMATED INEQUALITY HURTS US ALL
Taking a step back from narrow self-interest, we should all care about the digital poorhouse because it intensifies discrimination and creates an unjust world. Key to understanding how the digital poorhouse automates inequality is University of Pennsylvania communications scholar Oscar Gandy’s concept of “rational discrimination.”5 Rational discrimination does not require class or racial hatred, or even unconscious bias, to operate. It only requires ignoring bias that already exists. When automated decision-making tools are not built to explicitly dismantle structural inequities, their speed and scale intensify them.
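Gandy’s point can be made concrete with a toy simulation. This sketch is my own construction, not drawn from the book or from any real lending data: it shows how a “race-blind” rule, learned from historically biased outcomes in a segregated housing market, reproduces the original disparity without ever seeing the protected attribute.

```python
import random

random.seed(42)

# Toy illustration (hypothetical numbers): group membership is never
# shown to the model, but zip code acts as a proxy for it.

def simulate(n=10_000):
    people = []
    for _ in range(n):
        group = random.choice(["A", "B"])  # protected attribute
        # Residential segregation: group B concentrated in zip 1.
        zip_code = 1 if random.random() < (0.9 if group == "B" else 0.1) else 0
        # Historical (redlined) decisions: zip 1 was mostly denied.
        denied_hist = random.random() < (0.8 if zip_code == 1 else 0.2)
        people.append((group, zip_code, denied_hist))
    return people

def train_blind_rule(people):
    # "Neutral" model: learn each zip's historical denial rate,
    # then deny wherever that rate exceeds 50 percent.
    rates = {}
    for z in (0, 1):
        subset = [p for p in people if p[1] == z]
        rates[z] = sum(p[2] for p in subset) / len(subset)
    return lambda zip_code: rates[zip_code] > 0.5

people = simulate()
deny = train_blind_rule(people)

for g in ("A", "B"):
    members = [p for p in people if p[0] == g]
    rate = sum(deny(p[1]) for p in members) / len(members)
    print(f"group {g}: denial rate {rate:.0%}")
```

Under these assumed numbers, group B ends up denied roughly nine times as often as group A, even though the rule never consulted group membership. Nothing in the model is hostile; it simply ignored the bias already present in its training data, which is exactly what rational discrimination means.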
For example, from 1935 to 1968, the Federal Home Loan Bank Board and the Home Owners’ Loan Corporation collected data to draw boundaries around African American neighborhoods, characterizing them as high-risk investments. Both public and private lenders then refused loans in these areas. Real estate redlining was based in blatant racial hostility and greed. As Douglas S. Massey and Nancy A. Denton explain in their 1993 classic American Apartheid: Segregation and the Making of the Underclass, racial hostility was exploited through practices like blockbusting, where realtors would select working-class white neighborhoods for racial turnover, acquire a few homes, and quietly sell them to Black families. They would then go door-to-door stoking racist fears of an “invasion” and offering to purchase white homes at cut-rate prices. Redlining had such a profound impact on the shape of our cities that zip codes still serve as remarkably effective proxies for race.
But as openly discriminatory practices became politically unacceptable, facially race-neutral practices took their place. Today, data-based “reverse” redlining has replaced earlier forms of housing discrimination. According to Seeta Peña Gangadharan of the London School of Economics and Political Science, financial institutions use metadata purchased from data brokers to split the real estate market into increasingly sophisticated micro-populations like “Rural and Barely Making It” and “X-tra Needy.” While the algorithms that drive this target-marketing don’t explicitly use race to make decisions—a practice outlawed by the Fair Housing Act of 1968—a category like “Ethnic Second-City Strugglers” is clearly a proxy for both race and class.6 Disadvantaged communities are then targeted for subprime lending, payday loans, or other exploitative financial products.
Reverse redlining is rational discrimination. It is not discriminatory in the sense that it relies on hostile choices being made by racist or classist individuals. In fact, it is often characterized as inclusionary: it provides access to financial products in “underbanked” neighborhoods. But its outwardly neutral classifications mask discriminatory outcomes that rob whole communities of wealth, compounding cumulative disadvantage.
The digital poorhouse replaces the sometimes-biased decision-making of frontline social workers with the rational discrimination of high-tech tools. Administrators and data scientists focus public attention on the bias that enters decision-making systems through caseworkers, property managers, service providers, and intake center workers. They obliquely accuse their subordinates, often working-class people, of being the primary source of racist and classist outcomes in their organizations. Then, managers and technocrats hire economists and engineers to build more “objective” systems to root out the human foibles of their economic inferiors. The classism and racism of elites are math-washed, neutralized by technological mystification and data-based hocus-pocus.
I spent much of my November 2016 trip to Pittsburgh trying to spy one of Uber’s famous driverless cars. I didn’t have any luck because the cars are found mostly downtown and in the Strip District, neighborhoods that are gentrifying quickly. I spent my time in Duquesne, Wilkinsburg, the Hill District, and Homestead. I didn’t see a single one.
The autonomous cars use a vast store of geospatial data collected from Uber’s human drivers and a two-person team of onboard engineers to learn how to get around the city and interact with other vehicles, bikes, and pedestrians. Asked by Julia Carrie Wong of The Guardian how he felt about his role in Uber’s future, Rob Judge, who had been driving for the company for three months, said, “It feels like we’re just rentals. We’re kind of like placeholders until the technology comes out.”7
I asked Bruce Noel, the regional office director in Allegheny County, if he’s concerned that the intake workers he manages might be training an algorithm that will eventually replace them. “No,” he insisted. “There will never be a replacement for that human being and that connection.” But in a very real sense, humans have already been removed from the driver’s seat of human services. In the past, during times of economic hardship, America’s elite threw the poor under the bus. Today, they are handing the keys to alleviating poverty over to a robotic driver.
THE DIGITAL POORHOUSE COMPROMISES OUR NATIONAL VALUES
We should all care about the digital poorhouse because it is inconsistent with our most dearly held collective values: liberty, equity, and inclusion.
Americans have professed to cherish liberty since the nation’s founding. It is an inalienable right named in the Declaration of Independence. The Fifth and Fourteenth Amendments guarantee that “no person … shall be deprived of life, liberty, or property, without due process of law.” Schoolchildren pledge their allegiance to a republic promising “liberty and justice for all.”
Conflict arises, though, when we stop talking in generalities and try to decide the best way to secure liberty for the greatest number of people in a diverse nation. Agreement about how to interpret liberty tends to accumulate around two poles. On one side, liberty is freedom from government interference and the right to do what you want. Groups who want to decrease government regulation of business in order to lower barriers to competition, for example, are asking for freedom from. On the other side, liberty is freedom to act with self-determination and exert agency. Groups who want to provide federal student loans at below-market rates, for example, argue that all students should have the freedom to pursue higher education without being crippled by a lifetime of debt.
The digital poorhouse restricts both kinds of liberty.
The digital poorhouse facilitates government interference, scrutiny, and surveillance, undermining freedom from. The rise of high-tech tools has increased the collection, storage, and sharing of data about the behavior and choices of poor and working-class people. Too often, this surveillance primarily serves to identify sanctionable offenses resulting in diversion and criminalization. No one could argue that the systems described in this book promote freedom from red tape and government interference.
The digital poorhouse also impairs the ability of poor and working-class people to exert self-determination and autonomy, undermining freedom to. The complexity of the digital poorhouse erodes targets’ feelings of competence and proficiency. Too often, these tools simply grind down a person’s resolve until she gives up things that are rightfully hers: resources, autonomy, respect, and dignity.
* * *
Americans have also reached broad consensus on equity as a key national value. The Declaration of Independence, though signed by slaveholders, famously proclaims “that all men are created equal; that they are endowed by their Creator with certain unalienable rights.” But as with liberty, there are many different ways to interpret equity.
On one hand, many understand equity as equal treatment. Those who argue for mandatory sentencing suggest that like crimes should incur like penalties, regardless of the characteristics of the perpetrator or the circumstances of the crime. On the other hand, many believe that equity is only achieved when different people and diverse groups are able to derive equal value from common goods and political membership. For this kind of equity to thrive, structural barriers to opportunity must be removed.
The digital poorhouse undercuts both kinds of equity.
The digital poorhouse reproduces cultural bias and weakens due process procedures, undermining equity as equal treatment. High-tech tools have a built-in authority and patina of objectivity that often lead us to believe that their decisions are less discriminatory than those made by humans. But bias is introduced through programming choices, data selection, and performance metrics. The digital poorhouse, in short, does not treat like cases alike.
The digital poorhouse also weakens poor and working-class people’s ability to derive equal value from public resources and political membership. It redefines social work as information processing, and then replaces social workers with computers. The humans who remain become extensions of algorithms.
But casework is not information processing. As Supreme Court Justice William J. Brennan, Jr., famously said when reflecting on his decision in Goldberg v. Kelly, equity in public assistance requires “the passion that understands the pulse of life beneath the official version of events.”8 At their best, caseworkers promote equity and inclusion by helping families navigate complex bureaucracies and by occasionally bending the rules in the name of higher justice.
The digital poorhouse also limits equity as equal value by freezing its targets in time, portraying them as aggregates of their most difficult choices. Equity requires the ability to develop and evolve. But as Cathy O’Neil has written, “Mathematical models, by their nature, are based on the past, and on the assumption that patterns will repeat.”9 Political pollsters and their models failed to anticipate Donald Trump’s 2016 presidential victory because voters did not act as statistical analyses of past voter behavior predicted. People change. Movements rise. Societies shift. Justice demands the ability to evolve, but the digital poorhouse locks us into patterns of the past.
* * *
Finally, Americans generally agree on a third national value of political and social inclusion. Inclusion requires participation in democratic institutions and decision-making—what Lincoln named at Gettysburg a government “of the people, by the people, for the people.” Inclusion also requires social and cultural incorporation, a sense of belonging in the nation, of mutual obligation and shared responsibility for each other. This ideal persists in the de facto motto of the United States, E Pluribus Unum (“Out of many, one”), that appears on our passports and money.
As with liberty and equity, there are many ways to define inclusion. One of the most common is inclusion as assimilation, the notion that individuals and groups must conform to existing structures, values, and ways of life in order to belong in a society. Groups that believe US government materials should only be provided in English are promoting inclusion as assimilation. Another way to understand inclusion is by thinking of it as the ability to thrive as your whole self in community. Inclusion as your whole self demands that we shift social and political structures to support and respect the equal value of every child, woman, and man.
The digital poorhouse undercuts both kinds of inclusion.
The digital poorhouse undermines inclusion as assimilation. In the most egregious examples, such as the explosion of public assistance denials in Indiana, it simply acts to exclude people from government programs. More subtly, the digital poorhouse promotes social and political division through policy microtargeting. Data mining creates statistical social groupings, and then policy-makers create customized interventions for each precise segment of society. Bespoke, individualized governance will likely harden social divisions rather than promote inclusion. Customized government might serve some individuals very well, but it will increase intergroup hostility as perceptions of special treatment proliferate.
The digital poorhouse also limits the ability of its targets to achieve inclusion as their whole selves. Poor and working-class people learn lessons about their comparative social worth and value when they come under digital scrutiny. The Stipes family and Shelli Birden learned that their lives mattered less than those of their more well-off neighbors. Lindsay Kidwell and Patrick Gryzb learned that no one can win when they go up against government. Gary Boatwright and Angel Shepherd learned that someone is always watching, expecting shows of compliance and submission. These are terrible lessons in how to be a member of a just and democratic political system.
The digital poorhouse denies access to shared resources. It asks invasive and traumatizing questions. It makes it difficult to understand how government bureaucracy works, who has access to your information, and how they use it. It teaches us that we only belong in political community if we are perfect: never leave a “T” uncrossed, never forget an appointment, never make a mistake. It offers paltry carrots: 15 minutes with a county psychologist, a few dollars cash, a shot at rental assistance. It wields an enormous stick: child removal, loss of health care, incarceration. The digital poorhouse is a “gotcha” system of governance, an invisible bully with a lethally fast punch.
THE DIGITAL POORHOUSE PREEMPTS POLITICS
The digital poorhouse was created in the 1970s to quietly defuse the conflict between the political victories of the welfare rights movement and the professional middle-class revolt against public assistance. To accomplish this goal, its new high-tech tools had to be seen as embodying simple administrative upgrades, not consequential political decisions.
When the digital poorhouse was born, the nation was asking difficult questions: What is our obligation to each other in conditions of inequality? How do we reward caregiving? How do we face economic changes wrought by automation and computerization? The digital poorhouse reframed these big political dilemmas as mundane issues of efficiency and systems engineering: How do we best match need to resource? How do we eliminate fraud and divert the ineligible? How do we do the most with the least money? The digital poorhouse allowed us to drop the bigger, more crucial conversation.
Today, we are reaping the harvest of that denial. In 2012, economic inequality in the United States reached its highest level since 1928. A new class of the extreme poor, who live on less than $2 per day, has emerged. Enormous accumulation of wealth at the top has led observers to describe our moment, without hyperbole, as a second Gilded Age.
And yet, all three systems described in this book share the unstated goals of downsizing government and of finding apolitical solutions to the country’s problems. “By 2040, Big Data should have shrunk the public sector beyond recognition,” AFST designer Rhema Vaithianathan wrote in a 2016 opinion piece for New Zealand’s Dominion Post. “Once our data is up to the task, these jobs won’t need to be done the old-fashioned way by armies of civil servants. The information and insights will be immediate, real time, bespoke and easy to compare over time. And, ideally, agreed by all to be perfectly apolitical.”10 Automated eligibility, coordinated entry, and the AFST all tell a similar story: once we perfect the algorithms, a free market and free information will guarantee the best results for the greatest number. We won’t need government at all.
Troubling this vision of a government governing best by governing least is the fact that, historically, we have only made headway against persistent poverty when mass protest compelled substantial federal investment. Many of the programs of the Social Security Act, the GI Bill, and the War on Poverty suffered from fatal flaws: by excluding women and men of color from their programs, they limited their own equalizing potential. But they offered broadly social solutions to risk and acknowledged that prosperity should be widely shared.
The very existence of a social safety net is premised on an agreement to share the social costs of uncertainty. Welfare states distribute the consequences of bad luck more equally across society’s members. They acknowledge that we, as a society, share collective responsibility for creating a system that produces winners and losers, inequity and opportunity. But the moral calculus of the digital poorhouse individualizes risk and shreds social commitment.
* * *
It would stand us all in good stead to remember that infatuation with high-tech social sorting emerges most aggressively in countries riven by severe inequality and governed by totalitarians. As Edwin Black reports in IBM and the Holocaust, thousands of Hollerith punch card systems—an early data-processing technology that predated the modern computer—allowed the Nazi regime to more efficiently identify, track, and exploit Jews and other targeted populations. The appalling reality is that the serial numbers tattooed onto the forearms of inmates at Auschwitz began as punch card identification numbers.
The passbook system that controlled the movements, work opportunities, health care, and housing of 25 million Black South Africans was made possible by data mining the country’s 1951 census to create a centralized population register assigning every person to one of four racial categories. In an amicus brief filed in 2015 on behalf of Black South Africans attempting to sue IBM for aiding and abetting apartheid, Cindy Cohn of the Electronic Frontier Foundation wrote, “The technological backbone for the South African national identification system … enabled the apartheid regime to efficiently implement ‘denationalization’ of the country’s black population: the identification, forced segregation, and ultimate oppression of South African blacks by the white-run government.”11
Classifying and targeting marginalized groups for “special attention” might offer helpful personalization. But it also leads to persecution. Which direction you think the high-tech tools of the digital poorhouse will pivot largely hinges on your faith—or lack of faith—that the US government will protect us all from such horrors.
We must not dismiss or downplay this disgraceful history. When a very efficient technology is deployed against a despised outgroup in the absence of strong human rights protections, there is enormous potential for atrocity. Currently, the digital poorhouse concentrates administrative power in the hands of a small elite. Its integrated data systems and digital surveillance infrastructure offer a degree of control unrivaled in history. Automated tools for classifying the poor, left on their own, will produce towering inequalities unless we make an explicit commitment to forge another path. And yet we act as if justice will take care of itself.
If there is to be an alternative, we must build it on purpose, brick by brick and byte by byte.