In September 2014, Connect.DC, an internet access initiative from DC’s Office of the Chief Technology Officer (OCTO), hosted a one-day symposium titled “Bridging the Digital Divide in the District of Columbia.” Attendees included representatives from every major social service agency, nonprofits from across the city, the startup community, and the DC Chamber of Commerce. We were housed on the first floor of the Washington Post’s offices, down the street from the 1776 startup accelerator that had rapidly become the center of the District’s tech scene.
On the surface, this was an occasion to review the outcomes of four large internet infrastructure grants from the National Telecommunications and Information Administration (NTIA) that the city had received as part of the recovery effort during the 2008 recession. But the stakes were a good deal higher. The city—at least the largely White portion west of the Anacostia River—had done quite well during the recession. But Black unemployment was still stubbornly high, and the new high-wage migrants drawn by that growth were driving up rents. It was clear to everyone in the room that even though 150,000 DC households still lacked an internet connection, digital inclusion meant far more than a broadband hookup. Advertising the event, and echoing Al Gore from two decades earlier, Connect.DC asked, “How can we close the gap between the ‘haves’ and ‘have nots’ in the District of Columbia?” Moderator Henry Wingo, from the Chamber of Commerce, called it “a gulf between worlds.” A year later, he would sue to keep a referendum on a fifteen-dollar minimum wage off the ballot.
Digital inclusion was a priority not only because certain jobs or services might be out of reach but because, in the words of Michelle Fox of the nonprofit Code for Progress, we were “leaving talent on the table.” Richard Reyes-Gavilan, the new executive director of the DC Public Library, said the same, emphasizing that libraries were today more important than ever as premier sites of “human capital development.” He said that he had come to town to rebuild the MLK central library branch, making it a friendly place for the community and the burgeoning startup scene. Patrick Gusman, cofounder of Startup Middle School, bemoaned the lack of technical curricula in DC schools and prescribed computer science as a mandatory language capstone course in which students would “use a form of technology to solve a social problem.” Wingo himself compared the urgency of this conference to that of the civil rights movement.
This idea that a lack of technology and technological skills is what is holding back poor people and poor regions is relatively new, historically speaking. This chapter seeks its source, turning back the clock to uncover the roots of the access doctrine—the political common sense holding that poverty can be solved with the right digital tools and skills. The access doctrine empowers Reyes-Gavilan to bootstrap the MLK Library, remaking not just how it looks but also what it is and who it is for. I find the source of this political common sense in the politics of the Clinton administration, which oversaw the internet’s commercialization and identified the problem of the digital divide. The access doctrine undergirds the digital divide framework, and its work continues even as digital divide fades as a term of art in policy, replaced by ideas like digital inclusion. The access doctrine’s hope keeps appearing in places like the OCTO event or the Connect.DC posters—“The Internet: Your Future Depends on It.”
By focusing on the relationship between technology policy and poverty policy, we can see how access becomes a problem in the first place, posed in terms everyone understands, with solutions everyone agrees on: public-private partnerships for access extension, the demand that everyone must learn to code, and the wide embrace of a startup mindset that turns welfare state bureaucracies into nimble, twenty-first-century service providers. The creation of the problem is a project of a particular race and class coalition, known at the time as the Atari Democrats, still visible at events like the OCTO symposium: largely White professionals at the base, led by business leaders from the tech sector and closely linked industries that require state support for their global ambitions.
The Clinton administration’s first report on stratified internet access in the United States, what it would eventually call the digital divide, argued, “While a standard telephone line can be an individual’s pathway to the riches of the Information Age, a personal computer and modem are rapidly becoming the keys to the vault” (NTIA 1995). What is left out of this story is how the vault became locked in the first place. This includes, beginning in the 1970s, increased automation or outsourcing of industrial production, stagnant real wages, increasing healthcare and higher education costs relative to inflation, rollbacks of federal poverty relief programs, and the massive expansion of the carceral state in poor communities (Edelman 2013).
The question is not whether the inequalities identified under the rubric of the digital divide were real. They were. In 1995, urban cores and isolated rural areas really did lag behind wealthier suburbs in internet access, and Black and Native communities really were less well-served by internet service providers than White ones. The question instead is why inequality was described in this way and how it connects to other pieces of contemporary poverty policy. Three pieces of the access doctrine appear in succession: a crisis of national competitiveness is declared, then defined in human capital terms, and finally resolved through deregulation of telecommunications markets and targeted public-private partnerships. Exploring these steps provides a new entry point for understanding the post-1970s dismantling of the Keynesian political consensus, its reconstruction as neoliberalism, and the discursive role information technology played in this shift. The access doctrine has deep roots, so we must first briefly review the success of the neoliberal political project in redefining the structural problems of economic dislocation as individual deficiencies of skill, a process that began in the 1970s.
There is a core contradiction in contemporary US poverty policy. Since the 1970s, political institutions and policymakers have advanced two seemingly opposed positions. On the one hand, the neoliberal state must offer promise: with the right skills, the global labor market becomes a space of unlimited potential where anyone can become an entrepreneur. On the other hand, the neoliberal state must threaten punishment: anyone who steps out of line will, at best, have their state support revoked or, at worst, be incarcerated.
The access doctrine emerges from the Clinton administration in the 1990s, as it oversaw the commercialization of the internet and named the problem of the digital divide. It resolves the contradiction between promise and punishment within American poverty policy by presenting a relatively simple recipe for economic security in insecure times: digital tools and skills lead to good jobs. The internet, the story goes, unlocks the fetters of geography and identity. A series of MCI commercials, for example, promised that on the internet there is no race or gender, and no “there,” only “here” (Nakamura 2000). Promise exists anywhere the internet does, and so the blame for poverty must fall on those who cannot act on that promise. Within the access doctrine, race in particular appears as a stubborn remainder of an older political-economic order. It is a problem that must be either resolved with particular upgrades or, if that doesn’t work, contained by the carceral wing of the state.
To see this common sense emerge, we must understand attempts to bridge the digital divide or teach people how to code not just as technology policies, but as poverty policies, working in tandem with attempts to reform unemployment insurance, job training, criminal justice, and so on. In this way, access initiatives are better understood as one component of a new script for the state that reduces or obscures the state’s ability to guard against periodic economic crises and instead highlights its role as a guarantor of competition for its citizens, themselves circumscribed as bundles of human capital entering the market to contribute to national economic fitness.
The access doctrine was not invented out of whole cloth. Technological hope has a long American history. In the 1960s, welfare-state liberals often imagined technology not just as an engine for individual social mobility but as a solution to the urban uprisings of Black Americans who had been kept out of midcentury industrial prosperity. At this stage, the connections between technology policy and poverty policy were clear. RAND Corporation researchers, for example, argued that cable could “provide a possible antidote to the isolation of ‘ghetto residents’” (Light 2001, 718). These attempts to “keep the peace” with the working and workless poor are emblematic of the Keynesian political consensus (Gilmore 1999). Across Western democracies in the postwar era, industrial capital constructed a series of national safety nets for itself by consenting to the creation of safety nets for citizens that included such benefits as old-age pensions, unemployment and disability insurance, large infrastructure projects, subsidized housing, expanded higher education, and more. Such benefits were hard won by the working class but were divided unevenly across that class by race, gender, and citizenship (Katznelson 2013). This consensus kept the peace with industrial labor in particular, guaranteed high rates of consumption, and ensured a few decades of steady growth.
But that growth did not reach everyone. Increasing automation and off-shoring (even just to nonunion Sunbelt states) in the 1960s hit Black communities first and hardest, driving up unemployment rates in the central city areas to which Black families had migrated earlier in the century (Sugrue 2014). Some reformers saw computer training as a solution to these problems of Black poverty and social isolation. Alan Bekelman, a former high school teacher who despaired for his students’ job prospects, pitched the idea to superiors in the Commerce Department. His Scientific-Technical Employment Program (STEP) trained two hundred students in programming every year as part of the larger Youth Opportunity Campaign summer jobs program (Loftus 1967). Beneath the “Companies Aid Computer School” headline, a July 1968 New York Times article told the story of twenty-year-old “youth from Harlem” Van Sloan and 140 compatriots “learning the skills of key-punching, computer operations and programming at the Middle West Side Data Processing School, an antipoverty training center.” Twenty-one-year-old Sherry Barnes explained that she quit her day job to enroll because “the computer is a passport to a better job and a guarantee to a sure future, if you are smart enough to take advantage of it” (Smith 1968).1
While pieces of the access doctrine are certainly visible in the Great Society era, those pieces had not yet come together with the momentum necessary to remake policy. That would have to wait for the neoliberal revolution. Beginning in the 1970s, revanchist political alliances sought new solutions to crises of profitability born of global economic conditions (e.g., stagflation, the oil crisis), constraints national governments had placed on global capital, and militant social movements the world over. Industrial unions’ power receded as production was increasingly automated or moved beyond organized labor’s reach, first to the periphery of the industrial core and then to the Global South (Silver 2003). The service sector jobs that replaced them had different workplace structures, energized employers supported by an increasingly antiworker state, and a more feminized and racialized set of workers who were both under threat and frequently ignored by traditional unions (Cobble 1991; Lopez 2010). This weakened labor’s power, although there were of course notable exceptions to the trend, particularly in healthcare (Windham 2017).
As the mode of production shifted, policing powers were built up and redeployed. A new carceral state emerged in reaction to the power of midcentury social and labor movements and White fears over Black urban unrest (Berger 2013; Camp 2016). A prison-building boom followed, absorbing surpluses in land, labor, financial capital, and state capacity born of the economic dislocations of the 1970s (Gilmore 2007).
This is the economic context that both nurtured and was nurtured by neoliberalism—a slippery term. I follow Wacquant’s (2012) definition of neoliberalism as a political project wherein an activist state repurposes its institutions to define and enforce citizenship around market demands. This requires not a shrunken state, but a reengineered one: enhanced, redistributive bureaucratic functions for the upper classes alongside more paternalistic functions for the lower classes, and a massification and glorification of the penal system. The latter is especially important insofar as it shows that the neoliberal state is neither hands-off, focused primarily on deregulation, nor miniaturized, with all its various capacities downsized together. The neoliberal state absorbs and redirects the state capacities created by the Great Society.
I diverge from Wacquant, and follow Gilmore (1999, 2007), in considering the prison a model neoliberal institution insofar as it not only warehouses (particularly Black) people labeled problems but also teaches other people and institutions, whether or not they ever touch the prison system, the rules of the game for a new political economy. Schools and libraries are part of the same project and so express similar rules; they do not just react to political-economic shifts but produce and reproduce them (Sojoyner 2013; Willis 1981). Although they and their personnel often conflict in their day-to-day operations, these institutions of social reproduction are united in their efforts to develop and direct the capacities of people, places, and public funds toward the ends deemed most “productive.” One set of institutions, studied here, offers opportunity; the other, lurking ever in the background of the lives of the urban poor, punishes those who do not follow the rules and accept the offer. In the 1990s, those rules became, to put it crudely: log on, train up—or else.
This approach took root in the Nixon era, but was cemented by Clinton and his promise to “end welfare as we know it.” It was part of a new mode of social reproduction that remade people for a new labor market, even if they were currently outside it. Categorizing the deserving unemployed through unemployment insurance or disability insurance, for example, implies a group of undeserving unemployed. Both categories require a definition of the ideal worker against which they can be compared (Piven 1998; Piven and Cloward 2012). Both categories underwent massive revision in the transition to neoliberal political rule.
Prior to the 1970s, in the period from Roosevelt’s New Deal through to Johnson’s Great Society, welfare relief was integrated with broader goals of industrial policy: “full (male) employment, mass consumption, stabilized social reproduction, and a particular pattern of labor segmentation” (Peck 2001, 46). After much struggle on the part of aid recipients, particularly Black mothers, a general right to welfare was established by the 1960s, supporting some of the work of household social reproduction and acknowledging the gendered terrain of the labor market that either excluded or underpaid poor women (Nadasen 2002; Reese and Newcombe 2003).
These gains were short-lived. As the urban uprisings receded and US industry stuttered under stagflation, Nixon framed both welfare recipients and the institutions serving them as impediments to national economic progress, local productivity, and individual morality: “The task of this government, the great task of our people, is to provide the training for work, the incentive to work, the opportunity to work, and the reward for work” (1969). The state no longer offered a safe harbor; instead it promised to teach you how to sail. Macroeconomic solutions to poverty receded, replaced by training solutions (Lafer 2002).
These skills-training initiatives, like Reagan’s Job Training Partnership Act, occurred within a more general punitive turn in poverty policy. This included a sharp reduction in federal housing services and, through budgetary maneuvers, increased eligibility restrictions for the Aid to Families with Dependent Children (AFDC) program and increased funding for state workfare programs. Five hundred thousand people were removed from AFDC, and states were encouraged to remove more through experiments with workfare programs: “the imposition of a range of compulsory programs and mandatory requirements for welfare recipients with a view to enforcing work while residualizing welfare” (Peck 2001, 10; emphasis in original).
Reagan also started a trend of granting states waivers for welfare programs that revised traditional eligibility standards. Both Bush and Clinton issued waivers liberally, as long as the proposed local revisions were cost-neutral. This discouraged states from implementing more expensive, service-oriented workfare programs that provided long-term, high-quality training and counseling, along with other services like transportation or childcare. Cutting the rolls with stringent eligibility requirements was cheaper and easier (Peck 2001, 100). It would seem, then, that the punitive turn undercuts the hope of skills training: How can those on the fringes of the labor market train for the jobs of tomorrow if they cannot put food on the table today? It would take a different set of politicians to marry these two conflicting tendencies.
The conservative political advance of the 1970s and 1980s reframed the role of the state and the problem of poverty in the United States. It also proved immensely popular, and the GOP was rewarded with almost two decades of electoral supremacy in the White House and Congress. But a new generation of Democrats built on these policy lessons with a winning spin on the problem of poverty. They linked issues of skills and issues of dependency into one broader problem: a shortage of willing and able workers to staff the high-tech jobs of the future. This was a more hopeful vision than the one presented by Reagan—who focused on an impotent state and the cheaters trying to scam it—but the people and institutions identified as the problem remained the same: non-White, especially Black, poor and working-class people struggling on the fringes of the labor market, and the welfare state institutions serving them. The racist caricature of the Black “welfare queen” and the broken bureaucracy serving her was wielded by both parties (Hancock 2004; see also Fraser and Gordon 1994).
Clinton and Gore were allies in the Democratic Leadership Council (DLC) of the 1980s, moving the party rightward, away from New Deal social democracy, in order to reverse years of Republican electoral gains. On the campaign trail during the early 1990s recession, and once in office, they described their new generation of Democrats as superior economic managers, willing to make some Keynesian investments in human capital and export-oriented industries, opening borders to free trade, and focusing on deficit reduction (Ferguson 1995). The latter limited any potential stimulus that would counter the recession and showed that Clinton and Gore’s opposition to Reagan and Bush was largely a matter of strategy, emerging from a different electoral base rather than a fundamental political disagreement. They were two different wings of the neoliberal project.
The “discovery” of the digital divide and the birth of the access doctrine cannot be analyzed without this context. The Clinton administration positioned its support for digital training programs for disabled Americans, for example, within a larger mission to “give work back to the American people” (Clinton 2000b), without ever endorsing direct stimulus or job creation. This included the effort “to end welfare as a way of life and make it a path to independence and dignity” (Clinton 1993), which resulted in the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996. Clinton and his Congressional allies celebrated PRWORA, which replaced the AFDC poverty relief program with Temporary Assistance for Needy Families (TANF), for supplanting the American poor’s entitlement culture with a work culture.
Funding for TANF was block-granted so that a countercyclical poverty policy became nearly impossible,2 while limits were placed on recipients who were not working or who were unwed mothers or undocumented immigrants, on top of the five-year lifetime limits applied to everyone. No new training or job-creation programs were paired with these new restrictions (Wacquant 2009). A decade after PRWORA, an additional three hundred thousand children were living in poverty because of new restrictions on aid to households headed by single mothers (Trisi and Saenz 2020). In 1996, 4.7 million families were receiving cash assistance under AFDC; by 2013, under TANF, that number had fallen to 1.7 million—and never exceeded two million during the Great Recession (Shaefer, Edin, and Talbert 2015). Unsurprisingly, the percentage of nonelderly households in extreme poverty—surviving on less than two dollars per day—grew sharply in this period, from 1.7 percent in 1996 to 4.3 percent in 2011 (Shaefer and Edin 2013).
These policies punished those already hit hard by deindustrialization and the failure of federal poverty-relief measures to keep pace with inflation, as well as the shorter-term damage of the early 1990s recession. The punishment would continue. In 1994, as part of the Atari Democrats’ tough-on-crime agenda, the Violent Crime Control and Law Enforcement Act created sixty new death penalty offenses, criminalized gang membership, ended Pell Grants for college education in prison, and funded almost one hundred thousand new police officers. Contemporary projections indicated that these changes, echoing similar state and local policies underway for decades, would result in the incarceration of at least 1.5 million new people and require some $351 billion—almost twenty times the 1994 AFDC budget—in new prison operation and construction funds (Duster 1995).
The early 1990s were not only the peak of the punitive turn in poverty policy but also a pivot point for the American political economy. It was the moment when Clinton and Gore pushed the country toward a hopeful New Economy3 led by the tech sector. This is no coincidence. Gramsci (2000, 222–245) argues that in moments of economic transition, when the reins of power are up for grabs, political coalitions secure power partly through activist cultural policy that emphasizes sharp breaks with a denigrated past and new institutional directives fit for new economic demands. The early 1990s were one such moment. The access doctrine helped unite New Economy boosterism with a revanchist social state by telling a new story about what success meant and how it could be achieved. This story has three parts, each building on the last: a declaration of national economic emergency, a human capital measurement project, and a conclusion that the state could not solve a problem of this scale. The state would conduct triage and leave all but the most urgent cases to a deregulated telecommunications sector. In this process of naming, measuring, and attempting to solve the problem of the digital divide, the access doctrine was born. That common sense had such political power that it would outlast the specific framework of the digital divide. These features—crisis, measurement, public-private partnerships for solutions—would then go on to appear in new frameworks such as digital inclusion or more recent calls to learn to code.
The neoliberal substitution of skills training for job creation and poverty relief set the stage for the poverty policies of the information economy and the scripts for the institutions that would implement those policies. But early 1990s economic conditions raised the stakes, such that even before anyone spoke the words digital divide, it was understood to be a crisis. This is the first part of the access doctrine: a crisis of connectivity, which, because workers cannot connect to the global labor market, becomes a crisis of underutilized labor.
During the 1992 election campaign and throughout its first term, the Clinton administration argued that getting every American plugged into a National Information Infrastructure (NII) was a matter of economic survival. Investment in the fixed capital of fiber optics and the human capital of skilled professionals would cement victory over Soviet communism, end the early 1990s recession, and regain global economic dominance from Germany and Japan. The problem of the digital divide and the solution of the access doctrine are rooted in this refigured economic nationalism. Refigured because while Clinton and Gore distinguished themselves from Reagan and Bush by endorsing some Keynesian stimulus on the campaign trail, most major stimulus plans were dropped in favor of deficit reduction after they took office—and the NII proposals that persisted were, compared to Roosevelt’s rural electrification or Eisenhower’s highways, relatively modest in scope (Ferguson 1995).
Clinton’s 1994 budget, for example, requested from Congress $1.1 billion specifically for the National Information Infrastructure and $45.6 billion for more general training and vocational and adult education (largely grants to states and municipalities) meant to “add to the stock of human capital by developing a more skilled and productive labor force” (72). For context, $277 billion was requested in military spending and $2 billion for modernization and expansion of federal prisons—before the Violent Crime Control and Law Enforcement Act.
Raw numbers aside, the Clinton administration’s plan to connect every American to the newly privatized internet was framed as an investment in national economic competitiveness. Within this first stage of the digital divide crisis, combating poverty is a problem not only of alleviating suffering in the present but also of making the correct investments in “information have-nots” so as to resolve current crises of underutilized labor, realize future capital growth, and achieve post–Cold War international economic hegemony.
In 1991, then-Senator Gore proposed the High Performance Computing Act to study how to upgrade NSFNET—the still limited, still research-dominated civilian internet—for commercial and consumer use. The bill apportioned $1.547 billion to the NSF to support new regional internet service providers and to build regional inter-networking points that would connect a privatized internet. During the 1992 campaign and immediately after taking office as vice president, Gore repeatedly posed NII buildout as a national economic emergency. Political opponents attacked this as undue state intervention. But Gore had spent years carefully negotiating this terrain, publicly connecting networked technologies to collective economic fitness, individual consumer choice, and democratic deliberation. He argued for his NII proposals in a 1991 issue of Scientific American alongside other early internet architects:
The unique way in which the US deals with information has been the real key to our success. Capitalism and representative democracy rely on the freedom of the individual, so these systems operate in a manner similar to the principle behind massively parallel computers. These computers process data not in one central unit but rather in tiny, less powerful units.
Capitalism works on the same principle. People who are free to buy and sell products or services according to their individual calculations of the costs and benefits of each choice process a relatively limited amount of information but do it quickly. When millions of individuals process information simultaneously, the aggregate result is incredibly accurate and efficient decisions.… Communism, by contrast, attempted to bring all the information to a large and powerful central processor, which collapsed when it was overwhelmed by ever more complex information. (151)
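Gore’s analogy borrows a real architectural principle: in a massively parallel machine, many small processors each handle a slice of the data locally, and the answer is aggregated rather than routed through one central unit. A toy sketch of that principle, purely illustrative and not drawn from any NII document, might look like this:

```python
from multiprocessing import Pool

def local_decision(price: float) -> bool:
    """Each small 'unit' processes only its own limited information."""
    return price <= 10.0  # buy only if it looks like a good deal locally

if __name__ == "__main__":
    prices = [3.0, 12.5, 7.5, 9.9, 11.0, 4.2]
    # Many less powerful units work in parallel; no central processor
    # sees all the data at once.
    with Pool(processes=4) as pool:
        decisions = pool.map(local_decision, prices)
    # The aggregate result emerges from many local choices.
    print(f"share buying: {sum(decisions) / len(decisions):.2f}")
```

The analogy’s rhetorical force comes from treating this engineering pattern as a model for an entire political economy.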
This conflation of different scales—infrastructure and individual, personal computing and national markets—was not just Atari Democrat spin, but an overarching regulatory regime emphasizing market competition as the primary political calculus and market citizenship as the primary political unit. Nor was the anti-Communist element simple cheerleading. Clinton and Gore (1993) positioned NII buildout and basic research into technologies of “commercial relevance” as the place to shift funds no longer required for Cold War militarization—even if the actual funds, both requested and disbursed, were paltry compared to what went into prisons or defense.
Because the internet would necessarily exceed the boundaries of the United States, it was also posed as an instrument of soft power—especially within newly capitalist, post-Soviet states (Gore 1994c)—to the benefit of US software producers that supported the Clinton-Gore campaign and depended on both liberalized trade and the dominance of English as the language of commerce (Ferguson 1995, 301). The administration took this economic nationalism so seriously that while running for reelection Gore accused Bob Dole of “unilateral disarmament” for threatening “to cut America’s science and technology budget by one-third” (Clinton and Gore 1996b).
Internet infrastructure buildout was a crucial part of the administration’s plan to upgrade the workforce for an information economy. This New Economy was based on transmitting and manipulating information but was not limited to software coding or computer manufacturing: it was post-sectoral. “Everyone will be in the bit business,” Gore said (1994c). Within the Technology for America’s Economic Growth policy initiative, released a month after Clinton and Gore took office, any gaps in connectivity were a blow to the nation’s standing in the New Economy—to the point where “schools can themselves become high-performance workplaces” to train tomorrow’s technologists (14).
“Because information means empowerment, the government has a duty to ensure that all Americans have access to the resources of the Information Age”—a duty that, in the administration’s telling, Reagan and Bush had neglected (Department of Commerce 1993). But that duty did not demand traditional Keynesian public works responses. The provision of access was meant to create new markets or better position American exporters in existing ones—not provide market alternatives. Funding requests for infrastructure buildout were not particularly large, certainly not sufficient stimulus for the early 1990s recession: $600 million for the High-Performance Computing Act, $100 million per year for the NII (Department of Commerce 1993). The bulk of the cost for extending the commercial internet to every American would be borne by telecommunications firms incentivized by deregulation.
This mild Keynesianism was supported by an investment bloc of capital-intensive, export-oriented industries, especially high-technology companies that felt threatened by the rise of German and Japanese competitors. They needed the state to relax import tariffs for components, negotiate lower export tariffs abroad, educate a new generation of professionals, protect intellectual property, and provide at least the groundwork for an internationally competitive communications infrastructure (Ferguson 1995). Donors from this sector, including lifelong Republicans such as John Young of Hewlett-Packard and John Sculley of Apple, formed the Council on Competitiveness and provided pivotal funding and public support for the 1992 Clinton-Gore campaign (Sims 1992). Many of these elite donors were then recruited to the Advisory Council on the NII to advise the secretary of commerce on all matters internet.
This cemented the Clinton administration’s links with high-technology companies and created a new business coalition to challenge the one that had backed the Bush and Reagan administrations, which had been largely based in oil, food, investment banking, and less-mechanized manufacturing sectors such as textiles (Cate 1994; Ferguson 1995). This was the C-suite version of Clinton and Gore’s appeals to the Atari Democrats’ electoral base: a new voting bloc identified with office park corridors like that of Route 128 in suburban Massachusetts and named after the popular Atari videogame system (Geismer 2014; Wayne 1982). As organized labor retreated, the Democratic Party increasingly prioritized these largely White, highly educated professionals. In the primary campaigns of 1984 and 1988, these voters were organized against the multiracial working-class coalition of Jesse Jackson’s Rainbow Coalition.
Speaking at the 1997 Microsoft CEO Summit, Gore emphasized the competitive advantages his public-private infrastructure project had yielded and warned of isolationists and fiscal conservatives attempting to stymie his efforts. He was clear that extending access to all Americans was a key part of a rich and free capitalism. Whereas in the old economy “growth depended largely on capital and labor [and] the task of policy makers was to keep those factors of production in sync,” in the New Economy the main assets were ideas, “our core capacity as human beings,” brought to market through the internet.
While the Atari Democrats framed their technological investments against the “pure” laissez-faire of neoconservatives, their interpretation of poverty in the New Economy as a national crisis of competitiveness—and the proposed definitions of and solutions to that crisis—was strikingly similar to neoconservative ideas about crises in education. Secretary of Education Terrel Bell convinced Reagan to make education a conservative issue—and not defund the agency Bell led—through 1983’s Nation at Risk report. It framed a decade of falling SAT scores, in an era when the pool of test-takers rapidly expanded, as a “rising tide of mediocrity” that left students so deficient in the skills needed in the global labor market that “if an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war” (National Commission on Excellence in Education 1983).
Then-governor Clinton had picked up this torch as chair of the 1989 National Education Summit, endorsing a national program of outcomes-based standards, charter schools, and standardized testing (Scott 2011). Schools here were positioned not as welfare state social supports but as skills-training centers. With the federal deficit ever in mind, supporting these skills-training centers demanded a program to measure the extent of the skills gap and carefully target interventions. Standardized testing was one manifestation of this measurement program. The mapping of the digital divide was another: a project to identify gaps in internet access and digital literacy so as to best direct the public-private partnerships that would provide PCs, modems, and training.
After the Atari Democrats declared a crisis of national economic fitness, the crisis had to be mapped so that appropriate interventions could be identified. The second piece of the access doctrine thus sought to measure the depths of the crisis. Digital divide policy framed problems of poverty as problems of performance; poor people were poor because of insufficient or underutilized human capital. The state needed to understand the nature of underskilled, uncompetitive labor and the locations where technological infrastructure could be placed so as to get the most bang for the buck in connecting the digitally divided to the global labor market. This story shaped both the measurement of stratified internet access and the explanations offered for it.
I do not dispute the reality of these inequalities; very real gaps in internet access existed across class, race, and geography. But the narrow framing of these gaps led to a dominant understanding of access as the opportunity to compete in the New Economy. The access doctrine operationalized the problem of inclusion in the New Economy as two sides of a digital divide, one competitive, one not. Solutions to the problem flowed logically from there.
For the Clinton administration, economic growth was a question of making adequate investments into human capital: the skills and abilities making up what Gore called our “core capacity as human beings,” those means of production internal to the laborer. This was often explicit. The 1994 Economic Report of the President made human capital investment the second administrative priority after deficit reduction. The report went on to clarify that “each American must be responsible for his or her own education and training” but that the state would make limited investments in education, training, and the internet infrastructure (e.g., fiber optics). These would connect Americans to training opportunities because “American workers must build the additional human capital they need as a bridgehead to higher wages and living standards” (Clinton and the Council on Economic Advisors 1994, 41).
At other times, this approach was implicit: the language of reskilling for knowledge work or connecting to online resources. Gaps in access were not crises just because PCs and internet infrastructure are necessary fixed capital for the New Economy, but because these technologies permitted access to reskilling opportunities that increased individuals’ human capital, access to new markets for the products of human capital, and access to new markets for human capital. They made you competitive and allowed you to compete.
Human capital theory became a key concept for governance in the 1960s, as Adam Smith’s original concept was reassessed by a new generation of economists and as planners sought to incorporate domestic educational costs and international development projects into the neoclassical investment theories that drove macroeconomic policy (Adamson 2009). Defining human capital as productive skills and abilities fixed to a person requires a mapping of its distribution, and the effects of investment in it, across increasingly larger scales and more fine-grained variables. Human capital theorists such as Gary Becker and Jacob Mincer provided the techniques to incorporate poverty management into the Keynesian political consensus. When that consensus shifted, so did the political use of human capital.
During the War on Poverty, the welfare state identified wage labor with independence, largely irrespective of the content of that labor. And so the state of dependency was feminized and racialized by its identification with paupers, people of color on the margins of the labor market, and unwaged housewives (Fraser and Gordon 1994). Independents received state support through payroll taxes (e.g., Social Security), while dependents received it through general taxation appropriated legislatively (e.g., AFDC). The latter were easily scapegoated because they were unproductive (i.e., they had low human capital). The neoliberal turn dismantled many of these programs, attacking both the recipients and the institutions serving them as destructive of human capital—not only disincentivizing people from pursuing waged work, but subtly destroying their ability to do any work at all. Where the earlier system gave starkly different treatment to “dependents” and waged workers, the worthy and unworthy poor, the emergent neoliberal state extended this human capital calculus to every potential participant in the labor market, not just the traditionally dependent.
While the right wing of the neoliberal assault framed these attacks in largely negative terms, destroying the fetters that held people back from their promise and punishing those who could not follow the new rules of the game, the left wing proposed a hopeful, creative solution. Digital tools and skills would offer new promise to those left out of the global labor market, increasing not just local or individual but also national productivity. What seems at first to be a technology policy is then part of a wider turn in poverty policy, wherein poverty-relief measures, like all public goods, became “neither pure commodities, nor pure public goods but new intermediate strains that combine features of both” (Fraser 1993, 17).
With subsidized PCs, modems, internet connections, and a dedicated measurement program to find where to put them, the access doctrine attempted to upgrade the nation’s human capital stock so that the traditionally dependent, along with everyone else left behind by the economic dislocations of the 1970s and 1980s, could move out of low-wage service sectors with low productivity growth and into higher-wage knowledge work sectors that were, in the 1980s and 1990s, seeing big productivity gains. This is what would promote independence at the individual level and competitiveness at the national level.
The Clinton administration was thus willing to countenance limited state intervention into the “natural” functioning of human capital markets because of a post–Cold War spending pivot and a burgeoning alliance with Silicon Valley. This freed the administration to acknowledge that the market was not joining fixed capital computing resources to the human capital in need of upgrading quickly enough to transition US workers to the New Economy—all this before the term digital divide entered popular usage in 1996.
Although there is no consensus as to who coined the term, former White House staffer and MCI general counsel Allen Hammond IV and Sesame Workshop cofounder Lloyd Morrisett probably first used digital divide in the seven years between the passage of the High Performance Computing Act and the NTIA’s 1998 Falling through the Net report (Eubanks 2007). It appeared nowhere in the 1995 edition of that report. Clinton and Gore used it while campaigning in 1996, comparing their investment in America’s future to Dole and Jack Kemp’s planned neglect of the same. Digital divide appeared four times, in quotations, in the 1998 Falling through the Net report and more than fifty times in the 1999 sequel.
During Clinton’s presidency, the NTIA, a small wing of the Department of Commerce, released four progressively larger, more fine-grained reports on the state of the digital divide in the United States, in 1995, 1998, 1999, and 2000. At Gore’s request, the agency had asked for the Census Bureau’s monthly Current Population Survey to be updated to include household data on computer ownership and internet and telephone subscriptions. Results were then cross-tabulated by income, race, age, educational attainment, and region. The NTIA, its reports picked up by Clinton and Gore on the campaign trail, became a key institutional ingredient in the emergent access doctrine by treating stratified access as a chief symptom of (and universal access as a logical solution for) the persistent poverty that haunted the overall optimism of the New Economy. It was here that the problem of human capital deficiency was operationalized.
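Operationally, that measurement program was straightforward cross-tabulation: each survey record carries demographic variables and a connectivity flag, and a penetration rate is computed for every demographic cell. A minimal sketch of the technique, with invented column names and made-up microdata rather than actual CPS fields, might run:

```python
import pandas as pd

# Hypothetical CPS-style microdata: one row per surveyed household.
households = pd.DataFrame({
    "income_bracket": ["<25k", "<25k", "25-50k", "50-75k", "75k+", "75k+"],
    "region":         ["rural", "central city", "central city",
                       "suburb", "suburb", "central city"],
    "has_modem":      [False, False, True, False, True, True],
})

# Share of households with a modem in each demographic cell: the kind of
# table the Falling through the Net reports published.
penetration = pd.crosstab(
    index=households["income_bracket"],
    columns=households["region"],
    values=households["has_modem"],
    aggfunc="mean",
)
print(penetration.round(2))
```

Cross-tabulating by income, race, age, educational attainment, and region turned a diffuse crisis of competitiveness into a map of specific, targetable populations.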
The NTIA framed increased economic fitness as the goal of access and market competition as the means to extend access. The 1995 report found that poor, rural minorities were least likely to have a PC or modem, followed by poor Black residents of central cities—but that those positions were reversed when education was held constant. It decried this because those “most disadvantaged in terms of absolute computer and modem penetration are the most enthusiastic users of on-line services that facilitate economic uplift and empowerment.”
Gaps in connection rates between White and Black or Hispanic households, even with income held constant, grew from report to report, with the 1999 report calling the digital divide a “racial ravine.” This is another variation on the gap or canyon imagery of the early digital divide literature: a fissure born of the New Economy, separating the “information disadvantaged” from opportunity on the other side (NTIA 1995). Gore often asked audiences to consider opportunities for access not just in rich suburbs but in nearby, poor, predominantly Black inner-city areas: Bethesda and Anacostia, Brentwood and Watts (Gore 1994a).
Each report ended by profiling the “least connected” who “lag further behind” and what they stood to gain through PCs and modems (1998). The 1999 report concluded, “While these items may not be necessary for survival, arguably in today’s emerging digital economy they are necessary for success” (77) and “no one should be left behind as our nation advances into the 21st Century, where having access to computers and the internet may be key to becoming a successful member of society” (80). Policy proposals were absent in the first report but included in subsequent ones. Over time, they gave greater weight to market diffusion of the means of access, but they argued that time was of the essence and that “community access centers” such as schools and libraries could act as temporary bridges for disconnected communities.
A focus on the number of internet-connected PCs available dominated the access doctrine initially, but later coexisted with investigations of usage and skill, all broadly grouped under access (Epstein, Nisbet, and Gillespie 2011). Access did not mean skills or tools specifically, but the general opportunity to compete. Bringing the digitally divided online was an urgent problem, not for reasons of human rights, religious obligation, or any of a variety of other possible frames, but because the root causes of crises of GDP were found in individual users and their PCs.
This research program quantified the hope that the internet and personal computing would overcome poverty. And the Falling through the Net reports kickstarted a tremendous amount of digital divide research beyond the US federal government, led by researchers like myself who wanted to understand how new technologies were helping or hurting the poor. Questions of technological diffusion (e.g., Rogers 2010) were thus inseparable from questions of social mobility. The field owed much of this political urgency to the legacy of modernization theory. This literature largely conceives of progress in a linear, economically and technologically deterministic fashion, with governments and philanthropies engaged in a project with the goal “to bring the backwards people forward” (Graham 2008, 779; emphasis in original; see also Escobar 2012; Ferguson 1994). Early digital divide researchers largely supplemented the NTIA’s work, measuring stratification of internet access and the success or failure of different deployments (Warschauer 2002).
In the 2000s, research began to address inequalities in not only which technologies of what quality were available to whom, but also the uses to which those technologies were put and the rewards drawn from them (Hargittai and Hinnant 2008). The focus shifted to the study of digital inequalities among those with access to the internet, with new research on equipment, autonomy, skill, social support, and the purposes for which the technology was employed (DiMaggio et al. 2004). Richer accounts emerged of the mechanisms driving stratification: accounts, for example, of how managers force employees or those seeking work to develop particular skills (van Dijk 2005) or of the institutional and cultural dynamics that position Black and Latino youth as both marginal to the information economy and, through their informal learning practices, leading innovators within it (Watkins 2018).
Conceptually, social scientists integrated mid-level theories of digital inequality within broader accounts of stratification by geography, race, class, and so on (Robinson et al. 2015) by, for example, demonstrating that how young people use the internet, even holding basic access constant, varies widely by class. Higher-status individuals engage in more “capital-enhancing activities” online than lower-status individuals and thus reproduce their class position (Zillien and Hargittai 2009).
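In practice, “holding basic access constant” simply means conditioning on connectivity before comparing uses. A hypothetical sketch of that analytic move, with invented variables standing in for the survey measures such studies employ:

```python
import pandas as pd

# Invented survey data: class background, home internet access, and whether
# the respondent uses the internet for a "capital-enhancing" activity
# (e.g., job searching, coursework).
survey = pd.DataFrame({
    "class_background":      ["higher", "higher", "lower", "lower",
                              "higher", "lower", "lower", "higher"],
    "has_access":            [True, True, True, True,
                              True, False, True, True],
    "capital_enhancing_use": [True, True, False, False,
                              True, False, True, True],
})

# Condition on access first, then compare usage rates by class.
connected = survey[survey["has_access"]]
rates = connected.groupby("class_background")["capital_enhancing_use"].mean()
print(rates)  # usage gaps can persist even among the connected
```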
This progress in the digital divide literature is a recognition that social life does not conform to simple binaries between information haves and have-nots and that no technology on its own dissolves inequalities. Indeed, in our papers and conferences we found ourselves responding over and over to the simple binaries of the Clinton administration’s measurement program. But that program’s power went beyond its empirical findings. The framework of the human capital crisis exerted an inescapable political gravity, recruiting researchers to repeatedly refute, complicate, or nuance it—but never vanquish it. Even if it was poorly framed, the problem was too urgent to dismiss. The access doctrine compels scholars to respond. Its gravity draws us in.
But in the Clinton administration, this political urgency did not translate to large-scale political solutions. Digital divide policy was both technology and poverty policy, and neoliberal poverty policy appropriated new funds largely for police and prisons, not job creation or increased cash assistance. The NTIA’s measurement program had to justify itself on this terrain. Its first report claimed that “once superior profiles of telephone, computer, and on-line users are developed, then carefully targeted support programs can be implemented that will assure with high probability that those who need assistance in connecting to the NII will be able to do so.” The crisis of competitiveness was expansive, but the needs of the human capital deficient demanded precise measurement.
Aid needed to be precisely targeted so that access would offer opportunities to compete and not handouts. If a relatively small number of Pell Grants for prisoners had been labeled handouts and canceled in 1994, then surely proposals to treat the internet as a public utility able to reach everyone in the country would be verboten later in the decade. Solutions would instead come from the deregulation of telecommunications markets and carefully targeted interventions within community access centers.
The Clinton administration’s discussion of access solutions became a meditation on state limits. This final part of the access doctrine made distribution of these solutions the responsibility of deregulated markets, wherein competition would lower prices and extend access. This forced a reconsideration of the universal service mission—the provision of baseline connectivity to every citizen in the name of safety and political and economic participation—in telecommunications policy. Consistent with other contemporary neoliberal projects, state intervention would persist, but only insofar as it created markets and secured competition within them. In the meantime, community access centers would triage technological poverty. These solutions cement a master definition of access: the opportunity to compete in the New Economy, an opportunity independent of, but able to strategically mobilize, any individual digital technology. It is this master definition that continues to animate the politics of and research into access, even as the specific questions addressed have expanded far beyond who has in-home internet. Although the access doctrine resolves the contradictions of neoliberal poverty policy, any possible solutions that emerge from it are necessarily constrained by the push for private sector promise and public sector punishment.
The administration used technology policy to stake out the purpose and limits of the state during economic transitions. Press releases for the Next Generation Internet Initiative even included Q&A sections asking why the government was involved at all (Clinton and Gore 1996a). This was posed as a reaction to a larger economic problem beyond the government’s control. The NII Agenda for Action (Department of Commerce 1993) described a new era in which “information is one of the nation’s most critical economic resources” in every industry trying to thrive “in an era of global markets and global competition.” Its future priorities are listed under the heading “Need for Government Action to Complement Private Sector Leadership”: tax and regulatory policies that promote long-term private investment, universal service, and research programs and grants that help the private sector build and demonstrate NII applications.
Laissez-faire economics is never a hands-off approach, but always an activist policy wherein the state is charged with creating and protecting markets. Plans for a Global Information Infrastructure (GII) that would end the global digital divide hinged on the World Trade Organization (WTO) requiring member states to privatize state-owned telecommunications (Clinton and Gore 1995). Gore (1994b) compared the GII’s promise to the contemporary privatization of the former Soviet Union’s telecommunications infrastructure, arguing that “reducing regulatory barriers and promoting private sector involvement” allowed freedom of movement for information, capital, and democracy.
Prioritizing market creation would seem to contradict the universal service mission that the 1995 Falling through the Net report argued was “at the core” of US telecommunications policy. After all, there is always someone who cannot pay, always an area where new infrastructure is too costly. This universal service mission emerged from early twentieth-century competition between the first US telephone companies, which refused to connect to each other’s customers. The 1921 Willis-Graham Act admitted that “there is nothing to be gained by local competition in the telephone industry” and permitted AT&T to form a monopoly that eventually spanned the country in exchange for a commitment to cover as much of the country as it could. The 1934 Communications Act created the FCC to regulate telegraph, radio, and telephone traffic and to negotiate with AT&T over price controls and service quality (Kim 1998). State-enforced private monopoly guaranteed universal service—exactly the sort of anticompetitive, Keynesian compromise the Clinton administration argued was disrupted by information technology.
This conflict was resolved by selecting certain aspects of the universal service mission, particularly its identification of individual ownership of technology with democratic participation and economic security, for incorporation into a broader story about market creation and participation. This meant equitable access would be best facilitated not by monopoly, but by cross-media competition. In this way, the regulatory apparatus that in an earlier era ensured universal service became an enemy of that same mission.
By the 1999 NTIA report, universal service was largely a stopgap measure for “high-cost areas” left out after a program of “expanding competition in rural areas and central cities” (78). Here, in the last report with divide in the title, universal service was a question to be asked after pro-competition policies were realized. This was foreshadowed by a 1994 Congressional Research Service report showing that Gore’s original nine principles meant to guide NII policy were, a year later, cut to five. Gone was the explicit universal service principle, replaced with a new commitment to not creating “information haves and have-nots.”
This commitment registered not as universal service but as an emphasis, increasing over time, on triaging the digital divide through community access centers such as schools and libraries. In 1995, such centers were temporary “safety nets” in a “long-term strategy” (NTIA 1995, 6). But by 2000—and despite a report ten times the first’s length, which stressed that “not having access to these tools is likely to put an individual at a competitive disadvantage” (NTIA 2000, 89)—the NTIA observed the increased use of libraries by the un- or underemployed without any judgment or policy proposal. It was a settled state of affairs. The later reports had faith not only in the competitive boost information technology provided the poor but also in the power of markets to extend those opportunities. Indeed, the 1999 report reinterpreted history to fit this script, comparing internet and telephone buildout and arguing that “high levels of telephone connectivity” were achieved primarily through “pro-competition policies at the state and national levels” supplemented by universal service subsidies—rather than the monopoly granted AT&T (NTIA 1999, 77).
Universal service was always more a political principle than a specific set of proposals and objectives, and thus vulnerable to reframing. Crawford (2013) describes the 1990s reorganization of US telecommunications as both an anticipation of the possibilities of media convergence and a reaction to monopolies born of Reagan-era rate deregulation. Trying to manage burgeoning oligopolies, the 1996 Telecommunications Act’s reform of the 1934 Communications Act pursued universal service largely through further deregulation. Cross-media competition and ownership were permitted in all markets: local phone companies could offer long distance, cable companies could offer internet, the Baby Bells born of AT&T’s breakup had to let smaller companies offer services on their circuits, and all cable rate regulations were ended.
Internet access for schools and libraries would be supported by the E-Rate subsidy, paid out of the FCC-administered Universal Service Fund’s levies on telecommunications firms—an easy target for court challenges (Hammond 1997). There was no similar provision for households. Indeed, the FCC later argued that compelling firms to offer services of equal quality or speed in rural and urban areas “would undercut local competition and reduce consumer choice and, thus, would undermine one of Congress’s overriding goals in adopting the 1996 Act,” and that equality should therefore not be considered part of the universal service rubric (FCC 1997).
At its core, the creation and protection of markets as a neoliberal political strategy relies on the idea that more competition will bring more winners and fewer losers (Dean 2008). Clinton could promote free-trade agreements while warning workers threatened by globalization that they must reskill, because both were framed as competitive responses to the stakes of the New Economy. This competition to enable competition was how the Atari Democrats revised the party’s postwar social democratic agenda while embracing the punitive turn of the right. It was in this context that the digital divide was named and conceptualized, and we have yet to escape that legacy. Even as scholarship and policy matured and began to concern themselves less with access to goods and more with skills or rewards, the original framing of the problem ensured that no matter how access was operationalized, it still denoted an opportunity to compete in the global economy, best provided by competition to offer that opportunity.
The Clinton administration ensured that discussions of digital equity were understood as a problem of extending digital lifelines to the information economy: an opportunity to compete in the future, rather than being left behind in the past. But the solutions that flowed from this framing, particularly the 1996 Telecommunications Act, ensured that the problem would not be solved even on its own narrowly defined terms. The deregulatory approach produced a highly concentrated market, to the point where 85 percent of US consumers today have at most one choice of provider for high-speed internet offering 100 Mbps downloads, and 43 percent have at most one choice for 25 Mbps downloads (FCC 2016).4 The most recent data show that of the thirty-seven Organization for Economic Cooperation and Development (OECD) countries, the United States ranks sixteenth in home broadband subscriptions per one hundred people—below not just the Nordic social democracies but also comparatively poorer nations like Portugal, Latvia, Estonia, and Greece (OECD 2019).5
Nor has increased competition brought better or cheaper services. The OECD divides mobile subscriptions into low-, medium-, and high-use plans, and at each tier US subscribers pay almost twice the OECD average. The story is roughly the same for in-home broadband: low-end US subscribers pay around forty-six dollars per month against an OECD average of about twenty-eight dollars, and high-end subscribers pay about sixty-one dollars per month against an average of about thirty-four dollars. And the birthplace of the internet has pretty slow internet. Although the United States ranks eighth in the OECD with an average speed of 18.7 Mbps—still well below South Korea, Japan, and the Nordic countries—that ranking falls to sixteenth when examining the speed tiers into which fixed broadband subscribers fall. Only 4.1 per 100 Americans have 100 Mbps connections, compared to 18.5 in Switzerland and 11 in Belgium, while 7.2 per 100 Americans make do with 2 Mbps internet, compared to 3 in Switzerland and 3.8 in Belgium. It makes sense. What incentive does Comcast or AT&T have to upgrade its broadband network when so many consumers don’t have a choice in the first place?
Because the access doctrine joined poverty policy and technology policy, its effects were felt not only in the regulation (or lack thereof) of private, consumer internet but in the plans (or lack thereof) for public alternatives. State legislators have repeatedly passed laws forbidding localities from building low-cost municipal broadband operated as a utility (Koebler 2015). In 2018, an FCC commissioner labeled these public resources a threat to free speech (Brodkin 2018). Although this would seem to run counter to the goal of increasing competition, it’s important to remember that Clinton and Gore’s story positioned the state only as a guarantor of market competition, never a participant. This dynamic was in full view at the Connect.DC event that opened this chapter. The city used Recovery Act funds to build a high-speed municipal broadband network to serve social service agencies in poor, majority-Black areas that larger internet service providers neglected, but it was never an option to extend those connections to consumers.
As we’ve seen, community access centers like schools and libraries were viewed as stopgap measures serving the fringes of the market before superior, private, home-based alternatives arrived. This led to a short-term approach to planning for and funding these community spaces, defined by intermittent cycles of competitive grant applications (Viseu et al. 2006) and lower levels of funding compared to other developed economies (Jayakar and Park 2012).
On the ground, this meant that the United States largely avoided the telecenter phenomenon embraced by other countries—the public equivalent of internet cafes, which are likewise rare here—while piling more obligations onto the schools and libraries that already act as community centers. Eighty-nine percent of DC public libraries, the focus of chapter 3, reported in 2012 (the last year for which data are available, and the year I began visiting MLK) that they were the only source of free internet in their area, a share higher than in any state (Bertot et al. 2012). In a 2014 national survey, 77 percent of Americans who lacked internet access at home said that access at the library was very important to them or their family. Pew characterized this as part of a shift in library identity, from “houses of knowledge” to “houses of access” (Pew Research Center 2014).
Data on user behavior backs this up. Kinney (2010) found that the presence of internet-connected terminals has a significant positive effect on a library’s total visits and reference transactions. Because both public schools and libraries are largely funded by state and local taxes (the latter significantly more important for libraries), this increased service load is not met by increased funds in times of crisis: as recessions hit, more people require public services, tax receipts dry up, and the funds available to these institutions shrink. Internet access and digital skills training are supposed to provide a lifeline in moments of economic uncertainty, but it is precisely in these moments of high need that community access centers have less to work with.
It didn’t have to be this way. That other countries avoided this particular neoliberal marriage of technology and poverty policy shows that the US approach was not inevitable. Straubhaar et al.’s (2008) comparison of US and Brazilian technology policy makes this clear. The authors found that the Clinton administration focused primarily on physical access and framed technological stratification chiefly in terms of economic opportunities lost in an inevitable moment of economic transition. The contemporaneous Brazilian inclusão social framework made access one part of a broader social mission to overcome long-standing inequalities based in race and class.
Brazil’s Cardoso government set the goals for access policy in its 1997 Green Book: new research initiatives in science and technology, distance learning, cultural preservation, telemedicine and the modernization of health systems, the construction of local e-commerce platforms, and technology education at all levels. The state was the primary actor in this frame, and the citizens in their community (rather than human capital in the market) formed the primary site of intervention. It was an explicitly political framework, increasingly so as Lula’s left-wing developmentalist government took over in 2003.
This naturally led to interventions different from those pursued in the United States. Brazil’s universal service fund collected 1 percent of telecommunications firms’ revenue, rather than the variable contributions levied on US firms based on their own quarterly revenue projections. These funds were directed not only toward schools and libraries, but toward direct infrastructure investment, assistive technologies for the disabled, and the creation of purpose-built telecenters providing wraparound social services through partnerships with local civil society groups. Local municipalities funded telecenters and provided technical support, civil society groups managed them, and the whole process was administered by a community council of local telecenter users who ensured that the initiative catered to local needs. Telecenters ran on open-source systems to reduce licensing fees and maintain the spirit of democratic participation. National competitiveness was never entirely out of the picture but, because of the broader work of a developmental state emphasizing historical inequalities, it was subordinated to community control and community empowerment. Access was a social good, not an opportunity for competition. Such an idea could not fit within the US access doctrine.
For the Clinton administration, and subsequent reformers working under that script, new inequalities were born of skill gaps in the New Economy, rather than long-term problems of deindustrialization exacerbated by punitive poverty politics. Indeed, the power of the access doctrine comes in large part from how these technological solutions reconcile the contradiction in US poverty policy between promise and punishment. A global communications network makes opportunity available everywhere. Those who do not choose to log on and take part become a drag on regional and national productivity and so must be punished.
In his final State of the Union address, Clinton (2000a) told the nation, “We have built a new economy.” Brought into office during a recession and after the collapse of the USSR, his administration was supported by export-oriented technology industries prepared to countenance mild state economic intervention that would catalyze private investment in internet infrastructure and upgrade US human capital stocks for the New Economy. This economic nationalism would create and protect markets and ensure participation in them, but it lacked the direct job creation or public works of prior Keynesian regimes. The access doctrine managed the anxiety of flexible economic relations by positioning access not just as a tool or a skill but as the opportunity to compete in the global network. Even as the actual distributive mission of increased access narrowed over time, the doctrine continued to frame the problems of poverty in the New Economy not as dislocation born of deindustrialization or the retreat of the welfare state, but as the absence of investment—by state or citizen—in human capital and the technologies to grow and market it.
But the problem with an approach to equity based on sound investments in human capital is that a new set of investors can just as easily declare those investments unsound, which is what happened when George W. Bush entered office. His FCC chairman, Michael Powell, famously riffed on the persistence of the digital divide: “I think there is a Mercedes divide. … I’d like to have one; I can’t afford one” (Labaton 2001). This signaled a shift that included prominent cuts to an Education Department program funding community access centers and a Commerce Department program for underfunded organizations, like food banks, attempting to modernize their infrastructure (Schwartz 2002).
In response, representatives of liberal think tanks like the Benton Foundation argued that this political retreat kept the nation from leveraging sunk investments that could effectively mobilize human capital (e.g., Wilhelm 2003). But the “Mercedes divide” comment was not fundamentally at odds with the script set by the Clinton administration. Powell simply held that this sort of capital investment was no longer necessary to increase individual or national competitiveness. Prior investments had matured. Further funds would be wasted. The script persisted, even as the left edge of neoliberalism weakened and the right strengthened: equity was still a problem of human capital investment; it was just no longer a good investment.
Hope, then, crosses the political aisle. What differs between the right and left wings of neoliberalism is the particular emphasis placed on the carrot or the stick in the development calculus. Powell’s dismissal of digital divide politics was not a dismissal of the crisis itself or even the terms of the crisis as set by the Clinton administration; after all, the Mercedes divide comment came alongside further deregulation of telecommunications markets. Powell agreed that “it’s an important social issue” but suggested that the diffusion of devices specifically would largely be taken care of by the free market (Lasar 2011).
This move was consistent with broader changes in federal poverty policy under the second Bush presidency (Allard 2007). The neoliberal shift from cash assistance to workfare programs continued apace. Cultural reengineering of poor people’s human capital intensified, with new marriage promotion programs and a prioritization of faith-based organizations in poverty relief. Block-granting of poverty relief funds to states was reauthorized. This made the recovery from the 2008 recession even more painful. An emphasis on increasing competitiveness through punitive poverty policy thus continued, even as the digital divide program was largely offloaded to the private telecommunications market. This should be seen not as a failure of the access doctrine but as a success. Targeted investments were no longer necessary because opportunities for competition abounded; the state needed only to secure the grounds for competition.
The production and reproduction of the access doctrine does not, of course, just happen from the top down. This hopeful story had its origins in the Clinton administration, but the problem of poverty only becomes a problem of technology when the institutions managing technology and poverty teach themselves to make that link and pass the lesson on to the rest of us. The remainder of this book explores this process, away from the halls of power in Washington, DC, and into its streets, classrooms, and office buildings. The access doctrine set new terms for social reproduction. The institutions that teach us how to make a living changed in response, and in so doing they taught us that new technologies and new skills will secure our economic futures.
Cities were particular sites of concern for neoliberal poverty policy and economic development. In this new regime, human capital stock is not just imported from outside but upgraded within, in the likeness of gentrifying outsiders. This “creative city” (Florida 2004) is never separate from its punitive shadow. As Spence (2015) argues, urban entrepreneurialism delegates risk-taking and responsibility to cities and individuals at the precise moment that cities’ tax bases shrink and they are forced to compete with one another. Safety nets grow more restrictive and carceral solutions become more common, with one effect being the radical restriction of acceptable entrepreneurialism (e.g., heavy punishments for drug sales or unlicensed food vending).
Municipal leaders may wish to do something different when it comes to economic development, poverty relief, or a host of other fields, but their options are constrained both ideologically—the menu of solutions circulating in their broader political networks is short—and materially—little federal or state support exists for solutions that don’t involve luxury housing, tax breaks for relocating corporations, and heavy-handed policing (Forman 2017). The hopeful but limited vocabulary at the OCTO event that opened this chapter is one sign of these constraints.
The problem of persistent poverty in the information economy, for individuals and regions alike, is overwhelming, and the access doctrine provides a ready way to understand it. Reduced resources force urban institutions to reorient themselves quickly around new technologies and new goals—to bootstrap. In doing so, they model themselves on the ideal-type organization of the New Economy: the internet startup. Just as individual economic actors are trained to remake themselves as technological entrepreneurs in order to succeed, so too are the organizations that train them. Chapters 3 and 4 examine this process in schools and libraries, exploring how their quest to close the digital divide changes who they are and what they do. But to really understand this process, we need to start on the “right” side of the divide, with the sort of hopeful organizations and people everybody else is trying to become: tech entrepreneurs and their startups. It is to them, and their place in the city, that we now turn.
1. Many thanks to Nathan Ensmenger for referring me to these articles.
2. If the amount of monetary aid any given state is able to distribute annually is capped by the federal government in advance, states have a great deal of difficulty in responding to periods of increased joblessness—when requests for aid skyrocket.
3. Throughout this chapter, I use the term New Economy to capture the shift in the mode of production summarized in the introduction, but only because that is the term the Clinton administration used in its own hopeful economic policy documents. Obviously, two decades into the twenty-first century, this state of affairs is no longer new in any empirical sense, though arguments for its novelty still retain significant rhetorical power. And so I use information economy throughout the rest of the book, to capture the institutional drive to remake organizations and people for an economy based primarily in the production and circulation of information. These are emic terms, derived from the groups under examination rather than an endorsement of the idea that the labor most socially necessary to the reproduction of contemporary capitalism is in software development. As the introduction, particularly the BLS projections, makes clear, it is not. In the United States in particular, contemporary capitalism is dominated by low-wage service work in food, hospitality, and healthcare.
4. The final year in which the FCC reported these data was 2016. The Trump administration’s FCC did not report data on market concentration in this sector and argued that penetration should always be a composite measure of mobile plus fixed terrestrial broadband, thereby obscuring the limits of broadband adoption and its oligopolies.
5. Some of that is made up for by our comparatively higher ranking—fourth—in mobile broadband subscriptions, but mobile-only users have a much harder time with essential, typing-intensive tasks like searching for jobs or completing homework (Smith 2015).