2 Policies of Panic: Porn, Predators, and Peers

Something about the combination of sex and computers, however, seems to make otherwise worldly-wise adults a little crazy.

Philip Elmer-DeWitt, Time, July 3, 1995

[The girls] just fell into this category where they victimized themselves.

Major Donald Lowe, investigator in the Louisa County High School Instagram sexting scandal, November 2014 (quoted in Rosin 2014)

The rise of concerns about young people’s use of digital media has led to public pressure for “somebody to do something.” Often that “somebody” comes in the form of formal governmental interventions, such as policies and public campaigns aimed at controlling risks. As will be demonstrated, regulating technology and young people’s use of technology is not as straightforward as it may at first appear. Even when there is consensus over what constitutes harm, such as online predators (who universally threaten normative understandings of young people’s innocence and are thus a seemingly easy example of something young people need to be protected from), the mechanisms through which we protect young people are inextricably linked to other competing values that make regulation difficult. For example, regulations that deny minors access to computers or websites must be balanced against rights of privacy and freedom of speech. For that reason, regulation is fundamentally complicated and often controversial. Through an analysis of various attempts at regulating young people’s use of digital media in the United States, we are able to more fully investigate expectations of both youth and technology and examine how constructions of risk are mobilized. Even when harm is universally agreed upon (e.g., when there is agreement that online predators are dangerous), the ways in which we attempt to intervene are value-laden and make visible our assumptions and expectations of young people and risk.

Discourses of risk, youth, and technology are so deeply embedded within our collective imagination that it can be difficult to unpack the assumptions and expectations that produce such concerns. Because discourses are often visible in their effects yet invisible in their constructions, it is imperative to examine moral-panic discourses alongside their effects. Risks are often so taken for granted that it can be difficult to understand how they are constructed, enacted, and mobilized throughout culture, history, and society. “Moral panics,” Lumby and Funnell write (2011, p. 280), “constitute an intense site of debate about ideas that are grounded in belief systems and that are connected to embodied and visceral ways of knowing and to ideological systems of meaning.” Federal and state intervention strategies, in the form of policies, offer a visible response to moral panics that allows us to examine how constructions of risk and expectations of youth and technology are articulated, enacted, and legislated.

The goal of this chapter is not to offer a comprehensive and exhaustive account of all attempted and actualized policies aimed at regulating young people’s use of technology. Nor is it to deeply analyze the moral panics and actualized risks about young people’s media use, as there already exists great empirical research about risk, youth, and media.1 As was noted in the introduction, this book aims to shift our focus away from the loud, prominent risks dominating media attention; however, such a move must be contextualized within the broader mediascape of dominant panics and concerns, which is what this chapter aims to do. Rather than chronicling all the panics in detail, the goal here is to use government-sanctioned policies as an entry point for examining how risk and anxiety are mobilized in society and, accordingly, how they shape expectations of youth and technology.

In her justification for studying sexting discourse through an analysis of policies, Amy Adele Hasinoff (2015, pp. 166–167) explains: “Law and policy texts position themselves as authorized by the institution of democracy. While political rhetoric usually advocates a position, it is designed to and often claims that it represents public interest and opinions even while attempting to persuade. Policy makers routinely rely on anecdotes and statistics to make their arguments, but there are no norms or standards that prevent gathering evidence from dubious sources.” Reliance on unsubstantiated claims and statistics is a common characteristic of many of the policies I address, which demonstrates how data are sometimes used to represent “public interest” even when the studies are not sound. Policies are certainly not the only area we could consider in order to investigate how discursive constructions of risk are mobilized, but they do offer a productive site of analysis because they are highly controversial: they appeal to mainstream public assumptions about youth and tend to generate substantial media and public attention. State-sanctioned regulations require policy makers, advocates, and opponents to articulate their competing viewpoints, which are fruitful sites of analysis. Further, if passed, policies have a measurable and visible impact on the day-to-day lives of youth. In sum, policies are an appropriate space for risk discourse analysis because they reflect and inform expectations of youth and technology.

Since the 1990s there have been three substantial waves of policies of panic regarding young people and digital media technologies. I refer to them as the porn panic, the predator panic, and peer fear. The three waves are not mutually exclusive and at times overlap; each offers a categorical organization for examining how media and lawmakers respond to the perceived risks associated with adolescents’ online practices. The porn panic refers to the fear that young people will be inadvertently “bombarded” with perverse pornographic content online, and more broadly to fears about minors’ access to inappropriate sexual content in general. The predator panic is similar in that the fear centers on concerns about sex(uality), specifically inappropriate contact between minors and adults who are “lurking online” for young, unsuspecting victims. Peer fear complicates discourses of youth and risk by focusing on harm associated with inappropriate behaviors among and between peers (rather than adults and adult content). Specifically, I examine the peer fears about cyberbullying and sexting, which are distinct but overlap in some instances. With peer fear, youth become a complicated site of discursive tension because there are no clear victims or perpetrators—an individual can simultaneously be both—and thus young people themselves are concurrently considered to be at risk and at fault.

Policies aimed at protecting young people and their digital media use and practices reflect harm-driven expectations and privileged perspectives of risk and harm. They construct young people in paternalistic and narrow ways that do not account for young people’s agency, discretion, consent, and contextualized practices and desires; instead they rely on overly restrictive and protectionist constructions of minors as vulnerable. Despite the fact that many of the studies and texts that contributed to the panics have since been debunked, the threat of risk continues to discursively construct the Internet as a dangerous space for young people. From Foucault’s perspective,2 research that identifies a population (in this case minors) as being “at risk” renders the population governable. The label “at risk” is used to justify control and intervention, often in the form of policies. The act of naming a population as being at risk and constructing the Internet as risky shapes discourse and expectations, which in turn implicates practice. In other words, if the Internet is constructed as a dangerous space, then young people are positioned as at risk, which positions policy as a necessary intervention. This is not to deny the existence of potential harms associated with young people’s online practices; however, it is to say that policies are constructed on the premise that risks should be entirely avoided. Foucault argues:

Truth isn’t outside power. … It is produced only by virtue of multiple forms of constraint. And it induces regular effects of power. Each society has its regime of truth, its “general politics” of truth; that is, the types of discourse which it accepts and makes function as true, the mechanisms and instances which enable one to distinguish true and false statements, the means by which each is sanctioned … the status of those who are charged with saying what counts as true. (1980, p. 131)

Within policies of panic, the accepted “truth” is that the Internet is inherently dangerous for youth and that safety must be upheld above all other values. Young people—and by extension their families—are tasked with the burden of enacting risk-avoidance strategies.

Policy interventions attempt to reduce the risks young people may encounter online, but they also aim to reduce adult anxiety. Jackson and Scott (1999, p. 86) assert that “risk anxiety helps construct childhood and maintain its boundaries.” In part this is because anxiety results from the continual historical perception of young people as innocent and in need of (adult) protection (Kincaid 1992). Scott, Jackson, and Backett-Milburn (2003, p. 700) write that “the social world of children is divided into safe and dangerous places which has consequences for children’s use of space, where they are allowed to go and the places they themselves feel safe in, frightened, or excited by.” Harm-driven expectations continually construct the Internet as a dangerous space and effectively construe all risk as harmful. Rather than enabling young people and empowering adults to help youth navigate risks, they attempt to prevent risky encounters and behaviors altogether—often with unequal consequences for different youth populations. Further, policies contribute to monolithic constructions of minors that fail to account for developmental, cultural, and emotional differentiations. In the remainder of the chapter, through an analysis of six federal policies and a few state policies, I examine the harm-driven expectations and consequences of privileged constructions of risk and youth.

Regulation Is Tricky

There are various ways in which we as a society aim to regulate young people’s use of digital media. Some parents actively limit how much screen time their children can have every day; other parents choose to put filters on the home router that block objectionable material; still others deny their children access to computers without direct parental supervision, or may require their children to earn screen time through good grades, chores, and other behaviors. In all these examples, parents are relying on different modes of regulation as a way to monitor their children’s behaviors and practices directly and indirectly. As was noted in the introduction, Lessig (2006, p. 124) categorizes the four constraints that function as modalities of regulation as architecture (or “code” in digital spaces), the market, norms, and law: “The constraints are distinct, yet they are plainly interdependent. Each can support or oppose the others. … Norms constrain through the stigma that a community imposes; markets constrain through the price that they exact; architectures constrain through the physical burdens they impose, and law constrains through the punishment it threatens.” All these variables—law, norms, market, and architecture—regulate behavior in different spaces and at different times, but one factor can present a greater regulatory constraint on a behavior than another. Take smoking as an example again. Minors’ access to cigarettes is most strictly regulated via laws, whereas adults’ smoking practices may be more notably regulated via social norms (e.g., whether or not their friends smoke) or by concerns about their health. All these variables are always already interacting, and at times certain constraints regulate more directly, intently, or transparently than others.

Because it is inherently difficult to directly regulate young people—owing to parents’ autonomy in raising their children, as well as the difficulties of regulating private businesses such as Internet service providers—public spaces become a way to indirectly regulate media and technology. Public schools and libraries are central spaces for risk interventions and regulations. In the United States there is a long history of the federal government’s mandating specific protections and educational initiatives to protect minors from real and imagined harms, including the sale and availability of tobacco and alcohol products, exposure to and the effects of advertising, data collection, obesity, and exposure to sexual and violent content. In view of this history, it is no surprise that the government would intervene in managing young people’s digital media practices as a way to protect minors from potential harms. In other words, the fact that the government is regulating practices is of little interest in and of itself. What is worth considering is how risks are produced in the first place. Understanding how risks are constructed allows us to interpret regulations from a value-laden perspective, rather than as a neutral intervening strategy of protection.

The Porn Panic

One of the earliest and still ongoing concerns about young people’s online experiences focuses on access to pornography and sexually explicit content. Obviously there are potentially detrimental consequences of exposing young people to graphic sexual content before they are emotionally mature enough to process and understand what they are experiencing. Yet research on the harmful effects of pornography is inconclusive (Bryant 2010; Owens et al. 2012; President’s Commission on Obscenity and Pornography 1970), and society’s continued focus on protecting young people from presumably harmful pornographic material reveals the normative expectations of childhood innocence. Harm-driven expectations clearly propel regulatory conversations related to pornography and sexual content.

Although it may seem simple enough to pass regulations that decrease the likelihood of young people coming into contact with online pornography, the ins and outs of such regulations are much more complicated. First, filters that block all sexually explicit material infringe upon adults’ rights to freedom of speech and autonomy of choice, and lead to complicated debates about censorship and morality (Godwin 2003; President’s Commission … 1970). Second, while most would agree that young people of a certain age should be barred from exposure to pornographic images (i.e., expectations that porn harms innocence), there is little consensus as to what that “certain age” should be. As will be demonstrated, conversations about porn and sexual content (including information about sexuality and sexual health) rely on constructions of childhood as a naturally (i.e., biologically) innocent developmental stage (Gabriel 2013) and presume that all exposure to sexual content will threaten innocence and result in harm. Third, what actually constitutes porn is elusive. “Definitions of ‘pornography,’” Attwood writes (2002, pp. 94–95), “produce rather than discover porn texts and, in fact, often reveal less about those texts than they do about fears of their audiences’ susceptibility to be aroused, corrupted or depraved.” Attwood argues that the indistinct definition of porn—which has been applied to Pompeian frescoes, to Shakespeare texts, and to a variety of erotic media (Kendrick 1987)—leads to confusion and regulatory challenges. Many policies that aim to regulate or prohibit access to sexual content fail to acknowledge young people’s deliberate and healthy desire for information, education, and understandings of their own emerging sexuality. Even if we as a society can come to an agreement about how to define porn and agree that it is a threat to young people, regulations become controversial and problematic when we try to put boundaries around rights of access.

In the sections that follow, I analyze three federal policies aimed at regulating pornography, but more broadly at sexual content writ large: the Communications Decency Act (1996), the Child Online Protection Act (1998), and the Children’s Internet Protection Act (2000). Like other media scholars (Mazzarella and Pecora 2007; Thiel-Stern 2014), I do not analyze the policies in isolation; I also take into account how journalism and media construct youth sexuality. Mediated discourses are important to consider because they have the power to name—and thus produce—risks; they work alongside policy to shape public opinions and expectations about youth and technology.

The Communications Decency Act

In 1996, Congress passed the Communications Decency Act (CDA), an attempt to regulate sexually explicit material; its most controversial and relevant section addressed indecency on the Internet. At that time, the Federal Communications Commission (FCC) already regulated indecent content on television and radio, but the Internet had not previously been affected by indecency policies in the United States. After the National Science Foundation Act opened up the Internet for commercial use in 1992, there was rising concern about the accessibility of inappropriate material, namely pornography. Arguably, the anxiety about the availability of sexual content was widespread, but, as will be demonstrated, minors became the locus of concern because they were easier to regulate and control than legal consenting adults. In order to understand the challenges of regulating sexual content, it is important to distinguish between obscenity and indecency from a legal perspective. Whereas obscenity3 is not granted protections of free speech under the First Amendment, indecent and erotic material is more subjective and open to interpretation. Historically, indecent speech has received First Amendment protection in the United States. Pornographic material includes both obscene and indecent content, thus making regulation difficult, since the US government cannot ban or censor non-obscene material.4

In an attempt to regulate pornography, CDA criminalized the transmission of “obscene or indecent” material to anyone under the age of 18. Because a sender cannot know who might have access to information posted online, CDA essentially limited adults’ access to sexual content and criminalized indecent speech, thus limiting adults’ First Amendment rights to freedom of speech. CDA, the product of what Godwin (2003) calls “The Great Cyberporn Panic of 1995,” was overturned almost immediately after opponents argued that it would have chilling effects on adults’ right to free speech. It was also argued that CDA infringed upon parental autonomy because it denied parents the right to decide what material was acceptable for their children (ibid.). Also contributing to the overturning of CDA was the fact that the language intended to protect young people from indecency was deemed too broad (Quittner 1996).

Though CDA was overturned, I want to back up and consider the context in which pornography and sexual content were being addressed in order to understand how media, journalism, and politics worked together to produce expectations of harm and to mobilize risk. When CDA was being debated in Congress, mainstream news media were also discussing the risk of pornography and inappropriate online material available to minors. In a 1996 paper presented at a conference of the Librarians Association of the University of California, Santa Barbara, Dorothy Mullin (1996) chronicled the “porn panic” of the early 1990s. She found that journalists and popular news sources reported on the ease with which young people could access pornography and sexual content on the Internet. Politicians and journalists drew analogies such as “the internet’s red light district” (ibid.) and confidently proclaimed the prevalence, pervasiveness, and perverseness of minors’ access to pornography on the Internet.

One of the most memorable and influential accounts that fueled the porn panic was a 1995 issue of Time that featured a cover photo of a boy (white, approximately 10 years old) staring at a computer screen, his face frozen in horror and fear. The headline “Cyberporn” spanned the cover, followed by “Exclusive: A new study shows how pervasive and wild it really is. Can we protect our kids—and free speech?” (Elmer-DeWitt 1995). The Time article made the public aware of the risk of access to pornography, although it did not actually demonstrate the harm of inadvertent access. Risk and harm were conflated as a way to construct technology as threatening and therefore in need of government regulation, essentially mobilizing a discourse of the Internet as risk throughout society.

The Time article relied on a study known as the Rimm Study (Rimm 1995), which incorrectly claimed that 83.5 percent of the images on Usenet (a popular online discussion system at the time)5 were pornographic or obscene. Although the claims have been debunked, the inaccurate statistics nonetheless worked to effectively construct the Internet as a scary and dangerous space for youth. The fallacious study produced harm-driven expectations that continue to be used as justification for regulations more than two decades later. The study was conducted by Marty Rimm, then an undergraduate engineering student at Carnegie Mellon University, and published as an article titled “Marketing Pornography on the Information Superhighway” in the Georgetown Law Journal, a non-peer-reviewed law journal. Since its publication the study has been criticized as misleading at best and has been completely discredited by other researchers as outright unsupported and inaccurate (Cohen and Solomon 1995; Godwin 1998; Hoffman and Novak 1995; Marwick 2008; Post 1995; Rheingold 1995).6 Nonetheless, the debunked statistics from the Rimm Study were repeatedly reported and used to justify CDA regulations and restrictions intended to protect young people.

Pointing to the ways media are implicated in inciting fear and panic, scholars (Godwin 1998; Mullin 1996) largely blamed the porn panic on the rhetoric of the Time article. The article was presented with the following headline and tagline:

ONLINE EROTICA: ON A SCREEN NEAR YOU

IT'S POPULAR, PERVASIVE AND SURPRISINGLY PERVERSE, ACCORDING TO THE FIRST SURVEY OF ONLINE EROTICA. AND THERE'S NO EASY WAY TO STAMP IT OUT

After an eight-sentence introduction about how easy it is to access pornography and erotica offline, the article’s author, Philip Elmer-DeWitt, turned his attention to online erotica. His language both incited and indicated fear: “[S]uddenly the press is on alert, parents and teachers are up in arms, and lawmakers in Washington are rushing to ban the smut from cyberspace with new legislation—sometimes with little regard to either its effectiveness or its constitutionality.” His evidence for this claim? The now-debunked Rimm report, which was to be released later that week. The article contained a bullet-point list describing the findings of the study: the pervasiveness of perverse content, details about how men are the dominant consumers, claims that online pornography is a worldwide phenomenon, and a discussion of how much money is made from the sale of these images. Elmer-DeWitt concluded that “the appearance of material like this on a public network accessible to men, women and children around the world raises issues too important to ignore” (p. 40). Right above this statement he explained that access to pornographic bulletin-board systems cost, on average, $10–$30 a month and required a credit card. On those grounds it is reasonable to assume that a majority of consumers were not children but rather consenting legal adults; nonetheless, children were lumped in as consumers. The article went on to discuss the benefits and negative consequences of the Internet: “This is the flip side of Vice President Al Gore's vision of an information superhighway linking every school and library in the land. When the kids are plugged in, will they be exposed to the seamiest sides of human sexuality? Will they fall prey to child molesters hanging out in electronic chat rooms?” (p. 40). Although the Rimm Study was about sexually explicit content (mostly scanned images of already existing pornography), the article lumped predators in with porn, thus further inciting fear and precipitating calls for government regulation.

In an effort to quell fears, Elmer-DeWitt addressed how difficult it would be for minors to access pornography accidentally:

According to at least one of those experts—16-year-old David Slifka of Manhattan—the danger of being bombarded with unwanted pictures is greatly exaggerated. “If you don't want them you won't get them” says the veteran Internet surfer. Private adult BBSs require proof of age (usually a driver's license) and are off-limits to minors, and kids have to master some fairly daunting computer science before they can turn so-called binary files on the Usenet into high-resolution color pictures. “The chances of randomly coming across them are unbelievably slim,” says Slifka.

Here we have a potential adolescent victim explaining that a minor’s chances of inadvertently accessing porn were “unbelievably slim.” What Slifka implied, of course, was that if minors were jumping through hoops and acquiring technical skills and means of payment to access pornographic content on Usenet, the access was deliberate. The panic construed all exposure to porn as harmful and ignored that exposure can also be intentional, and not necessarily harmful. Although the Time article criticized the Rimm Study (particularly for the fact that the data were taken from self-selected users interested in erotica), presented other similar studies that revealed less shocking findings, addressed the complications of regulating the Internet, and quoted experts and politicians on both sides of the debate, it nonetheless contributed to the rising panic about minors’ access to porn online. Toward the end of the article, Elmer-DeWitt mused: “How the Carnegie Mellon report will affect the delicate political balance on the cyberporn debate is anybody's guess.” What unfolded was a continued reliance on the study as evidence of risk and harm.

Despite the limitations of the study, Senator Chuck Grassley (R-Iowa) entered the study into the Congressional Record when he relied on its data and rhetoric as the basis for his Protection of Children from Computer Pornography Act of 1995. Senators James Exon (D-Nebraska) and Slade Gorton (R-Washington) used data from the study when they co-sponsored the CDA bill. The misleading, fear-mongering, and discredited study fueled a porn panic that was taken up by the media and by politicians.7 Fiction was functioning as truth (Walkerdine 1997) in that fallacious studies were used by authority figures and experts to justify policies. Although CDA was overturned, the expected and perceived threat of pornography has continued to shape policies and practices more than twenty years later. On the twentieth anniversary of the publication of the Time article, Elmer-DeWitt wrote an article for Fortune explaining the fallout and how the article shaped his career and fueled a panic. He even disclosed that a Time researcher assigned to his story remembers the study as “one of the more shameful, fear-mongering and unscientific efforts that we [Time] ever gave attention to” (Elmer-DeWitt 2015).

The combination of the Rimm Study, the Time article, and the article’s use as fodder for politicians and “expert” opinion allows us to trace how harm-driven expectations produced a discourse of fear and risk that fueled a moral panic and increased calls for restrictive regulations. Congress’ attempt to regulate the Internet was an example of collective harm management overriding self-responsibilization—particularly because government regulation was not the only mode of protection at this time, as is addressed in the following section.

The Child Online Protection Act

Congress tried again in 1998 to draft a policy that would protect minors from inappropriate material online (again, primarily pornography). The Child Online Protection Act (COPA) was a watered-down version of CDA. Unlike CDA, COPA attempted to restrict only commercial communication and affected only Internet service providers (ISPs) in the United States. Rather than criminalizing the transmission of sexual content to minors, the Act attempted to regulate the private sector by requiring ISPs to restrict minors from accessing sites that contained materials deemed “harmful to minors.” COPA defined “harmful to minors” much more broadly than obscenity, encompassing material that “appealed to prurient interests” as judged by “contemporary community standards” (Child Online Protection Act, 1998, section 231). This included all sexual acts and human nudity, including all images of female breasts but not male nipples, a sexist double standard that renders the female body categorically and inherently sexual, and thereby explicit.8 Most detrimentally, such a broad definition also blocked minors’ access to educational information, including health and sexual information online, even though access to sexual health education is valuable to adolescents’ developing sexual identities.

In contrast, an opportunity-driven approach would recognize that increased access to sexual health information carries risks but also presents an opportunity to promote healthy and safe sexual exploration and development. As will be further addressed in the next chapter, offline sex-education material may be available to some youth, but the Internet provides an accessible way for minors to seek out information in private. This is particularly beneficial for young people whose questions, desires, and sexual orientation may not align with parental and cultural expectations. In addition to educational resources, the Internet also provides a supportive space and community for gay, lesbian, transgender, queer, and questioning youth to explore their sexuality and sexual identities (Gray 2009; Kanuga and Rosenfield 2004; Vickery 2010). Yet COPA framed youth sexuality as inherently harmful and attempted to deny minors access to sexual content.

Interestingly, although COPA insisted upon stricter regulations of private businesses, the congressional findings also recognized that the industry was already attempting to provide parents with ways to protect their children. In other words, the controversy had less to do with whether we should restrict young people’s access to sexually explicit (and educational) content than with who should be responsible for such regulations, and how. There were other modalities of regulation in place that balanced risk and opportunities, including the market, norms, and technological solutions. The congressional findings included the following statement: “To date, while the industry has developed innovative [technological] ways to help parents and educators restrict material that is harmful to minors through parental control protections and self-regulation, such efforts have not provided a national solution to the problem of minors accessing harmful material on the World Wide Web” (congressional findings, COPA, 1998, section 1402). The report acknowledged technological advances that enabled parents to protect their children, yet the problem was explicitly constructed as a national problem that required a national (read: government) form of regulation. Such language is indicative of the ways in which risk discourse contributed to a panic that deemed parents’ and educators’ self-regulatory behaviors insufficient means of intervention and protection. With COPA we see how moralization discourse called for government intervention and constructed sexual content not as a risk, but as a harm (despite the fact that significant harm had not been clearly demonstrated). This is an example of the way harm-driven expectations work to push collective harm management, rather than self-responsibilization (Hunt 2003; Hier 2008) and education, to the front line of public discourse.

COPA was eventually deemed unconstitutional for infringing on the protected speech of adults. It was also criticized for its broad language, which made it difficult to define or enforce. Lowell A. Reed, a US District Court judge, struck down the law for violating the First and Fifth Amendments and interestingly added that “perhaps we do the minors of this country harm if First Amendment protections, which they will with age inherit fully, are chipped away in the name of their protection” (Urbina 2007). Youth in the United States are often denied legal rights they later attain as adults. Often the more productive approaches—stemming from opportunity-driven expectations—are parental regulation, education, and professional guidance that help youth safely (and eventually autonomously) navigate or avoid risks, rather than harm-driven expectations that deny minors legal rights.

Additionally, Judge Reed’s remarks highlight the continued discursive binary of child/adult, which ignores the complexity of “youth,” a category that is neither fully child nor legally adult. Youth occupies a transitional period from childhood to adulthood and exposes the limits of the discursive boundary (Gabriel 2013). At the age of 18, young people are automatically granted the rights of adults. But, as Judge Reed hinted, perhaps young people ought to gradually attain rights and responsibilities—to legally “come of age.” Such an approach would recognize youth as a transitional period between childhood and adulthood, rather than an absolute either/or existence. For that reason, regulation could protect the most susceptible and vulnerable populations (e.g., young children) while granting older youth responsibility and rights.

Looking Beyond the Law

There have been attempts to account for complexity within our legal understandings of protection, risk, and harm; however, the logistics of passing such nuanced regulations are complicated. When Congress initially passed COPA, it also created an eighteen-member committee whose purpose was to “identify methods to reduce minors’ access to harmful material on the internet” (Goldstein 2002, p. 1190). After two years of evaluation, the panel recommended that libraries promote public awareness of technological tools available to protect adolescents, that schools and libraries adopt acceptable-use policies, and that policies focus on the design and adoption of curricula and education intended to protect adolescents. Remarkably absent from the committee’s recommendations was any mention of requiring filtering software in libraries; instead the focus was on education and self-regulation. One member of the committee—Jerry Berman of the Center for Democracy and Technology (a free speech advocacy group)—wrote the following: “Acknowledging the unique, global character of the Internet, the commission concludes that new laws would not only be constitutionally dubious, they would not effectively limit children’s access to inappropriate materials. The Commission instead finds that empowering families to guide their children’s Internet use is the only feasible way to protect children online while preserving First Amendment values” (Statement of COPA commissioner Berman, 2002). This approach would have allowed other modes of regulation (e.g., norms, the market), rather than legal restrictions, to offer protection.9 Additionally, it would have allowed for a more continuum-based construction of youth and would have empowered adults to exercise discretion in protecting young people.

The congressional panel was not the only authority to suggest less restrictive means of regulation; other experts did so as well. For example, in the mid 1990s the World Wide Web Consortium (W3C) suggested a model of industry self-regulation similar to the self-regulation implemented by the Motion Picture Association of America (MPAA) and the National Association of Broadcasters.10 The W3C recommended embedding a voluntary rating system within the HTML protocol that would enable parents to block certain content. The Internet Engineering Task Force was also working on embedding a ratings system into web addresses (Abernathy 1995). This would have allowed the industry to empower parents, schools, and libraries to filter content on the basis of self-regulation and localized community standards rather than overarching government intervention. It would also have enabled schools to implement filtering systems that account for increasing maturity and responsibility. Elementary schools could block out material deemed inappropriate for young children; high schools could choose to allow more mature content, such as sex-education material or nude art. (A sketch of how such a rating label might have worked appears below.)
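To make the W3C proposal concrete: it was published in 1996 as the Platform for Internet Content Selection (PICS). A site operator (or a third-party rating bureau) attached a machine-readable label to a page, and browser or proxy software configured by a parent, school, or library compared that label against locally chosen thresholds before displaying the page. The hypothetical label below is a minimal sketch loosely modeled on the PICS 1.1 syntax and the vocabulary of the RSACi rating service (n, s, v, and l for nudity, sex, violence, and language, each scored from 0 to 4); the page URL and the scores are illustrative placeholders, not a working configuration:

<!-- hypothetical example: the example.com URL and the zero scores are placeholders -->
<meta http-equiv="PICS-Label" content='(PICS-1.1
    "http://www.rsac.org/ratingsv01.html"
    labels for "http://www.example.com/health/sex-ed.html"
    ratings (n 0 s 0 v 0 l 0))'>

Because enforcement happened in the client software rather than on a central server, the same label could be read permissively by a public library and restrictively by an elementary school, which is precisely the localized, self-regulatory model described above. However, when Congress once again attempted to regulate indecency, this time via the 2000 Children’s Internet Protection Act (CIPA), paternalistic restrictive regulation continued to take precedence over education and parental guidance.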

The Children’s Internet Protection Act

After CDA and COPA were struck down as unconstitutional, Congress tried yet again to restrict minors’ online engagement via law and technical restrictions. In 2000, Congress passed the Children’s Internet Protection Act (CIPA). After many challenges in court, CIPA was upheld as constitutional by the US Supreme Court in 2003. It is still in effect at the time of writing. Unlike CDA and COPA (which attempted to directly regulate access to inappropriate material), CIPA relied on existing economic regulations and government-funded institutions as a way to indirectly regulate content. Under the universal service doctrine of the Communications Act of 1934 (revised by the Telecommunications Act of 1996), many schools and libraries receive federal subsidies for telecommunication services (e.g., telephones, computer equipment, and Internet service); these subsidies are colloquially referred to as E-rate discounts, and their recipients as E-rate schools.

Unlike earlier attempts at regulation, CIPA did not criminalize indecent material writ large, nor did it require schools or libraries to block access to inappropriate content; instead it required recipients of E-rate funding to utilize technical filters to block access to inappropriate material on all computers accessible to minors. Specifically, CIPA required all K–12 public schools and public libraries that receive E-rate funding to enable technology that blocks access to obscenity, child pornography, or content harmful to minors. CIPA also required E-rate recipients to adopt and enforce a policy to monitor and surveil the online activities of minors (FCC Guide to CIPA, 2000). Blocking access to obscenity and child pornography is rather straightforward and mostly uncontroversial, as obscenity is not granted First Amendment protections and child pornography is illegal. “Content harmful to minors,” on the other hand, is a much broader, more subjective, and more controversial stipulation that opens up a multitude of interpretations as to what constitutes harm. As has already been noted, risk and harm are often conflated within discourses of risk, and CIPA’s language allowed for the censorship of all content that presented even the threat of harm.

The E-rate program is an example of how governments indirectly regulate practices. With CIPA, the constraint of the market is used to indirectly fulfill legal objectives that had otherwise been ruled unconstitutional. As a similar example, Lessig (2006) recounts how the Reagan administration required doctors in federally funded clinics to advise patients against abortion as an appropriate method of family planning. This was not necessarily the medical opinion of the doctor, but rather government regulation intended to indirectly reduce abortion rates. Through indirect regulation, the government “gets the benefit of what would clearly be illegal and controversial regulation without even having to admit any regulations exist” (ibid., p. 135). Lessig asserts that indirect regulation is not necessarily problematic, but he argues that transparency of regulation is of the utmost importance in a democratic society. Similarly, by making technical filters a requirement for E-rate funding recipients, CIPA did not directly regulate access to indecent content online; rather, it relied on pre-existing market constraints. Many of the more than 100,000 schools and libraries that serve low-income populations have little choice but to accept E-rate funding, and therefore they must also comply with the restrictions of CIPA (Patton 2014; Long-term strategic vision … 2009). The very population CIPA was intended to protect—adolescents—is the one disadvantaged by restrictive policies that block access and opportunities for learning. Harm-driven expectations continue to dominate our perceptions of risk, and thus outweigh the potential benefits of a less restrictively filtered Internet. As is further discussed in the next chapter, the unintentional and trickle-down effects of CIPA exacerbate rather than alleviate risks when enacted in high schools serving low-income and marginalized populations.

It is important to note that often it is adult institutions that benefit from a moral-panic discourse of youth at risk. The risk discourse positions young people as the scapegoats for broader social, economic, and political problems. Insisting upon boundaries between adulthood and adolescence “enables adults and their institutions to blame youth for a variety of problems created by those very same adults and adult institutions” (Mazzarella 2003, p. 238). I argue that constructing minors as at risk within the cyberporn discourse subjects them to surveillance and regulation and presents the “at-risk” adolescent as the social problem, when in fact the actual problem is morally conservative America’s desire to contain the “deviant” desires of adults who access porn online. Moral-panic discourses divert attention away from (socially constructed) deviant adult behaviors—behaviors that are difficult to control—and instead exert control over the boundaries of childhood. Cloaked as policies purely intended to protect young people from pornography, CDA and COPA infringed upon adults’ right to freedom of speech and can be interpreted as attempts to regulate the sexual morality of American citizens. This echoes Stuart Hall’s (1978) assertion that moral panics serve as vehicles for disseminating dominant ideology—in this case America’s continual struggle over sexual morality as represented through debates about pornography.

The Predator Panic

Concerns about porn have not been completely quelled, but around 2005 risk anxieties were displaced by louder concerns regarding online predators—largely as a result of evolving technologies and changes in how young people engaged with the Internet around that time. The introduction and adoption of social network sites such as MySpace in 2003 and Facebook in 2004 precipitated a shift in how young people engaged and communicated online. The popularity of social network sites allowed teens to create profiles to stay in touch with friends, form communities, and virtually “hang out” online in ways strikingly similar to the ways teens hang out in physical spaces such as malls and parks (boyd 2007; Livingstone 2008). As social network sites such as Facebook and MySpace became increasingly popular, so too did concerns about young people’s interactions with strangers. “Stranger danger” fears have a much longer history in the public imagination. First incited via public-service media campaigns in the 1970s and the 1980s, they continued to gain public attention as online chat rooms gained popularity in the mid 1990s; they have only intensified as teens make themselves more visible and accessible via social network sites. As in the porn panic, misleading statistics linking social network sites to unwanted sexual solicitation amplified fears about online child predation.

Quantitative data demonstrate that, although online social network sites pose some risks, the risk of sexual solicitation is extremely low (Finkelhor 2013; Hinduja and Patchin 2008; Rosen 2006). Nonetheless, fears about “stranger danger” are on the rise (Keohane 2010; Skenazy 2009), even though research tells us that there has never been a safer time to be a child in America (Ingraham 2015). News reports and crime dramas often espouse the risk of child abduction and harm at the hands of a stranger, yet the most likely place for a child to be exploited, molested, or assaulted is in the home, by someone the child knows, not by a stranger (Finkelhor 2011, 2013). According to the Bureau of Justice Statistics, the rate of missing-person reports for children has fallen more than 40 percent since 1997 (Cooper and Smith 2011). Both state and national trends consistently report a decline in the number of crimes committed against children (Finkelhor 2013). David Finkelhor, director of the Crimes Against Children Research Center, points out that only 0.01 percent of all missing children in the US are taken by strangers or slight acquaintances; that means the overwhelming majority of child predators are friends or family of the child (ibid.). Lenore Skenazy (2009), in her book Free-Range Kids, points out the irony of driving children to school or the bus stop to keep them safe rather than allowing them to walk: children are 40 times more likely to be killed in a car accident than to be abducted on the streets. Nevertheless, fears about strangers far outweigh fears about riding in a car (which is constructed as a quotidian experience). Skenazy also notes that, statistically speaking, a child would have to be left alone in public for 750,000 years before a stranger abduction would be a certainty (Friedersdorf 2014). The point of these statistics is that there is an inherent risk in everything a child does, including riding in a car, walking alone in public, and creating a profile on a social network site. A child may also be harmed by falling down stairs (a leading cause of death; see Iyamba 2012), by choking on his or her lunch (another leading cause of death; see Nationwide Children’s Hospital, 2010), or by a freak heater malfunction (Blackburn 2016), yet we understandably do not discourage or outlaw young people from using stairs, eating, or staying warm. Statistics are a way to identify and quantify harm; they can also be used to incite fear.

There is little evidence to support “stranger danger” rhetoric, yet it has worked to effectively construct the home as the locus of control and safety for young people and has contributed to a construction of the (online) public as a dangerous place. Media have played a pivotal role in shaping these perspectives. Benjamin Radford (2006) goes so far as to blame the online predator panic directly on media sensationalism. In particular he notes that the popular NBC Dateline series To Catch a Predator11 played a significant role in fueling the panic. He accuses the show of being misleading because it incorporated inaccurate or even made-up facts. The show used adults as decoys pretending to be underage children. After conversing online in increasingly sexually explicit and graphic ways, the predator would agree to meet the “child.” When he arrived, the host, Chris Hansen, would ask the predator “to take a seat”; he would then reveal that the conversations had been staged. The show concluded with the predators’ guilty, shocked, and shamed reactions to getting caught. To Catch a Predator made it seem as though lewd men were lurking around every virtual corner, just waiting to victimize a child at any moment. The perverseness of the predators’ online conversations was often highlighted, including unnecessarily graphic and sensationalized details of sexual acts.

To Catch a Predator is important to consider because, by definition, moral panics must be publicly accessible (Marwick 2008). Additionally, according to Cohen (1972), moral panics rely on “ready-made stock images” (p. 57) and serve as “dominant vehicles for diffusion” (p. 63). The series relied on the conception of the stereotypically innocent and naive child and the inherently vile and perverse predator; it went to great lengths to include the perverse and graphic ways predators conversed with their potential targets, which understandably contributed to and incited fear in parents. The media scholar Alice Marwick (2008) chronicles the 2006 cyberpredator panic, which she refers to as a technopanic: a panic about sexual predators on MySpace, the most popular social network site at the time. She asserts that technopanics have three defining characteristics: “First, they focus on new media forms, which currently take the form of computer–mediated technologies. Second, technopanics generally pathologize young people’s use of this [sic] media, like hacking, file-sharing, or playing violent video games. Third, this cultural anxiety manifests itself in an attempt to modify or regulate young people’s behavior, either by controlling young people or the creators or producers of media products.” As in the porn panic, popular discourse constructed MySpace (and teen-populated social network sites and chat rooms writ large) as a space for pedophiles to prey upon children, who were discursively constructed as innocent, vulnerable, and lacking the maturity or agency to protect themselves.

Child predation is a legitimate concern and one that we as a society ought to actively take steps to minimize and prevent. However, the predator panic of the mid 2000s falsely blamed technology for increasing the risk of predation when no data existed to support the claim. In fact, Finkelhor (2013) points out that the decline in crimes against children can actually be partially attributed to an increase in available technology.12 This is in stark contrast to rhetoric that blames technology for an imagined increase in crimes committed by strangers. The focus on strangers as suspect reflects a privileged understanding of risk and sexual harm. Middle-class parents are increasingly restricting the public spaces their unaccompanied children are allowed to occupy (boyd 2014; Livingstone 2008; Skenazy 2009), which is itself a privileged choice. Working-class and poor children often do not have the same options for supervision as children from middle-class homes. Because of financial instability and precarious living situations, working-class children are more likely to spend time alone after school, to care for younger siblings, to be in the care of someone other than a parent, to rely on public transportation, bikes, and walking to get home from school or to and from work, and so forth. Thus, while all children are at risk of sexual predation by family members and acquaintances, working-class children are more likely to spend time unaccompanied in public (Dodson et al. 2012; Lareau 2003).

The Internet has become the new public, and it has given rise to growing middle-class concerns about child predators. At the beginning of the predator panic, middle-class youth were more likely to be using social network sites—or at least to have reliable and frequent access to the sites—than were working-class and poor youth (Lenhart, Madden, and Hitlin 2005). This distinction reflects the classed nature of what garners public attention and what gets constructed as risky. More than ten years later, teens’ participation on social network sites has become commonplace, yet fears about online sexual predators are still frequent and visible. While working on this book one afternoon, I happened to read a story on the website of WFAA, the local Dallas ABC affiliate, about predators’ using games to lure children. In the story (Eiserer 2015), a 10-year-old white girl was sent an inappropriate message while playing a game on her phone. “It was frighteningly simple,” the opening line read, “for 10-year-old Olivia to accidentally connect with a child sex predator through a game.” In response to the incident, the girl’s mother said: “We don’t let our kids walk anywhere. We don’t let them go out by themselves. If they’re outside riding bikes, somebody’s out there with them. That’s who we are. We know bad things can happen.” Such rhetoric is consistent with middle-class fears of the public and reflects the options middle-class parents have in supervising their children—options not always available to working-class families. Hence the privileged way the problem is framed. The mother’s comment “That’s who we are” reveals that she believes she is doing “everything right,” that she is not a “bad” parent, and that her child does not fit the description of a child at risk. Yet even within this middle-class home children are at risk. Notably, Olivia immediately told her parents about the incident, the user was blocked, and no real harm was encountered; nonetheless, the risk itself reasonably incites fear even in the absence of harm.

Online “stranger danger” incidents are exceedingly rare and stand out as exceptional, yet stories such as these continue to make the news and provide narrative fodder for crime dramas precisely because they are about “good” children with “good” parents. They disrupt our understanding of who is at risk. The story of the abused runaway from a broken home is far less likely to attract media attention than an attack on a child we collectively perceive as privileged—and thereby innocent and entitled to protection. Research consistently demonstrates that marginalized youth—youth of color, youth from poor homes, and youth with unstable family lives, among others—are not granted the same presumptions of and entitlement to innocence as are white, middle-class youth from “good” homes. Marginalized young people are constructed as “deviant,” and their stories of risk and harm are much less likely to make the news (unless they are in trouble with the law) and remain largely absent from our collective imaginations (Giroux 2009; Hasinoff 2015; HoSang 2006; Rios 2006). Stories of privileged youth, such as the WFAA story (and so many like it), are disproportionately more likely to catch the attention of the media than are the more frequent scenarios of sexual abuse to which runaways, homeless youth, and other marginalized young people are subjected (Fernandes-Alcantara 2013).

My point isn’t that we shouldn’t be concerned about child predation (we should), but that we ought to consider the root of the problem, understand who is actually at risk, and approach the problem from a less myopic view than one that homes in on the “cyber” or “online” nature of the crimes. What we see instead are policies that aim to regulate and restrict how young people use social media—platforms that are integral to their ability to communicate with peers, construct identities, seek out information, engage civically, and form supportive communities—rather than controlling the adult predators who are the actual problem. In the sections that follow I examine the Deleting Online Predators Act and the Protecting Children in the 21st Century Act in order to explore the production and mobilization of predator risk discourse.

The Deleting Online Predators Act

It is important to note that the Children’s Internet Protection Act did not require schools to block access to social network sites, yet many schools have chosen to block students’ access to all social media. The 2006 Deleting Online Predators Act (DOPA), which was passed by the House of Representatives by a vote of 410–15 but was never voted on by the Senate, would have required schools receiving E-rate funding to block access to all social network sites and chat rooms. Social network sites were broadly defined as commercial sites that “(i) allow users to create web pages or profiles that provide information about themselves and are available to other users; and (ii) offer a mechanism for communication with other users, such as a forum, chat room, email, or instant messenger” (section 3, Deleting Online Predators Act). DOPA contributed to the acceptance of the Internet as a dangerous space in particularly gendered ways by relying on a media discourse that constructed girls as especially vulnerable and at risk of predation. Although the bill was overwhelmingly passed by the House, opponents worried that it was overly restrictive and would block access to potentially educational or useful websites, such as Amazon, blogs, and wikis, that might fall under its broad definition of social network sites.

DOPA eventually failed, but a decade later its legacy still shapes policies and practices in many schools. Despite little to no evidence that social network sites pose an increased threat to the well-being of students, many schools (including Freeway High, as will be discussed in chapters 3–5) continue to block students’ access to social network sites. The Internet certainly presents risks for young people (and adults); however, as has already been noted, there is little evidence to suggest that young people are at greater risk online than they are elsewhere (e.g., at home, at school, or on a playground). The Crimes Against Children Research Center reports that since the rapid adoption of the Internet “sex crimes overall and against adolescents have dropped dramatically in the US” (Finkelhor 2011, p. 5) and that crimes against and by youth more generally have been declining over the same period.13 Nonetheless, harm-driven expectations have led to a perception that the Internet poses a greater threat to young people than it actually does (Holloway and Valentine 2003; Olfman 2008; Livingstone 2009; Finkelhor 2011)—a perception that was fueled by cyberporn and cyberpredator panics and validated by the introduction of policies such as those specified by DOPA.

The Protecting Children in the 21st Century Act

After dying in Congress, DOPA was reintroduced in 2007 as the Protecting Children in the 21st Century Act (part of the broader Broadband Data Improvement Act). Rather than completely banning social network sites, the 2007 version required E-rate funded schools to “protect against access to a commercial social networking website or chat room unless used for an educational purpose with adult supervision” (Protecting Children in the 21st Century Act, section 203, ii).14 Requiring direct adult supervision of all social media practices puts an undue and unreasonable burden on teachers. In practice, most schools probably would have chosen to deny access altogether, rather than actively monitor and be held liable for students’ social media practices.

However, in 2011 regulators began to acknowledge the educational value and potential of social media. In a positive move, the Protecting Children Act amended CIPA, and the FCC directed that by the 2012 fiscal year schools’ Internet safety policies must include provisions for educating students about appropriate online behaviors and interactions on social network sites. Rather than banning social media altogether, the FCC publicly noted that “social networking websites have the potential to support student learning” (Donlin 2011). The Protecting Children Act was not necessarily reflective of a decreased concern about risk; rather, it can be viewed as an appropriate response to an increased concern about cyberbullying (ibid.) (or, more accurately, about relational aggression among peers, which is the focus of the next section). Additionally, the FCC concluded that, although individual pages on Facebook or MySpace might pose a risk to minors, those sites are not in and of themselves “harmful to minors.” As a result, they did not fall into a category of websites that must be blocked.

This is a positive step for schools, teens, and educators who have long acknowledged the educational potential of social network sites, as will be more fully explored in the second half of the book. Stephanie Winfrey, senior compliance specialist at Funds For Learning, a leading E-rate compliance services firm, applauded the adoption of the additional guidelines, adding that the changes should help to “foster a generation of responsible Internet users” (FCC releases Order … 2011). This is perhaps the first time we see evidence that students’ positive online experiences can influence policies aimed at reducing risk. CDA, COPA, CIPA, and DOPA were all shaped by harm-driven expectations that consistently constructed the Internet as a harmful space and youth as passive subjects. The Protecting Children Act, however, implemented opportunity-driven expectations by taking into consideration the positive values of social media. The act appropriately enabled schools to play an active role in helping youth navigate social media and positioned educators and students as experts of their own experiences. Empowering educators is a notable shift in risk discourse because it breaks away from sensational and exceptional narratives of harm and takes a more nuanced approach that aims to balance risk with opportunity rather than simply trying to eliminate risk altogether. At the same time, E-rate funded schools now face the challenge of remaining compliant with CIPA while also deciding whether and how to incorporate social media into their classrooms. Many schools still choose to block social media, but at least the choice rests with the school rather than with a federal mandate. This enables schools to exercise discretion and to develop curricula, scaffolding, and policies that fit the localized needs, values, and practices of their communities.

Peer Fear: Cyberbullying and Sexting

Back in 2008, I wrote about the suicide of 13-year-old Megan Meier, who had hanged herself after being bullied via her MySpace account (Vickery 2008). Little did I know at the time that the case would become a locus for all sorts of cyberbullying concerns, attention, and legislation. In the decade since her death there have been far too many similar stories in which teens, many of whom faced mental health challenges, have taken their own lives after experiencing bullying, rejection, and aggression from peers. I want to revisit Megan’s story not only for the media attention it garnered and the legal questions it posed, but also because it complicates constructions of the child/adult binary and serves as an appropriate introduction to what I call peer fear. Peer fear refers to the fear and anxiety about the ways young people communicate and interact with one another via digital and mobile media; it shifts the focus away from strangers and onto the interpersonal relationships between young people. As examples, I specifically consider the panics and legislation that have circulated around two headline-grabbing concerns: cyberbullying and sexting.

Cyberbullying Becomes a National Concern

To begin, let’s take a deeper look at the coverage of Megan Meier’s story. The words “cyberbully” and “bully” were notably absent from both the local coverage (the story was covered for years on STLtoday.com, the website of the St. Louis Post-Dispatch) and the national coverage. Megan’s suicide became the locus of media and legal attention for what we now refer to as cyberbullying,15 yet at the time the media were not identifying or naming the problem as such. The act of labeling something, of categorizing it—be it an object, a population, or a phenomenon—gives it credibility and power.16 Language both creates and alters perceptions of the world and is constituted by, and constitutive of, power, values, and ideologies. “By naming something,” Armstrong and Fontaine write (1989, pp. 7–8), “one actively carves out a space for it to occupy, a space defined by what one values in the phenomenon and by how it appears to be like or unlike other parts of one’s world view.” The term cyberbullying intentionally draws attention to the presumably unique aspect of the online environment, the “cyber” element of bullying. Identifying cyberbullying as a phenomenon works to incite fear and panic. Labeling cyberbullying as a unique and unprecedented problem produces harm-driven expectations by identifying it as something distinctive, harmful, and worthy of attention. Nonetheless, at the time of Megan’s death the term was not being invoked in a story such as hers, a story that today would undoubtedly garner a cyberbullying headline.

The year 2007 appears to have been the “tipping point” (Gladwell 2000) at which cyberbullying garnered national concern. The term was taken up in various spaces within popular culture, media, and research. A public-service campaign to end cyberbullying was launched by the Advertising Council (in partnership with the National Crime Prevention Council, the US Department of Justice, and the Crime Prevention Coalition of America), which released an anti-bullying video as part of an ongoing educational campaign. In June, the Pew Internet and American Life Project released its first report that included data about cyberbullying and online harassment of youth (Lenhart 2007). Harris Interactive, a market-research firm that tracks popular trends and publishes poll results, devoted an entire issue of its newsletter Trends and Tudes to youth and cyberbullying, written by Chris Moessner, its Research Director for Youth and Education (Moessner 2007). That same year, the Centers for Disease Control and Prevention described cyberbullying as an “emerging public health problem” (David-Ferdon and Hertz 2007; Hertz and David-Ferdon 2008).

There has been much research and funding directed toward “solving” cyberbullying as a social and legal problem, but it has proved particularly challenging to regulate legally, for several reasons. First, regulation of interpersonal communication often infringes upon freedom of speech. Second, peer relationships cross the boundary between on- and off-campus life, raising questions about when schools may regulate off-campus behaviors. Third, it can be difficult to demonstrate when cyberbullying constitutes harm or presents a credible threat rather than playful teasing. Fourth, online speech is often anonymous, pseudonymous, and/or collaborative, and thus it is difficult to identify wrongdoers. Despite the practical and legal challenges to regulation, there is a general consensus that someone ought to do something about cyberbullying. The cry intensifies every time there is another teen suicide that is allegedly the result of peer aggression, harassment, and bullying. “Cyberbullying,” the legal scholar Alison King writes (2010, p. 848), “is already too grave a problem to be ignored, and it is quickly escalating with the proliferation of Internet use and the popularity of social-networking sites.” Such a statement reveals the taken-for-granted assumption that the problem must be addressed and regulated. King goes on to declare that “the time has come for legislative action” (p. 849). However, as she and other legal scholars and school administrators are well aware, the logistics of regulation are challenging. Further, it is unclear whose responsibility it is to regulate cyberbullying: should it be schools, the federal government, parents, or the social media industry and its platforms?

State Laws Address Cyberbullying

To date, there are no federal anti-bullying laws. However, in April 2015, Montana became the fiftieth and final state to enact an anti-bullying law (Baumann 2015)17; the anti-bullying laws of all but eight states address cyberbullying directly. The majority of state laws address bullying at the (public) school level by describing, at a minimum, what (public) school district policies must address regarding bullying. The laws share some components but vary greatly, most notably in how they define bullying. According to a report prepared by the US Department of Education (Stuart-Cassel, Bell, and Spring 2011, p. 25), “some state laws focus on specific actions (e.g., physical, verbal, or written), some focus on the intent or motivation of the aggressor, others focus on the degree and nature of harms that are inflicted on the victim, and many address multiple factors.” Notably, such definitions are not consistent with research-based definitions of bullying, which focus on a “repeated pattern of aggressive behavior that involves an imbalance of power and that purposefully inflicts harm on the bullying victim” (ibid., p. 1). Many of the laws conflate “bullying” and “harassment,” which, although similar, have different legal definitions: “Harassment is distinguishable from more general forms of bullying in that it must be motivated by characteristics of the targeted victim. It is generally viewed as a subset of more broadly defined bullying behavior. Harassment also violates federal civil rights laws as a form of unlawful discrimination” (ibid., p. 17).

Not surprisingly, there is much disagreement about whether schools can regulate student speech, particularly when it occurs off campus. Regulating student speech both on and off campus has some legal precedents,18 but still poses challenges to students’ First Amendment rights (King 2010).

Only two states (Massachusetts and Rhode Island) specify that disciplinary action must be balanced with education about appropriate behavior (Sacco et al. 2012). The state laws are overwhelmingly trending toward a legal approach that aims to punish and criminalize bullying (cyber or otherwise) rather than to address the larger social context and implications of bullying. This is consistent with the increased criminalization of youth practices and behaviors (Giroux 2009). The Department of Education reports:

Recent state legislation and policy addressing school bullying has emphasized an expanded role for law enforcement and the criminal justice system in managing bullying on school campuses. Though historically, authority over youth bullying has fallen almost exclusively under the purview of school systems, legislation governing the consequences for bullying behavior reflects a recent trend toward treating the most serious forms of bullying as criminal conduct that should be handled through the criminal justice system. … An increasing number of states also have introduced bullying provisions into their criminal and juvenile justice codes. (Stuart-Cassel, Bell, and Spring 2011, pp. 19–20)

Similarly, the Megan Meier Cyberbullying Prevention Act took a punitive and criminalizing approach to addressing cyberbullying. In 2009, Representative Linda Sanchez (D-California) brought the legislation before the US House of Representatives, but it was not enacted. Both Democratic and Republican representatives feared the bill was too broad and would have chilling effects on free speech. The bill proposed that violations be prosecuted as a felony rather than a misdemeanor.19 Representative Louie Gohmert (R-Texas) said the legislation “appeared to be another chapter of over-criminalization [of minors]” (Kravets 2009).

The majority of state provisions, like the Megan Meier bill, rely heavily on punishing bullying only after it has occurred and are unlikely to deter a would-be bully from harassing someone. We have seen that when young people are framed as social problems, they become subjects of control, surveillance, and criminalization. The focus of many of these policies is on the perpetrator rather than on the target, let alone on the social conditions leading to bullying. Many state and school policies ignore the reality that numerous perpetrators are also victims, a reality that disrupts and complicates our understanding of the victim/offender binary. Within the legal framework, blame is bounced around from youth to technology to parents and to schools. This is evident when schools deny students access to sites such as MySpace, Facebook, Instagram, and YouTube, and also when parents sue school districts for failing to appropriately prevent cyberbullying incidents.20

Other legal scholars have proposed amending the Communications Decency Act in order to hold ISPs and website administrators (collectively referred to as Online Service Providers, or OSPs) responsible for facilitating bullying via their services and platforms. King (2010) suggests creating a notification system in which OSPs would be required to remove defamatory content. The process King proposes would function similarly to the “safe harbor” provision of the Digital Millennium Copyright Act (DMCA), which holds service providers liable for material that infringes copyright only if they have been made aware of the material. King argues that OSPs need “a legal incentive to combat cyberbullying that occurs by means of their services.” Since the publication of King’s article, many social media platforms have implemented anti-harassment policies (as part of their Community Standards and Terms of Service agreements) and have enabled users to report violations of those policies (Rubin, Sawyer, and Taye 2015). Whether to remove the content or ban the user remains at each platform’s discretion, but there are nonetheless mechanisms in place for reporting inappropriate content and users in cases of harassment and bullying. Such forms of regulation are not the result of legal liability so much as they are industry self-regulation driven by economic incentives. If users regularly have negative encounters on a particular platform, or if enough parents and adults hear about negative encounters, users are likely to leave that site altogether, which in turn has a negative economic impact on it. As van Dijck (2013) argues, social media platforms have an economic incentive to enhance (safe) sociality. Social media platforms’ anti-harassment policies are an example of how social norms and the market often work together to incentivize industry self-regulation as opposed to heavy-handed legal action from the state.

This brings me to my final point about the legal regulation of cyberbullying: the criminalization of youth behavior addresses the symptoms, but not the causes, of bullying. Rarely do the policies address mental health or the social conditions of the school climate that perpetuate power imbalances, social hierarchies, intolerance, and bullying in the first place. One policy that stands out as an exception is the 2009 Student Internet Safety Bill.21 The bill, proposed by Representative Adam Putnam (R-Florida), would have allowed school districts to use federal funds to “educate their students about appropriate online behavior, including interacting with individuals on social networking Web sites and in chat rooms. They could also use the funds to protect students against online predators, cyberbullying, or unwanted exposure to inappropriate materials, or promote involvement by parents in the use of the Internet by their children” (source: Representative Cathy McMorris Rodgers’ testimony before the House of Representatives, retrieved from Congressional Record for June 15, 2009). The proposed bill took an educational approach and offered funding for education. One criticism of many state bullying laws is that they require schools to create policies and curricula but do little in the way of funding the necessary research, policy development, and curriculum design (Sacco et al. 2012); the burden rests on already strained school districts. Unfortunately, the Student Internet Safety Bill died in the Senate after being passed unanimously by the House.

Problematically absent from most discussions of cyberbullying are larger discourses of social hierarchies, mental health, race, gender, class, and sexuality. Discourses organize the social world in such a way as to make certain aspects of problems seem relevant: our focus is frequently on what is happening, who it is happening to, and how we can punish it, but rarely do we overtly address why it is happening in the first place, or why some targets of bullying are less equipped to cope or to seek help. Over the past eight years, conversations about cyberbullying have productively moved away from overly blaming technology (although technology remains heavily regulated and banned in many schools). Technology exacerbates the problem and presents new challenges, but we have come to a better understanding of the ways cyberbullying is an old problem with a new and more persistent, instantaneous, and intensified face. What is seldom discussed within conversations about regulation is the overtly homophobic, sexist, and ableist nature of the bullying that is so common among reported cyberbullying incidents. We know that young LGBTQ people are significantly more likely to be bullied and harassed than their heterosexual and cisgender peers and are four times as likely to commit suicide (Bullying and LGBT Youth 2009), yet these variations are not often accounted for in news reports addressing bullying. In fact, Hasinoff (2015, p. 8) points out that media repeatedly overlook stories involving people of color and queer youth when they are victims of crime; instead, such mainstream discourses focus on “the benevolent but misplaced desire to protect the supposedly inherent sexual innocence of white middle-class girls.”

The larger questions of diversity and tolerance (or the lack thereof) remain unaddressed in far too much of the cyberbullying discourse. Perhaps one reason is that addressing the why of bullying draws attention away from the controllable behaviors of young people and shifts attention toward an adult society that teaches and socializes young people to be racist, homophobic, sexist, and ableist in the first place. Additionally, cyberbullying is largely constructed as an individualized problem (often as an individual pathology) rather than as symptomatic of larger social, cultural, and collective issues that extend beyond individuals. Cyberbullying is constructed as a problem of the young (never mind that Megan Meier’s bully was an adult); however, a 2014 Pew study found that 40 percent of adult Internet users have experienced some form of online harassment (Duggan 2014). The problem is significantly greater for women, who disproportionately experience gender-based online harassment (Chemaly 2014; Duggan 2014), which is becoming an “established norm” (Hunt 2016). These numbers are strikingly similar to the rates at which young people report being bullied online. Notably, Pew used the word “harassment” in its survey of adults. Although Pew may have done so because of the legal definition of harassment, the choice also highlights the juvenile connotations of the word “bully,” a word that further signifies a “youth” problem.

As long as cyberbullying remains an individualized “youth problem,” it can be subjected to collective control and management, and young people can be simultaneously blamed and protected within legal discourse. But the moment the problem becomes a larger societal problem, the blame must be (at least partially) placed on adults’ racist, sexist, ableist, and homophobic attitudes—attitudes that cannot be controlled through criminalization. Young people have become scapegoats onto whom larger social anxieties are displaced and whose behaviors society places under control and surveillance. Cyberbullying discourses reify harm-driven expectations of youth and technology and distract us from larger racist, sexist, and homophobic discourses that cannot be so easily disciplined.

The Problem with Sexting

Sexting—broadly defined as digitally producing, distributing, and/or consuming sexually suggestive, nude, or explicit images of oneself or one’s peers—is an appropriate conclusion to a discussion of panics and policies.22 Not only is sexting the subject of the most recent panic to gain national attention; it combines all three aforementioned anxieties: it incites fears of pornography (particularly involving underage female teens), of predators (who coerce young people into producing images), and of bullying (in which peers distribute images without permission in order to shame or harass an individual). As with the previously discussed panics, there are valid risks and legitimate harms that can accompany sexting. There are incidents in which young people are coerced into producing sexual images against their will and are blackmailed, harassed, or abused. These instances are never justifiable; they often constitute physical, emotional, and/or sexual abuse as defined by the law and breach psychological and emotional understandings of autonomy. However, the purpose of this section is to consider the practices of older teens who willingly produce and share sexually suggestive and explicit images with a (would-be) romantic partner. The former examples are inexcusable and often criminal, but the latter warrant a deeper understanding of youth agency, sexuality, privacy, and consent.

As with the Communications Decency Act, the porn panic, the Deleting Online Predators Act, and the predator panic, we are again witnessing data being used as scare tactics. Depending on where you look, some reports claim that 20 percent of teens have sent or received a sext (Knorr 2010). However, other studies have found that a mere 7 percent of teens have sent or received a sexually explicit photo and that only 1 percent of those potentially violated child pornography laws (Mitchell, Finkelhor, Jones, and Wolak 2012). These numbers are a far cry from the “sexting ring scandals” the media present as commonplace and that have contributed to a moral panic (Bryner 2012; Fields 2014; Rosin 2014; Searcey 2009). Contrary to the picture the media present, the majority of teens are not producing and sharing random sexual images, but are doing so in the context of trusting relationships (Hasinoff 2015; Mitchell, Finkelhor, Jones, and Wolak 2012). The discrepancy between reality and panic is attributable in part to vague definitions of what constitutes sexting, discrepancies between adult and teen norms, and teens’ reluctance to disclose private sexual practices to adult researchers.

In the second of the epigraphs at the beginning of this chapter, Major Donald Lowe, the investigator of the Louisa County sexting scandal,23 describes teen girls as culprits who had “victimized themselves” via sexting. The public distribution (via Instagram) of sexts from teens at the school attracted national attention with the headline “Police bust Virginia sexting ring involving more than 100 teens” (Fields 2014). The headline presented the story as though the teens had been involved in some sort of organized crime ring. The girls were shamed through public reactions, including online comments featuring gendered epithets such as “slut,” “ho,” and “tramp.” Other reactions called for legal prosecution; it was suggested that the teens should be charged with the production, possession, and distribution of child pornography. The quotation from Major Lowe draws attention to our need to rethink discourses of youth when laws that are intended to protect minors are also used to prosecute and harm them. With sexting there is not always a clear victim or offender, and in many cases the “victims” may not identify as such if their practices were consensual and deliberate and the images were not circulated outside of the intended context.

Laws pertaining to child pornography are intended to protect young people from sexual exploitation. However, as we have seen with many media stories about sexting—and as teens themselves report—sexting can be a part of a consensual and deliberate sexual practice among peers. It is a way for some teens to explore their sexuality, arouse interest and desire, flirt, and express intimacy (Hasinoff 2015). It is inherently problematic that most states acknowledge older teens’ right to make decisions about engaging in sexual activities24 yet simultaneously condemn youth for recording those very same sexual practices they may legally engage in. Certainly it can be a challenge to legally specify intent (Sacco 2012), but laws that automatically criminalize teen sexting are derived from harm-driven expectations—that is, from the assumption that sexting innately and inevitably results in harm. Criminalizing sexting can actually do more harm than good: instead of protecting young people, which is the purpose of the law, criminalization labels a minor as a sex offender. I agree with Hasinoff’s argument that “ensuring that adolescents have the right to sext is the most effective way to protect them from these kinds of unfair prosecutions” (2012, p. 161). The legal right to sext within consensual relationships and within particular circumstances grants teens protection from unnecessary criminalization.

I struggle with restrictive and all-encompassing policies that monolithically construct all minors, from toddlers to high school students, as a singular legal category, ignoring developmental, emotional, and cultural differentiations. Policies that aim to restrict minors’ digital access (whether to pornography or to sexting) in the name of preventing harm must rely on a stereotypical image of the innocent “child”—an image we can normatively agree deserves protection. However, such an image erases the agentive and coming-of-age teenager from our collective understanding of minors. Harm-driven expectations problematically conflate the legal status of a minor with the fluid cultural and discursive constructions of “child,” “youth,” and “adult.” Consequently, such a monolithic approach falsely constructs a uniform understanding of “risk and harm” and of “child and youth.” These conflated discursive constructions are produced by and reflective of harm-driven expectations of technology and rely on a reification of (white middle-class) youth as inherently innocent.

With sexting, the possibility that a minor is both victim and offender disrupts our discursive understandings of youth because young people’s exploration of sexuality cannot be encapsulated by a child/adult or innocence/knowledge binary. Fleur Gabriel (2013, p. 105) writes: “Popular discourses on sexualisation, however, rely more on traditional and Romantic assumptions about childhood and youth in responding to sexualisation debates, seeing them as naturally innocent and therefore rightly lacking sexual knowledge.” Policies that label all sexting practices as deviant, harmful, and pornographic leave no space for the deliberate and consensual ways young people explore their sexuality as part of a healthy developmental process. Gabriel (ibid., p. 106) argues that coming-of-age narratives are problematic discursive constructions of youth because they rely on false binaries between innocence/knowledge and child/adult: “Young people are called to ‘come of age,’ yet I argue that this concept of youth is grounded in a contradictory logic that produces conflicting aims: a desire to preserve the innocence of youth and a simultaneous expectation that they ‘grow up.’” If childhood is about the preservation of innocence (or the lack of knowledge and experience), then any exposure to knowledge—or, in this case, any expression of sexuality that transcends legal definitions of minors—is considered an assault on said innocence. Youth is a transition from childhood to adulthood, and that transition involves a loss of innocence. As a society we normatively agree that children need to “grow up,” yet there is much debate, anxiety, and concern about how and when it is appropriate for youth to do so. “The discourse of ‘coming of age,’” Gabriel further explains (p. 110), “describes something that is supposed to happen given the values and structures of modern society, but which at the same time is prevented from happening by those very structures.” Growing up inevitably leads to a loss of innocence, or rather to the acquisition of knowledge and experience, including explorations of one’s own sexuality. This framing de-sensationalizes sexting and instead considers it one possible aspect of healthy sexual exploration.

Debates over regulating or prohibiting all sexting practices recall Judge Lowell Reed’s remarks about pornography—should we outright deny young people rights they will later inherit as adults? There are major distinctions between (a) adults coercing young people into sexual activities, or adults consuming images of minors outside of the context in which the images were intended to be shared, and (b) young people willingly producing and sharing images with peers, particularly within the context of a romantic relationship. Hasinoff (2015, p. 140) argues that “erasing consent is particularly problematic when legal and school officials completely ignore malicious behavior and choose to instead punish everyone involved [in sexting] equally.” She argues, and I concur, that we must move toward an explicit model of consent for everyone: “scholars, policymakers, technology developers, and users alike should adopt an explicit consent standard for the production, distribution, or possession of private media and information [including sexting]” (ibid., p. 139). This model also deviates from harm-driven expectations by considering the intent, consent, and agency of teens’ practices, rather than presuming an inevitable outcome of harm.

Peer Culture and Social Norms

As with cyberbullying, what we need is an evolution of norms. Laws are important and have a role in shaping practices, but they are insufficient in and of themselves. An analogy to drunk driving is apt. We certainly need laws that regulate drunk driving; however, by nature such laws can only ever be reactive—that is, they punish only after a person drives drunk. Social norms play an important proactive and preventative role in changing attitudes about drunk driving. Two campaigns in particular have transformed cultural attitudes: the 1988 “Designate a driver” campaign (Harvard Alcohol Project 1988) and the Ad Council’s 1983 “Friends don’t let friends drive drunk” campaign (Drunk Driving Prevention 1983). The former offers something proactive for people to do; it shifts from a restrictive approach (i.e., don’t drive drunk) to an affirmative tactic (i.e., designate someone to drive). The latter encourages a culture of peer accountability that extends beyond the realm of the law to change attitudes about the acceptability of drunk driving. The message “Friends don’t let friends drive drunk” was frequently incorporated into popular television shows such as Cheers, L.A. Law, and The Cosby Show (Winsten 2010). Incorporating such messages into pop culture is a powerful strategy for changing norms and attitudes. These campaigns did not rely on scare tactics (graphic images, harrowing statistics, and so on), but rather shaped new norms about drinking responsibly; both are credited with changing cultural attitudes about drunk driving in the United States and with contributing to a decline in alcohol-related accidents (Harvard Alcohol Project 1988; Winsten 2010). Both rely on social norms, rather than the law, as a mode of regulation. Similarly, we should think about how schools, parents, media, and young people can work together to shift cultural norms and attitudes about sexting, privacy, and consent. Importantly, these strategies must move past archaic gendered stereotypes that shame teenage girls for their sexuality and instead contextualize sexting within larger understandings and norms of consent, agency, privacy, and sexual education.

Conclusion: Implications of Fear-Based Regulation

This chapter has demonstrated how discourses of risk—often fueled by fallacious and exaggerated data—work alongside harm-driven expectations to incite anxiety and justify control. No matter how misleading or overblown the claims about porn, predators, cyberbullying, and sexting are, harm-driven expectations have power. As Stuart Hall aptly articulates (1997, p. 49), “Knowledge linked to power, not only assumes the authority of ‘the truth’ but has the power to make itself true. All knowledge, once applied in the real world, has real effects, and in that sense at least, ‘becomes true.’ Knowledge, once used to regulate the conduct of others, entails constraint, regulation and the disciplining of practices.” It does not matter whether there is conclusive evidence that the Internet presents an increased risk of harm, because the notion of adolescents at risk and the construction of the Internet as a dangerous space have already been “made true” via discourses of “knowledge.” Moral panics are vehicles for harm-driven expectations; they work to produce and mobilize particular discourses of risk and to distract from other risks, harms, and concerns.

This chapter has demonstrated at least some of the implications of constructing the Internet as dangerous and youth at risk—constructions that are used to justify policies of panic. These overly restrictive policies are derived from expectations of harm and ignore the positive opportunities associated with learning to navigate risks. Through an analysis of policies aimed at regulating young people’s online participation over the past twenty years, I have argued that risk discourse protects against some risks at the expense of other competing values and opportunities, privileges particular youth populations, and fails to prepare youth to safely navigate risk and positive opportunities.

The reliance on statistics and “expert” opinions brings certain risks to the forefront of public attention and functions as truth, even when the information is misleading or inaccurate. In and of itself this is not particularly harmful; however, a focus on sensationalized harms diverts attention, research, and resources away from more serious threats to the safety and well-being of youth. Research consistently demonstrates that we cannot blame the Internet for most of the crimes committed against youth, and there is even evidence to suggest that technology makes young people safer. Yet harm-driven expectations provide justification for restrictive policies that exert control and surveillance over young people’s practices, speech, and movements, all the while ignoring broader contextual variables that lead to harm. Discourses of risk—even in the absence of demonstrable harm—are used to subordinate other rights, including freedom of speech, access to educational content, and a recognition and validation of young people’s emerging sexuality, autonomy, and agency. The old adage “safety first” circulates as a normative value that is never supposed to be questioned. Safety is important, but notions of risk must always be questioned. We must never fail to recognize that risks do not objectively exist “out there,” but are always socially produced through mechanisms of identification, categorization, and propagation by institutions of power. We ought to strive for policies that minimize harms while protecting young people’s legal rights, values, and agency.

A second takeaway from this chapter is that policies of panic that ban technology are unlikely to actually protect youth from harm; instead their function is to reify boundaries between child and adult as a way to protect perceptions of childhood innocence. As was noted, conversations that demand public attention are often dictated by the concerns of the privileged. The law professor Mary Anne Franks (2015) rightly points out that recent concerns about online privacy draw attention to the fact that the poor, people of color, and criminals have historically been subjected to privacy violations daily, and that violations of their privacy have been largely ignored.25 Franks argues that if we as a society really cared about privacy violations—as we claim to every time we feel a social media platform has invaded our privacy—then we would have had these conversations about privacy much sooner. She makes a convincing case that members of the privileged, predominantly white middle class do not actually care about privacy as a universally protected right until their own comforts are invaded. In a similar way, panics and restrictive policies about pornography, sexually explicit content, bullying, and sexting also reflect a privileged understanding of risk and are most concerned with protecting privileged populations. Particular marginalized youth populations have always been at risk of violence, sexual exploitation, and predation, yet their stories garner little attention from media or policy, and they struggle to receive the resources they need to alleviate these risks. These same populations face additional risks of hunger, poverty, and incarceration, but their stories—and the proposed solutions—go largely ignored. When their stories are told, it is often through a lens that constructs the victims themselves as a social problem, rather than indicting the systemic injustices or the perpetrators. Thus we come to expect that such youth must be controlled, surveilled, and contained rather than protected.

The panics discussed in this chapter demonstrate that risks become visible only when they threaten otherwise protected and privileged young people who do not fit the stereotypical image of “at-risk” populations. When privileged young people are perceived to be threatened, their stories gain attention and concern. This is evident when parents and teachers remark on what a “good kid” a victim or an offender is, or say they are “surprised this could happen to their child” (Thiel-Stern 2014). News coverage emphasized how Megan Meier’s parents did “everything right.” Such rhetoric is largely absent when “at-risk” youth are subjected to harm; instead they are likely to be blamed or held responsible for any harm they encounter. The policies that have been explored in this chapter demonstrate how online risk and harm have been shaped by middle-class understandings of protection and innocence and how technology and youth become discursive sites for governmental intervention and control.

Both of these implications—drawing attention to sensationalized harms and protecting childhood innocence—lead to the third takeaway from this chapter’s harm-driven policy analysis: the failure to equip young people with the resources and education to safely navigate risk. There is inherent risk in everything we do. Banning social media will not eliminate the risk of exposure to unwanted pornography, sexual predation, or peer aggression. If our goal is to eliminate risk, we will fail every single time. And even if we could craft a policy that eliminated all these risks by denying young people access to particular content or websites, young people would still lose these regulatory and restrictive protections when they turn 18 and gain the constitutional rights of adults. And how then could we expect them to be prepared for the inevitable risks they will eventually face? Youth is a time of learning and preparing for adulthood, and that means helping young people identify and navigate risks, not avoid them.

As an alternative approach, opportunity-driven expectations recognize our responsibility as a society to help young people identify and assess risk. Regulations that move beyond minimizing risk to balancing and expanding opportunities will help young people make decisions about which risks are potentially beneficial and worthwhile and which are not. As will be discussed in the following chapters, an educational approach recognizes and values young people’s desires and experiences by building trusting relationships with adults who, rather than looking over their shoulders, “have their back” (to paraphrase Jenkins 2007). This approach validates the experience and expertise of educators and school districts to discern the appropriate measure of guidance for their students. Opportunity-driven expectations do not construct a monolithic view of youth, but instead account for variations across communities, developmental stages, and experiences. They balance risk by crafting regulations that simultaneously expand opportunities for positive experiences and minimize exposure to harm. Safety should not be polarized as the opposite of risk; rather, a discourse of safety must strive to separate risk from harm and to help young people learn to navigate positive and negative opportunities by respecting their experiences, their values, and their rights.

The rest of the book examines how discursive understandings of risk and harm-driven expectations shape local policies and practices. These prevailing fears—porn, predators, and peer interactions—dominate public imagination, conversation, resources, and policies. But what effect does this narrative have on the lived experiences of actual young people? Such a question is particularly challenging to answer when we remember that many policies are predicated on insubstantial claims and assumptions of harm in the first place. Equally important, what other risks and harms are subsumed by these visible and attention-demanding discourses? To answer these questions, I explore the unintentional consequences of the Children’s Internet Protection Act at Freeway High in order to examine how the policy actually exacerbates some risks. I then address the second question: What else might we be concerned with—that is, what risks are rendered invisible as a result of these media-fueled panics? It is far more attention-grabbing to discuss and worry about porn, predators, bullies, and sexting than it is to concern ourselves with the intensification of social inequities. But we must address those inequities if we want to create a safe and equitable digital world for all young people—a world in which risks are minimized regardless of privilege and opportunities are maximized across all populations.

Notes