2

Forgetting Made Impossible

Since the conversation about the right to be forgotten began in Europe, and particularly after Google v. AEPD, many observers in the U.S. have dismissed digital redemption as entirely unworkable and ridiculous, while others have begun to consider the idea more seriously. Although many Americans may want some kind of structure for digital redemption, the commentary surrounding it tends to stop the conversation before it starts, in large part because of the weight placed on the freedom of expression in the U.S. When Wolfgang Werlé and his lawyers were contacting media outlets to have his name dissociated from his crime, they attempted to do the same for the English-language Wikipedia entry by sending a cease-and-desist letter to the Wikimedia Foundation, explaining, “[Werlé’s] rehabilitation and his future life outside the prison system is severely impacted by your unwillingness to anonymize any articles dealing with the murder.”1 The Electronic Frontier Foundation rebutted the argument: “At stake is the integrity of history itself.”2 In the U.S., solutions in the form of nonlegal governance are popular approaches to the harms caused by increased discoverability, but these silver bullets are no more successful than other simplistic responses.

Though a number of state constitutions expressly provide for a right to information privacy, the U.S. Constitution does not; as we have seen, this is not actually unusual. The right to privacy crafted by the Supreme Court in 1965 in Griswold v. Connecticut, which stated that “specific guarantees in the Bill of Rights have penumbras, formed by emanations from those guarantees that help give them life and substance. . . . Various guarantees create zones of privacy,” serves only to protect against an overbearing and too-powerful government, offering no protection against private intrusion.3 Any protection against private entities, therefore, is derived from common or statutory law. “Dignity” has not made its way into either. Although the Supreme Court used the term “human dignity” or its equivalent in 187 opinions between 1925 and 1982 and 91 times between 1980 and 2000, the references have been “inconsistent and haphazard,” and the cases involve bodily searches, treatment of prisoners, marriage, and abortion.4 In the context of information privacy, Samuel Warren and Louis Brandeis, in their monumental 1890 law review article “The Right to Privacy,” which launched the expansion of common law privacy actions in the U.S., envisioned a right to privacy that was “a part of a more general right to the immunity of the person, the right to one’s personality.”5 In 1960, William Prosser’s classic article “Privacy” analyzed and organized hundreds of privacy cases and found four distinct torts,6 which were eagerly and obdurately put into practice, but in doing so, Prosser “stripped out any high level concepts of why privacy should be protected.”7 This operational approach was echoed by Chief Judge Frank Easterbrook of the Seventh Circuit, who stated frankly, “When we observe that the Constitution . . . stands for ‘human dignity’ but not rules, we have destroyed the basis for judicial review.”8

The conflict between expression and privacy is inevitably lopsided, as it pits constitutional law against common or statutory law. The values of privacy and the First Amendment have been “balanced,” and lines have been drawn between negligence and actual malice,9 public figures and private citizens,10 and public concerns and private interests11 to guide lower courts. The attempts by judges, legislators, and advocates to carve out some space for privacy concerns in light of the reverence for expression, explicitly protected in the Constitution, have been woefully unsuccessful.

There are four key legal mechanisms utilized by those who want to control or limit the flow of information about them once it is released: intellectual property restrictions; contractual obligations; defamation; and the privacy torts: (1) intrusion upon seclusion, (2) public disclosure of private facts, (3) misappropriation, and (4) false light. Copyright restrictions are very useful for preventing the replication of content created by the information subject, but they reach only the creative aspects of that work, not information about the subject created by another person. European countries offer similar protections but under the umbrella of certain rights, as opposed to four separate civil claims. While avenues for protecting privacy are available in both regions, we will see that in the U.S. these options have been significantly weakened, offering little recourse once information is considered public.

To say that Americans do not remove information from the Internet would be disingenuous. Copyrighted material is regularly removed at the request of the copyright holder. Section 512 of the DMCA grants safe harbor from secondary copyright liability (i.e., responsibility for the copyright infringement of an end user) to online service providers (OSPs) that remove content in response to a takedown notice from the copyright holder (17 U.S.C. § 512). This can be the removal of an image, song, or video or of a link that simply directs one to the complained-of content. There is no judicial oversight involved in this initial takedown-for-immunity arrangement. Use of copyrighted material may be permitted in situations that qualify as fair use under section 107 (17 U.S.C. § 107). These exceptions include criticism, comment, news reporting, teaching, scholarship, and research and are subject to a balancing test applied on a fact-specific, case-by-case basis. If users believe their use of content falls within a fair-use exception, they can file a counternotice, but the OSP must keep the content offline for at least ten business days after receiving it (17 U.S.C. § 512(g)). Wendy Seltzer, policy counsel to the World Wide Web Consortium (W3C), explains the way a takedown system can significantly chill speech: “The frequency of error and its bias against speech represents a structural problem with secondary liability and the DMCA: the DMCA makes it too easy for inappropriate claims of copyright to produce takedown of speech.”12 It is too easy for two main reasons: (1) even good-faith claims involve legal uncertainty, and (2) speedy removal creates an incentive to file dubious claims.13 The chilling effects of the DMCA system have not resulted in further alterations to copyright law. A data subject who owns a copyright in the information he or she is trying to remove will be able to address the copyrighted material itself but not commentary surrounding the material or use that falls into one of the exceptions.
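
The mechanics are easy to sketch. The following Python fragment is a minimal model of the section 512 sequence described above, not a compliance implementation: all names are hypothetical, and the business-day count ignores holidays.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Minimal, hypothetical sketch of the section 512 notice-and-takedown
# timeline. Real compliance involves far more (agent designation,
# notice formalities, repeat-infringer policies).

MIN_BUSINESS_DAYS_OFFLINE = 10  # 17 U.S.C. § 512(g)(2)(C): restore no less
                                # than 10, no more than 14, business days
                                # after receiving a counternotice

@dataclass
class HostedItem:
    url: str
    online: bool = True
    counternotice_on: date | None = None

def business_days_between(start: date, end: date) -> int:
    """Count Monday-Friday days between two dates (ignores holidays)."""
    days, d = 0, start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:
            days += 1
    return days

def receive_takedown(item: HostedItem) -> None:
    """The OSP removes content 'expeditiously' to keep its safe harbor;
    no judge reviews this step."""
    item.online = False

def receive_counternotice(item: HostedItem, today: date) -> None:
    """The poster asserts, e.g., fair use; the restoration clock starts."""
    item.counternotice_on = today

def maybe_restore(item: HostedItem, today: date, suit_filed: bool) -> None:
    """Restore after the statutory window unless the claimant has sued."""
    if item.counternotice_on is None or suit_filed:
        return
    if business_days_between(item.counternotice_on, today) >= MIN_BUSINESS_DAYS_OFFLINE:
        item.online = True
```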

Often the disclosure of information online is covered by the terms of service of the site, but these contractual obligations restrain only those who are parties to the contract, and much of the information disclosed about an individual originates outside this relationship. So, while information flow can be controlled through legal mechanisms like intellectual property laws and terms-of-service agreements, these options leave holes through which personal information from the past easily slips, in which case filing a tort claim may be the only option. Even then, a closer look at the available tort claims reveals that suing over truthful information is not likely to be successful in the United States.

The privacy torts are not entirely relevant to the goal of digital reinvention, as they address false impressions or the improper disclosure of information, whereas digital reinvention relates to the invasion that results from continued access to personal information. But the privacy torts are the options Americans have when they seek protection against the unwanted dissemination of information by others, and they have been significantly restricted to protect free speech. Intrusion upon seclusion protects one from the intentional invasion of solitude or seclusion through physical or nonphysical means like eavesdropping, peeping through windows, or opening another person’s mail. The public disclosure of private facts is a cause of action against one who disseminates generally unknown private information, even if it is true. One may also be sued for misappropriation, the use of another person’s name, likeness, or other personal attributes without permission for exploitative purposes. In states that recognize a claim for false light, it can be brought against a defendant who publishes information that places the subject in a highly offensive light, though the claim addresses false impressions as opposed to false statements. For one of these claims to succeed in preventing, hindering, or punishing the dissemination of or access to information in the U.S., it is the interest of the public, not the interest of the individual, that matters.

Public interest is built into information disputes through the protection of “newsworthy” content or content that “the public has a proper interest in learning about.”14 Many victories over privacy have been won with a blow from First Amendment newsworthiness. But newsworthiness is not impenetrable and has not always trumped privacy claims. If we look back far enough, we can find cases wherein the public’s interest did not necessarily override an individual’s privacy interests. For example, in Melvin v. Reid (1931), the court found that the movie depiction of a former prostitute’s real-life involvement in a murder trial impinged on the woman’s positive reinvention, an interest that outweighed the public’s interest in her past: “One of the major objectives of society as it is now constituted . . . is the rehabilitation of the fallen and reformation of the criminal. . . . We believe that the publication by respondents of the unsavory incidents of the past life of appellant after she had reformed, coupled with her true name, was not justified by any standard of morals or ethics known to us and was a direct invasion of her inalienable right.”15 After Melvin, the rehabilitative function of privacy began to dwindle, and the definition of newsworthiness began to grow. In 1971, in Briscoe v. Reader’s Digest Assoc., California’s highest court recognized a common law cause of action for invasion of privacy arising from a magazine’s publication of circumstances that occurred a decade earlier, holding that the magazine could be liable if it had published the material with reckless disregard for the fact that a reasonable person would find it highly offensive. The plaintiff had been convicted of hijacking trucks in the 1950s. Upon serving his sentence, he reformed his life, but his past was revealed to the world when Reader’s Digest published a story on hijacking and identified the plaintiff in relation to the crime. The court found this identification potentially actionable: “One of the premises of the rehabilitative process is that the rehabilitated offender can rejoin the great bulk of the community from which he has been ostracized for his anti-social acts. . . . In return for becoming a ‘new man’ he is allowed to melt into the shadows of obscurity. . . . Human forgetfulness over time puts today’s ‘hot’ news in tomorrow’s dusty archives. In a nation of 200 million people there is ample opportunity for all but the most infamous to begin a new life.”16 Even so, the plaintiff ultimately lost. When the case went back to the trial court, it was removed to the federal Central District of California, where the story was found newsworthy and the magazine’s motion for summary judgment was granted in an unpublished opinion.17

Time does not have the same impact on the public interest in the U.S. as it does in some European countries. More specifically, the passage of time does not generally strip a once-famous person, a former public official, or someone thrust into a public controversy of public-figure status. This becomes clear when we look at defamation standards. Defamation claims create a cause of action to protect one’s reputation from false statements of verifiable fact, as long as the individual is not a public figure or considered one for the limited purpose of an event that is of public interest. Truth is an absolute defense to a defamation claim, so defamation is not an available avenue for pursuing digital redemption of properly disclosed information; but because public-figure plaintiffs suing for defamation must meet a higher standard (“actual malice”—that the defendant published with either knowledge of falsity or in reckless disregard for the truth),18 case law on plaintiffs who are no longer in the spotlight is relevant. Anita Wood Brewer, a television performer and recording artist and Elvis Presley’s “no. 1 girl” in the late 1950s, had retired to private, married life when an article in 1972 mistakenly reported a romantic reunion with Presley. A unanimous Fifth Circuit panel held that Brewer remained a public figure, at least with regard to a report revolving around the source of her fame.19 In Street v. National Broadcasting Co., the Sixth Circuit held that Victoria Price Street remained a public figure more than forty years after her involvement as a victim in the famous Scottsboro Boys case, the controversial trial of nine young black men who were convicted and sentenced to death for raping two white women on a freight train in 1931. Street was portrayed as a prostitute, a perjurer, and dead in the NBC “docudrama” aired in 1976. Though the trial court found that Street was no longer a public figure,20 the Sixth Circuit held that her public-figure status was not changed by the passage of four decades.21

Courts have maintained this stance in the privacy context. Newsworthiness and public interest are not necessarily, or even likely, diminished by the passage of time. Sidis v. F-R Pub. Corporation (1940) illustrated the beginning of the trend away from Melvin. In it, a child prodigy who had been profiled in 1910 brought a privacy claim when the New Yorker published an intimate “where are they now?” story, with the subtitle “April Fool!,” profiling him as an odd, reclusive adult in 1937. The Second Circuit held that Sidis remained a public figure and explained that it could not confine “the unembroidered dissemination of facts” unless the facts are “so intimate and so unwarranted in view of the victim’s position as to outrage the community’s notion of decency.”22 In Estill v. Hearst, the Seventh Circuit decided a case regarding the publication of an image taken fifteen years earlier depicting then Indiana prosecutor Robert Estill in a friendly pose with John Dillinger in jail. The picture was described as “literally a fatal mistake. Following Dillinger’s epic crashout with a wood-carved gun, Estill lived long enough to be laughed out of office. Then, a broken man, he died.”23 Estill was neither laughed out of office (he had maintained an active political life, held public office, and practiced law for twenty-five years after Dillinger escaped) nor dead, but because the court found that the plaintiff had been a public figure at the time the photo was taken, no invasion occurred a decade and a half later. In Perry v. Columbia Broadcasting Sys., the actor Lincoln Theodore Perry, who went by the stage name Stepin Fetchit, argued that the 1968 CBS series Of Black America “intentionally violated [his] right of privacy and maliciously depicted [him] as a tool of the white man who betrayed the members of his race and earned two million dollars portraying Negroes as inferior human beings.”24 The Seventh Circuit held in 1974 that forty years after Perry filmed his last movie, he remained a public figure.

In the end, most privacy tort claims result in losing plaintiffs and unscathed defendants who were allowed to expose the private idiosyncrasies of the subjects; the facts are rarely offensive enough, and the public interest is easily satisfied. In 1997, holding that an HIV-positive postal worker could not sue a coworker for invasion of privacy after she shared his health information, the Indiana Supreme Court explained, “Perhaps Victorian sensibilities once provided a sound basis of distinction, but our more open and tolerant society has largely outgrown such a justification. In our ‘been there, done that’ age of talk shows, tabloids, and twelve-step programs, public disclosure of private facts are far less likely to cause shock, offense, or emotional distress than at the time Warren and Brandeis wrote their famous article.”25 If society finds disclosure unobjectionable, the dissemination of private facts will not be hindered, whatever the real implications for the individual.

It is difficult to imagine facts more private, morbid, or sensational than those intertwined in rape cases, but even these disclosures are protected as long as the information is obtained legally. In Cox Broadcasting Corp. v. Cohn, the U.S. Supreme Court decided that truthful publication of a rape victim’s name obtained from public records was constitutionally protected.26 A similar set of facts led to the same result in Florida Star v. B.J.F., in which the Court addressed whether information obtained from the public domain and subsequently published by the press created liability under the public disclosure tort, deciding narrowly in favor of the press.27 The “zone of privacy surrounding every individual, a zone within which the State may protect him,”28 recognized by the Court in Cox, has never developed clearly defined boundaries; generally, the right to know has trumped privacy interests.

Deference to journalists to determine what is newsworthy, combined with the assurance that the long tail of the Internet creates an audience for everything, makes newsworthiness a very convoluted standard for the proper and continued dissemination of private information. Of course, today the disclosure of private details that reach the masses can originate from any number of sources, including the individuals themselves. Cringe-worthy personal information may reach the press last (after having been passed from email or smartphones to social media posts to blogs), with the press reporting on how a file has gone viral. U.S. law reinforces the notion that we are at the mercy of the people around us. After all, “exposure of the self to others in varying degrees is a concomitant of life in a civilized community. The risk of this exposure is an essential incident of life in a society which places a primary value on freedom of speech and of press. ‘Freedom of discussion, if it would fulfill its historic function in this nation, must embrace all issues about which information is needed or appropriate to enable the members of society to cope with the exigencies of their period.’” While courts have gone back and forth on elements of privacy tort law, this quote—the second half originally handed down by the Supreme Court in Thornhill v. Alabama in 1940,29 the first half added in Time v. Hill in 1967,30 and the whole used again in 2001 in Bartnicki v. Vopper31—represents a consistent aspect of American legal culture.

The difference between the foregoing information disputes and those related to digital redemption is that in the former cases, there is nothing necessarily illegal or undesirable about the information when it is initially collected or published online. The right to be forgotten addresses information that has become outdated, irrelevant, harmful, or inaccurate and that haunts individuals with undesirable repercussions. An interesting attempt at digital forgetting occurred in the lower state courts in California. In 2006, the Daily Californian published, against the pleas of a student’s parents, a story about his suspension from the Berkeley football team because of his actions at an adult club; a downward spiral ensued, and in 2010 he died. The parents sued after the newspaper refused to remove the article from its website. The article’s perpetual existence had caused a great deal of emotional pain, but the court found that the two-year statute of limitations under the Uniform Single Publication Act begins to run upon first publication. Moreover, the court took issue with the assertion of intentional infliction of emotional distress derived from libel of the memory of a deceased family member, preventing the case from moving forward.32 More of these creative claims should be anticipated.

One such claim was rejected by the Second Circuit on January 28, 2015. The court was asked to interpret Connecticut’s Criminal Records Erasure Statute in relation to a newspaper’s right to distribute content about the original arrest. In August 2010, Lorraine Martin was arrested with her two sons on drug-related charges. After the state of Connecticut decided not to pursue charges against her in January 2012, her criminal case was nolled, meaning it would no longer be prosecuted, and thus she was “deemed to have never been arrested within the meaning of the general statutes with respect to proceedings so erased and may so swear under oath,” in accordance with the statute.33 Martin sued multiple news sources that had published stories about the arrest, arguing that the publications were defamatory after the incident was nolled. The Erasure Statute requires destruction of “all police and court records and records of any state’s attorney pertaining to such charge” but does not impart erasure obligations on anyone else. Martin contended that the continued discoverability of the news story, made possible by the news sites’ hosting of the content, constituted the publication of false information, but the court explained that the statute “does not and cannot undo historical facts or convert once-true facts into falsehoods.”34 Essentially, the court interpreted the Erasure Statute in relation to the First Amendment as allowing for multiple legal truths: Martin may say she was never arrested as a legal truth, and the newspapers may legally publish the fact that Martin was arrested. Because the content accurately portrayed Martin’s arrest, all of her claims failed. Beyond her libel and false-light claims, which failed because the stories contained no falsehoods, the claim of negligent infliction of emotional distress failed because there was no negligence in the publication of truthful newsworthy information, and the appropriation claim failed because the paper did not improperly appropriate her name by reporting criminal proceedings to the public. Similar conclusions have been reached by state courts, including the New Jersey Supreme Court, which explained, “The expungement statute does not transmute a once-true fact into a falsehood. It does not require the excision of records from the historical archives of newspapers or bound volumes of reported decisions or a personal diary. It cannot banish memories. It is not intended to create an Orwellian scheme whereby previously public information—long maintained in official records—now becomes beyond the reach of public discourse on penalty of a defamation action.”35

U.S. cases, with few exceptions, are markedly distinct from Google v. AEPD for two reasons. The first is that most of the U.S. claims have been unsuccessful. The second is that all of the U.S. cases involve removal from the original source of publication. The lack of success is explained by the First Amendment’s interpreted priority in relation to the press, privacy, and reputation. The second distinction is a result of Google’s immunity from liability as a search platform. Section 230 of the Communications Decency Act (CDA) prevents interactive computer service providers (platforms and service providers) from being held liable as publishers of content posted by users of the service. It reads, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). U.S. courts have not been inclined to hold intermediaries accountable for information created by a user and have extended § 230 immunity to a wide range of scenarios, including claims for invasion of privacy, misappropriation, and even negligent age-verification procedures.36 In the U.S., platforms and service providers are considered simply conduits for content provided by someone else, with the exception of copyrighted material, of course.

Data trails that are not publicly disseminated fall outside these legal tools. Priscilla Regan succinctly describes the way data protection works in the U.S.:

Generally it takes an incident to focus attention on the issue of information privacy—and such incidents tend to focus on one type of record system at a time. . . . There is always vocal and well-financed opposition to privacy protections, generally from business and government bureaucrats who do not want to restrict access to personal information. Their opposition is usually quite successful in weakening the proposed privacy protections and in further narrowing the scope of such protections. And after passage opponents are likely to challenge legislation in courts, often on the basis of First Amendment grounds that any information, including that about individuals, should flow freely and without government restriction.37

Detailing the murder of an actress that led to restrictions on the release of vehicle-registration information, the disclosure of a Supreme Court nominee’s movie rentals that led to the Video Privacy Protection Act, and concern about the ease with which health and financial information could be shared that led to legislation, Regan concludes that in the rare instances when self-regulation is not available and legislation actually passes, the individual is handed the burden.38 “In most instances, the costs associated with protecting privacy are shifted to the individual in terms of time to monitor privacy notices and practices and time and often money to pursue redress of grievances, which rarely benefit the consumer in a meaningful way.”39

The dissemination of data collected through sites and services is governed by the terms of service set by the sites and services themselves, enforced both by the Federal Trade Commission (FTC) and by internal policies administered by what Jeffrey Rosen calls “delete squads.”40 These are the teams of people hired by large platforms to interpret and enforce sites’ internal policies on the removal of certain content. The FTC acts under its authority to combat unfair and deceptive trade practices, bringing actions against deceptive digital information operations and guiding corporate policy through workshops and best practices. In the end, there is currently very little legal support for individuals who seek to curate their digital profiles and to delete old information.

Children, however, are a little different. With regard to data-protection regimes, the U.S., unlike other countries, carves out special privacy protections and detailed procedures for children online. The Children’s Online Privacy Protection Act (COPPA) regulates information collected from children, as opposed to what they are exposed to. The FTC is responsible for the enforcement of COPPA, which requires parental authorization before commercial operators may collect personal data from children under thirteen if the site or service targets children or has actual knowledge of preteen users. Most sites that do not intend to be “kid” sites either do not collect birthdates (which would put them on notice of users under thirteen) and inform users in their terms of service that those under thirteen are not allowed, or collect birthdates at registration and refuse to create accounts for children under thirteen who enter their real birthdates. Needless to say, there are a lot of workarounds at registration.41 And the few operators that COPPA does govern must obtain additional consent before disclosing the children’s data they collect.
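
The registration logic just described is simple enough to sketch. The following is a minimal, hypothetical age gate in Python; actual COPPA compliance (verifiable parental consent, retention limits, safe-harbor programs) involves far more.

```python
from datetime import date

# A minimal, hypothetical sketch of the registration age gate used by
# general-audience sites to avoid gaining "actual knowledge" of
# preteen users. The function name is an assumption for illustration.

COPPA_AGE = 13

def can_register(birthdate: date, today: date) -> bool:
    """Reject self-reported birthdates that indicate a user under 13."""
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= COPPA_AGE

# A child entering a real birthdate is turned away; entering a false
# one is the easy workaround the text notes.
print(can_register(date(2005, 6, 1), date(2015, 1, 30)))  # False (age 9)
```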

While COPPA gives children and parents the right to actively participate in the collection, use, and retention of children’s personal data, it does not protect children from themselves. COPPA is intended to protect children from unscrupulous, data-hungry adults operating in an asymmetric information landscape, as described earlier. In fact, COPPA stops protecting children just when they need it most: as teenagers.

The gap left by COPPA, in both age and protection, was filled by a California state law, Privacy Rights for California Minors in the Digital World.42 Signed in September 2013, the law grants minors under eighteen the right to remove information that they themselves post online. There are some important caveats to the law. First, it applies only to content that minors, as registered users of a site or service, have posted themselves. The right does not extend to content posted by another user, posts that have been copied or reposted by a third party, anonymous posts, or content removed from visibility but still stored with the site or service.

The state law does not require an “eraser button,” meaning that this is not a technology-forcing law. Rather, it grants the substantive right to remove content that has (arguably) been disclosed to the public and establishes the procedural requirements needed to effectuate that right. Procedurally it is similar to laws that require information controllers to provide means to correct information in certain settings (included in most policies based on the Fair Information Practice Principles). The law requires sites and services to provide notice of the right and clear instructions for exercising it and to explain that exercising the right does not ensure complete or comprehensive removal of the information.

The right granted to California minors is novel in the U.S. Under only a few circumstances does U.S. law allow truthful information to be retracted from the public domain once it is released (e.g., copyright). The law grants this right only to minors in California but intends to hold any site accessible to people in California responsible for violations. The law took effect in January 2015 and has not yet been challenged in court, so it is still unclear whether the forced removal of content that has been made publicly available violates a First Amendment right to access such information. The juvenile cases that supported the dissemination of B.J.F.’s identity by the Florida Star newspaper involved striking down a pretrial order that enjoined the publication of the name and photograph of a boy in connection with proceedings attended by journalists43 and overturning an indictment of two newspapers for violating a state statute that forbade publishing the names of youths charged with crimes before the juvenile court granted permission.44 In 1962, facts from juvenile court records detailing a boy’s sexual assault of his younger sister were printed in a “daily record” column of a newspaper and were determined to be newsworthy.45 Thus, it is not entirely clear that a California minor has a right to remove a newsworthy comment he or she posts online, particularly one on a traditional newspaper’s story page.

While the disclosure of the identities of juveniles and rape victims is much more easily prevented and punished elsewhere, COPPA remains a unique and pioneering approach to protecting children. The European Union is often considered to have no special protections for minors, but that is not entirely true. The DP Directive may not provide specific treatment for children, but a child’s right to development is protected as a fundamental right—so much so that it is stated in the Universal Declaration of Human Rights (art. 25), the International Covenant on Civil and Political Rights (art. 24), the International Covenant on Economic, Social and Cultural Rights (art. 10), and the EU Charter of Fundamental Rights (art. 24).

Children’s data-privacy rules for the age of consent and additional parental consent differ in somewhat informal ways across Europe. In the UK, the Data Protection Authority’s guidance states that parental consent is generally required to collect data from children under twelve years old. Germany requires parental consent for children under fourteen, and Belgium for those under twelve. In Spain, a royal decree requires verifiable parental consent in order to process data for children under fourteen. French requirements come closest to COPPA: the French data-protection agency requires parental consent to collect data from minors under eighteen, only email addresses and age can be collected for email subscriptions, and it is illegal to use a minor’s image or to transfer a minor’s data to third parties when the data were obtained through games. In Denmark and Sweden, data cannot be collected from children under the age of fifteen as a general rule. None of these countries provides extensive procedures like those outlined in COPPA or an explicit digital right to be forgotten specifically for children.

A few other efforts should be mentioned that pointedly address information with significant psychological effects or social stigma. The first is the anti-cyberbullying movement. Cyberbullying is harassment that uses information technologies to send or post hurtful, hateful, or embarrassing content. The issue has given rise to numerous nonprofit foundations, initiatives, and workshops, as well as laws and policies.46 These proposed and codified laws have limitations and variations because cyberbullying is difficult to define, affects speech, and involves the removal of content.47 Still, this is one type of content, depending on the age and educational affiliation of the subject, that may (possibly) be forgotten.

Although there is no U.S. federal law for cyberbullying specifically, states have taken up the challenging task of drafting laws that address real and lasting harms without inviting government censorship. Currently, forty-one states have online-stalking laws, and thirty-seven have online-harassment laws. The trick is differentiating between harassment and protected speech. For instance, the New York Court of Appeals struck down a county cyberbullying statute that criminalized electronic or mechanical communication intended to “harass, annoy, threaten, . . . or otherwise inflict significant emotional harm on another person.”48 The statute was challenged by a fifteen-year-old boy who had pled guilty after creating a Facebook page that included graphic sexual comments next to photos of classmates at his school. Judge Victoria Graffeo explained, “It appears that the provision would criminalize a broad spectrum of speech outside the popular understanding of cyberbullying, including, for example: an email disclosing private information about a corporation or a telephone conversation meant to annoy an adult.”49 In dissent, Judge Robert Smith stated that he believed the law could stand if applied only to children.50

A second movement is the anti-revenge-porn movement. Revenge porn is the disclosure of sexually explicit images, without the consent of the subject, that causes significant emotional distress. Also called “nonconsensual pornography,” revenge porn is often the product of a vengeful ex-partner or an opportunistic hacker, and most victims suffer the additional humiliation of being identified with the images, leading to on- and offline threats and harassment.51 As of 2015, sixteen states have criminalized revenge porn. A number of scorned lovers have been prosecuted under New Jersey’s revenge-porn statute, which prohibits the distribution of “sexually explicit” images or videos by anyone “knowing that he is not licensed or privileged to do so” and not having the subject’s consent.52 Dharun Ravi, a Rutgers University student who distributed webcam footage of his roommate engaging in sexual activity, was also convicted under the law after his roommate killed himself.53 These laws have been criticized as unnecessary because legal remedies like the privacy torts and copyright, criminal extortion, harassment, and stalking statutes cover the aspects of revenge porn that do not infringe on free speech rights. For instance, the ACLU is challenging the Arizona revenge-porn statute, which makes it a crime to “intentionally disclose, display, distribute, publish, advertise, or offer a photograph” or other image of “another person in a state of nudity or engaged in specific sexual activities” if the person “knows or should have known” that the person depicted has not consented to the disclosure, arguing that it is unconstitutionally overbroad and captures First Amendment–protected images.54

The third is a smaller movement, directed at sites that post mug shots and arrest information about individuals. These mug-shot websites may simply be conveying public information, but when they started charging upward of four hundred dollars to remove pages, state legislators responded.55 A few states have passed laws limiting the commercial use of mug shots. The Oregon law, for instance, gives a site thirty days to take down a mug shot once individuals prove that they were exonerated or that the record was otherwise expunged.56 In light of the Second Circuit’s Martin v. Hearst case, discussed earlier in this chapter, the constitutionality of these statutes is questionable and may hinge on a distinction between the press and other types of sites.

Recognition by some U.S. courts that these narrowly tailored legislative efforts do not violate the First Amendment suggests that when communication is gossipy, vindictive, or coercive (as opposed to published by the press), the public may not have a right to know it. These laws are designed, however, to punish the disclosure of only specific and extreme content: (1) nonconsensual, highly intimate images, (2) psychologically detrimental communication directed at children, and (3) communication that reaches the level of harassment, stalking, or extortion. They do not cover your run-of-the-mill bankruptcy notice in an old newspaper or protected expression that was rightfully disclosed but may no longer be of legitimate interest to the public. The ability of these movements to decrease discoverability of the information hinges on whether disclosure of the content can be punished within the limitations of a given interpretation of the freedom of expression, whether they are properly enforced, and whether removal can be effectuated.

European laws that protect against cyberbullying and revenge porn are similarly situated in national and local harassment, privacy, and defamation laws, but no Europe-wide law exists. Instead, the EU, equally concerned and involved in these movements, has supported research to understand the extent of the cyberbullying problem, has developed guidelines for ISPs to keep children safe online, and has started a #Deletecyberbullying campaign.57 A number of European countries are considering whether they need new laws to deal with cyberbullying given the existing protections provided by other legal mechanisms. In November 2014, Ireland’s Law Reform Commission asked for public comments on an issue paper addressing the subject of cyberbullying.58 Revenge-porn laws were passed in the UK in October 2014, defining the content as “photographs or films which show people engaged in sexual activity or depicted in a sexual way or with their genitals exposed, where what is shown would not usually be seen in public.”59 Each European country has laws for handling criminal records, and unlike the multiple truths allowed by the U.S. Second Circuit, many of these countries prevent references to criminal pasts. In January 2014, a French art dealer demanded that his criminal record be deindexed from his Google search results and received a favorable ruling from a Parisian lower court, which explained that he had a right to protect his reputation under French data-protection laws.60 For the most part, special laws to address the aforementioned movements have been deemed unnecessary by European countries.

For good reason, then, nonlegal options are often put forth in the U.S. as the key to solving the problems of digital memory; after all, law can be a heavy-handed response to a socio-technical problem when compared to technical solutions or the quick action of the market. Values, including forgiveness, may need to be governed and preserved differently in the Digital Age. Lawrence Lessig, whose work on the governance of technology is considered foundational, explains that online, much like in the offline world, computer code, norms, markets, and law operate together to govern values and behaviors.61 Computer code, the pressure of adhering to social norms, and the invisible hand of market-based solutions may address the problems that the right to be forgotten seeks to address, but none are the silver bullet their advocates purport them to be.

Lessig succinctly argues in his book Code that the future of cyberspace will be shaped by the actions we take to define and construct it and that action will need to be taken to safeguard cherished values. Lessig describes the four methods of regulation as the market forces that invisibly meet the demands of society; norms and other social regulators that pressure individuals to behave appropriately; technical design that affects behavior; and government regulation. Although law is the most effective form of regulating behavior, according to Lessig, all of the first three methods are regularly suggested as more suitable responses to persistent personal information. But none will save the day on its own.

Markets are looked to more readily to solve problems in the U.S. than in the EU.62 And in fact, the market has answered the call regarding tarnished reputations. Companies like Reputation.com, TrueRep.com, and IntegrityDefender.com offer services to repair your reputation and hide your personal information. On the “Suppress Negative Content Online” page of Reputation.com, the site explains, “You’re being judged on the Internet,” “The Internet never forgets,” “The truth doesn’t matter,” and you are “guilty by association.”63 These statements may seem dramatic, but to those who live with a nasty link on the first page of a Google search for their name, they probably feel very accurate. The success of these businesses suggests both that there is a market of users with injured online reputations who are seeking redress and that search results are already gamed. Only Americans with means can remove themselves from the record of the Internet; those who are less powerful can only hope for an opportunity to explain their digital dirty laundry. While it may be appealing to demonize the “privacy for a price” approach in favor of one based on privacy for all, these services provide privacy from past negative information, a very complicated task, starting at the low price of fifteen dollars per month.

Markets have other limitations for addressing a society that is incapable of forgetting. Certainly, reputation systems64 like those for sellers on eBay and Amazon allow for reputational cure: performing a large number of trust-affirming transactions makes a poor review less representative of the seller’s commercial conduct. The equivalent solution for personal reputation is to try to push negative information off the first few pages of search results by bombarding the Internet with positive content. Those who suffer from negative online content can and do hire reputation-management companies, which present positive information about the client rather than the confidences and character testimony of others.65 Relying on the market, however, risks endangering users because it allows subjects whose information is socially vital (e.g., politicians and professional service providers) to hide it.66 By allowing the market to effectively suppress content to the last few pages of a search result, censorship is administered without any oversight or safeguards. This type of manipulation may also further victimize those who have been harmed by a subject, making them feel as though the subject suffered no social ramifications because he or she could pay to avoid them. Finally, the market ignores privacy as a right, providing forgetting services only to those who can afford them and those who are comfortable with a large online presence.

This form of intervention may promote the goals of reputation rehabilitation, but it neither serves the Internet as a socially valuable information source nor provides privacy. The easiest way to make negative information less accessible is to bury it under highly ranked positive information—and lots of it. Though a reputation service can add content that provides context, that content is not necessarily more accurate, relevant, or valuable. Additionally, this solution does not offer real seclusion, the feeling of being left alone, or any other form of privacy related to autonomy. If users are interested in being left alone, paying for a service that will plaster information about them all over the Internet does not support their goal of regaining a private existence. If users seek to control information communicated about them, the pressure to fill the web with positive information in order to place a piece of information back in a sphere of privacy is more like strong-arming users than empowering them with privacy.

Norms have been suggested as an answer to the problems of digital memory and a way to preserve moral dignity in cyberspace.67 Indeed, this is the most common response I hear from college students. The argument is that society will adjust as they—the understanding, empathetic, tech-savvy generation—take on positions of power. It is a nice idea. Julian Togelius, a professor and artificial-intelligence researcher in Copenhagen, argues that “we have to adapt our culture to the inevitable presence of modern technology. . . . We will simply have to assume that people can change and restrict ourselves to looking at their most recent behavior and opinions.”68 According to danah boyd, “People, particularly younger people, are going to come up with coping mechanisms. That’s going to be the shift, not any intervention by a governmental or technological body.”69 Jeffrey Rosen argues, “The most practical solution to the problem of digital forgetting . . . is to create new norms of atonement and forgiveness.”70 Essentially these scholars argue that we will all grow accustomed to seeing previously closeted skeletons revealed digitally and become capable of ignoring them or judging them less harshly.

Other authors and commentators question whether social adaptation can be relied on to preserve forgiveness in an age when it is impossible to forget. Viktor Mayer-Schönberger appreciates these ideas but argues that reliance on norms either will take too long to avoid significant social damage or amounts to an attempt at unattainable social change.71 The philosophy professor Jeffrey Reiman challenges reliance on social adaptation as it relates to privacy by explaining that “even if people should ideally be able to withstand social pressure in the form of stigmatization or ostracism, it remains unjust that they should suffer these painful fates simply for acting in unpopular or unconventional ways.”72 Ruth Gavison likewise rebuts these arguments, noting that “the absence of privacy may mean total destruction of the lives of individuals condemned by norms with only questionable benefit to society.”73

Human memory and the ability to forget may not be susceptible to alteration. The brain’s management of information is the result of a long period of evolution, adapted to the contexts and environments in which it processes information.74 This view is shared by many leading psychologists, including the Harvard University professor Daniel Schacter, who agrees that memory and forgetting mechanisms are deeply embedded in brain functionality.75

Bad events experienced by individuals have stronger impacts on memory, emotion, and behavior than good events do.76 Negative impressions and stereotypes form more quickly and are more resistant to disconfirmation than positive ones are. The brain reacts more strongly to stimuli it deems negative, a reaction termed “negativity bias.”77 Behavioral research bears this out. For example, Laura Brandimarte, a PhD candidate at Carnegie Mellon University, measured how people discount information with negative and positive valence.78 These experiments supported the conclusion that bad information is discounted less and has longer-lasting effects than good information does.
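
One simple way to make the finding concrete (a standard exponential-discounting sketch, not a model taken from the studies themselves) is to let the weight of remembered information decay at a rate that depends on its valence:

\[
w_{\mathrm{neg}}(t) = w_0 e^{-\lambda_{\mathrm{neg}} t}, \qquad
w_{\mathrm{pos}}(t) = w_0 e^{-\lambda_{\mathrm{pos}} t}, \qquad
\lambda_{\mathrm{neg}} < \lambda_{\mathrm{pos}}
\]

With the smaller decay rate \(\lambda_{\mathrm{neg}}\), the weight still attached to a negative item exceeds that of a positive item of equal initial salience \(w_0\) at every later time \(t\), which is the pattern the experiments report.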

The idea that we will all become accustomed to seeing indiscretions online and will not judge people too harshly for them asks a great deal of our capabilities as humans. The opposite of large-scale acceptance is also possible: norms of nondisclosure may present themselves.79 Users may self-censor not only because it is difficult to predict who will see their expression but also because norms may change and their expression may become unpopular in the future. Time, then, may add a layer of inhibition to what is known as the “spiral of silence,” wherein individuals who believe that their opinions are not widely shared are less willing to speak up and engage on the topic.80 Relying on norms of acceptance to develop is a risky proposition.

In “Forgetting as a Feature, Not a Bug: The Duality of Memory and Implications for Ubiquitous Computing,” the professor of computer science and information systems Liam Bannon warns that “the tendency to use the power of the computer to store and archive everything can lead to stultification in thinking, where one is afraid to act due to the weight of the past.”81 Bannon insists, “What is necessary is to radically re-think the relation between artefacts and our social world. The aim is to shift attention to a portion of the design space in human-computer interaction and ubiquitous computing research that has not been much explored—a space that is supportive of human well-being.”82 One of the more interesting solutions to privacy problems that are not easily or appropriately addressed by law alone is the concept of privacy by design.83 Building the value of privacy into the design of a system offers a preventive measure, establishes standards, and potentially lightens the load on government oversight and enforcement. Forgiveness by design, or automated forgiveness, would be a code-based solution but, at this point, an inappropriate one.

The popularity of ephemeral data technologies became obvious, at least in one area of information sharing, when Snapchat took off. Snapchat is an app that allows users to send photos and videos that are visible to the recipient only for a specified amount of time (up to ten seconds). After the specified time, the recipient cannot access the file through the Snapchat interface, but the file is not deleted from the device. Ephemerality is the fun of Snapchat, and the app is popular enough that over twenty-seven million users (half of whom are between thirteen and seventeen years old) send more than seven hundred million snaps every day.84 However, research on users’ attitudes and behavior on Snapchat suggests that screenshots of messages are common and expected.85 Senders are notified by Snapchat when this happens, but few are upset about it.86 So while the ephemeral nature of Snapchat is what makes it popular, norms of lasting content have also become part of the application’s use. Other forgetting technologies include Wickr (whose slogan is “leave no trace” and which offers a Snapchat-like service in a more secure form) and Silent Circle (which offers a “surveillance proof” smartphone app that allows senders to automatically delete a file from both the sender’s and the receiver’s devices after a set amount of time). With regard to the data trails collected by sites and services, however, technologies that block data from being collected and those that automatically clear history data are currently the only tools available to users.
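
The mechanism, and its limits, can be sketched in a few lines. The following hypothetical Python fragment models client-side ephemerality: the interface stops showing the payload after the timer runs, but nothing destroys the underlying bytes, which is why screenshots and device forensics defeat it.

```python
import time
from dataclasses import dataclass

# A minimal, hypothetical sketch of client-side ephemerality in the
# style of Snapchat-like apps; names and structure are illustrative.

@dataclass
class EphemeralSnap:
    payload: bytes
    ttl_seconds: float = 10.0        # the cap described in the text
    opened_at: float | None = None

    def view(self) -> bytes | None:
        """First view starts the timer; later views honor it."""
        now = time.time()
        if self.opened_at is None:
            self.opened_at = now
        if now - self.opened_at <= self.ttl_seconds:
            return self.payload      # still viewable in the interface
        return None                  # interface access ends here...

snap = EphemeralSnap(payload=b"\x89PNG...")
assert snap.view() is not None       # visible on first open
# ...but snap.payload still holds the bytes on the device.
```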

Computer scientists have also begun to play with coding forms of forgiveness, each effort outlining variables of forgiveness and the reestablishment of trust. DigitalBlush is a project designed to support technology-mediated facilitation of forgiveness, focusing on the importance of the human emotions of shame and embarrassment.87 The researchers developed a formal computational model of forgiveness and designed a tool to support rule-violation reports and to link victims with offenders to facilitate forgiveness.88 This required the researchers to categorize the elements of human forgiveness. The first, violation appraisal, accounts for the severity, intent, and frequency of the exhibited behavior.89 The second, reversal, addresses the role of apologies and acts of restitution.90 Last, preexisting factors like familiarity with and level of commitment to the offender are considered.91 The project then collected user-generated information on rule violations in specific communities.92
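
To make the three categories concrete, here is a toy numeric sketch in Python. The linear form and the weights are illustrative assumptions, not the DigitalBlush project’s published model.

```python
from dataclasses import dataclass

# A toy illustration of the three categories the DigitalBlush
# researchers formalized: violation appraisal, reversal, and
# preexisting factors. Weights and scale are assumptions.

@dataclass
class Violation:
    severity: float     # 0 trivial .. 1 egregious
    intent: float       # 0 accidental .. 1 deliberate
    frequency: float    # 0 first offense .. 1 habitual
    reversal: float     # 0 no apology .. 1 full apology and restitution
    familiarity: float  # 0 stranger .. 1 close, committed relationship

def forgiveness_score(v: Violation) -> float:
    """Higher values mean forgiveness is easier to facilitate."""
    appraisal = (v.severity + v.intent + v.frequency) / 3
    score = 0.5 * (1 - appraisal) + 0.3 * v.reversal + 0.2 * v.familiarity
    return max(0.0, min(1.0, score))

# A one-off, accidental slip by a close friend who apologizes scores
# high; a habitual, deliberate offense by a stranger scores low.
print(forgiveness_score(Violation(0.2, 0.1, 0.0, 0.9, 0.8)))  # ~0.88
print(forgiveness_score(Violation(0.9, 0.9, 0.8, 0.0, 0.1)))  # ~0.09
```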

Other researchers have focused on the role that forgiveness plays among artificial-intelligence agents by portraying the reestablishment of trust as an assessment of regret that can be cured or diminished over time depending on the conduct of the offending agent.93 The model is particularly valuable because it accounts for the limits of forgiveness (conduct that is unforgivable) and the importance of time.94 Mayer-Schönberger argues that these code-based or technical manipulations are the solution to the permanent memory of the web: users should be able to attach an expiration date to information, after which it would no longer be accessible.95 An expiration date, however, would be available only for information created by the subject and would require predicting a useful life span at the time of creation.
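
The expiration-date proposal is straightforward to sketch. The hypothetical fragment below stamps information with a life span at creation and refuses to serve it afterward; it also makes the proposal’s limits visible: the life span must be guessed up front, and copies held by nonconforming systems are untouched.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# A minimal, hypothetical sketch of expiration-date metadata in the
# spirit of Mayer-Schönberger's proposal; names are illustrative.

@dataclass
class ExpiringRecord:
    content: str
    created: datetime
    lifespan: timedelta              # chosen by the creator at posting

    def read(self, now: datetime) -> str | None:
        if now < self.created + self.lifespan:
            return self.content
        return None                  # a conforming system stops serving it

post = ExpiringRecord("late-night party photo caption",
                      created=datetime(2015, 1, 1),
                      lifespan=timedelta(days=365))
print(post.read(datetime(2015, 6, 1)))  # still accessible
print(post.read(datetime(2016, 6, 1)))  # None: expired
```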

The process of coding forgiveness of harmful online information presents the same problem as coding to remove unauthorized uses of copyrighted material while protecting fair use: there are too many human elements. That being said, once the right to be forgotten has developed some definition and certainty, it may be much easier to automate significant portions of the determination. Elements of time, fame, and public interest can all be supported by automated systems. The delicate nature of human forgiveness and its implications for censorship require a nonautomated response until a system can be designed to know when an individual feels extreme shame or harm from information online and whether that information can appropriately be removed or deindexed. If not done thoughtfully, manipulation of this content, or of the system that preserves it, in the name of forgiveness may threaten the openness and robustness of the Internet. This conclusion is not to suggest that technology cannot be used to support norms of forgiveness or that code is not an integral part of any effort at digital redemption, only that a singularly technological effort will not solve the problem of personal stagnation.

The foregoing mechanisms are simply ill equipped to handle forgetting, forgiving, or reinvention on their own in the Digital Age. Each will play an important role, but no single response, legal or otherwise, has really taken a close look at the opportunities available to address these issues. Finding innovative, nuanced responses requires going back to the drawing board: looking at new theories, breaking down concepts, and reframing the problem.