9

The Internet as a Lens: Concepts of Privacy in Online Spaces

Jeffrey Hermes

While most societies grant legal protection to expectations of privacy, such protection is rarely if ever absolute; privacy rights are generally limited both by the consent of the individual and by a range of competing values such as national security, effective law enforcement, and access to government records or information relevant to public issues. These values, and the way they limit privacy rights, are in many senses the ‘fingerprint’ of a given nation, reflecting a country’s particular balance between personal autonomy and public authority.

Every medium of communication presents a landscape in which these competing rights and values are made manifest and a balance is struck. To be sure, each method of communicating has its own unique character, raising privacy issues in different ways. A bullhorn outside one’s window is typically more intrusive than a newspaper on one’s doorstep, regardless of the content of the communication; but if the content includes sensitive private information, the bullhorn might reach dozens while the newspaper reaches thousands (if not tens or hundreds of thousands). To that extent, each new medium not only calls for different forms of regulation, but illuminates different aspects of the concept of ‘privacy’.

The Internet, however, is not a single medium of communication, but a platform allowing a wide variety of different modes of interaction. It represents an unprecedented laboratory for exploring concepts of privacy, and a continuously evolving challenge with respect to protecting privacy rights while preserving the benefits of open communication and ready access to information. This chapter explores several ways in which communication via the Internet has challenged traditional legal concepts of privacy.1

Disclosure of private facts by electronic communication

The thoughtless or malicious distribution of private information is, of course, not a new phenomenon. Personal relationships and sensitive situations routinely involve the sharing of such information within a trusted group, and trust can easily be breached due to negligence or acrimony. However, the ease of electronic communication makes it possible for someone stung by a failed relationship to lash out in the heat of the moment, or for an accidental disclosure of personal data to have lasting repercussions.

The US District Court for the District of Kansas faced one such situation in 2009 in Peterson v Moldofsky, a case involving the retaliatory disclosure of private photographs.2 Piper Peterson and Michael Moldofsky had an intimate relationship for approximately two years; during that time, Moldofsky took photographs of Peterson in certain sexual situations. The relationship ended bitterly, and Moldofsky sent a selection of those photographs by electronic mail to a small group of Peterson’s relatives, friends, and coworkers. Peterson filed suit, claiming among other things that the distribution of the photographs was actionable in the state of Kansas as the unauthorised publicity of matters concerning her private life.

The difficult issue in this case was the scope of Moldofsky’s publication of the photographs. There is a rough consensus in the United States that a legal remedy for publication of private facts should only exist where the defendant’s disclosure of information transcends the boundaries of private knowledge and makes that information truly public. Thus, under Kansas law, the dissemination of private information becomes actionable when it is communicated to ‘the public at large, or to so many persons that the matter must be regarded as substantially certain to become one of public knowledge’. The Restatement (Second) of Torts further states that ‘it is not an invasion of the right to privacy … to communicate a fact … to a single person, or even to a small group of people’.3 Moldofsky, relying on the Restatement, argued that Peterson’s privacy claim failed as a matter of law because the distribution of the photographs was limited to five people.

The district court rejected Moldofsky’s argument, finding that it was unlikely that Kansas would adopt the Restatement’s interpretation of the law in circumstances ‘involving the transmission of sexually explicit material over the Internet’:

To begin with … this case does not involve a traditional form of communication, such as paper mail or an oral conversation. This distinction is significant because the Internet enables its users to quickly and inexpensively surmount the barriers to generating publicity that were inherent in the traditional forms of communication. Furthermore, the Court finds significant the fact that [the Restatement] was published at a time when few, if any, contemplated the fact that a single, noncommercial, individual could distribute information, including personal information, to anyone, anywhere in the world in just a matter of seconds. Today, unlike 1977, … due to the advent of the Internet, the barriers of creating publicity are slight.4

The court went on to acknowledge the role of third parties in the distribution of information on the Internet, and the ease with which material shared via electronic mail could be republished:

Here, Defendant emailed sexually explicit material to a handful of Plaintiff Piper’s family and friends. While the Court agrees that it is unlikely that Piper’s mother will distribute the incriminating photos to the public, the Court cannot, as a matter of law, say that her ex-husband, or any of the other recipients for that matter, will not. With one simple keystroke, a recipient of the email could, at least theoretically, disclose the pictures to over a billion people. Therefore, … the Court finds that a genuine issue of fact exists as to whether the contents of the email are substantially certain to become public knowledge.5

Accordingly, the court allowed Peterson’s claim to proceed to trial.

The district court’s opinion in this case is interesting because it considered a general statement of traditional privacy law that, on its face, was neutral as to medium of publication, and found that it nevertheless rested on unspoken assumptions as to the technological capabilities of those who transmit and receive private information. As such, the court determined that it needed to investigate those assumptions further in order to reach a just result. The district court’s opinion is also markedly hyperbolic. Although electronic republication is a straightforward matter, it is unlikely that the recipients of the photographs were equipped to republish electronic mail to the Internet with ‘one simple keystroke’, and highly unlikely that the photographs would reach an audience of a billion. (By way of comparison, at the time of writing only a single YouTube video has received more than a billion views.6)

This is not to say that the reach of a forwarded email would be insufficient to render the contents public knowledge, but it is important to note that the impact of electronic distribution is dependent upon both the technical architecture of the medium and the manner in which the source and recipient of information use it. In Catsouras v Dep’t of California Highway Patrol,7 the Court of Appeals of California considered a privacy claim arising out of police officers’ unauthorised use of email to forward gruesome images of a road accident victim to their friends and family members, which subsequently led to the photographs ‘going viral’. Although the court’s majority opinion assumed without discussion that publication by email was sufficient to support liability, Justice Aronson analysed that issue in a concurring opinion and found that the susceptibility of email to ‘easy and thoughtless forwarding to a larger audience’ was sufficient to satisfy this requirement:

Given the medium the officers selected and the likelihood acquaintances they chose would, like the officers, prove unable to resist an impulse to forward the photographs, plaintiffs’ allegation that defendants publicized the photographs to ‘members of the general public’ is sufficient to survive …, even under a standard requiring disclosure substantially certain to become public knowledge.8

There is, of course, an assumption about user behaviour implicit in this statement – namely, that use of electronic mail itself begets republication, at least when the information is reasonably calculated to tickle one’s prurient interest. Whether this statement is valid is another, more complicated question (particularly in the context of services like Twitter where a republication function is built into the service), but it is notable that Justice Aronson at least recognised user behaviour as relevant and integrated it into his analysis.

User behaviour was also the critical factor in a series of cases involving personal information uploaded to publicly accessible US Bankruptcy Court databases. The bankruptcy courts are part of the US federal court system, and utilise a standardised Electronic Case Filing (ECF) system for the filing of pleadings and other materials. Materials filed through the ECF system are stored in separate databases for each federal court, and are in most instances available over the Internet to attorneys and other registered users of the federal courts’ Public Access to Court Electronic Records (PACER) system.

By their nature, judicial proceedings (and bankruptcy proceedings in particular) often involve substantial amounts of private data and sensitive financial information, leading to significant concerns about the filing of materials containing that information in a publicly available database. In response to these concerns, the US Congress passed the E-Government Act of 2002, which mandated that the Supreme Court of the United States prescribe rules to ‘protect privacy and security concerns relating to electronic filing of documents and the public availability … of documents filed electronically’.9 The Supreme Court carried out this mandate with respect to the bankruptcy courts through Rule 9037 of the Federal Rules of Bankruptcy Procedure, which directs anyone filing a document with a bankruptcy court to redact certain personal identifiers and private information before filing, and gives the bankruptcy courts the authority to order the redaction of additional information where necessary.

Unsurprisingly, compliance with Rule 9037 has not been perfect. There have been many instances where personal identifiers such as debtors’ social security numbers have made their way into PACER databases, leading debtors to seek relief from the court for invasion of privacy. But while the E-Government Act and Rule 9037 explicitly recognise the privacy risks of the disclosure of information in public databases, and the bankruptcy courts are granted authority to enforce compliance with their rules, the bankruptcy courts have almost uniformly rejected the concept that uploading a document containing private information to a PACER database gives rise to a common law cause of action for invasion of privacy.

As with the photograph cases discussed above, these courts have focused on whether material uploaded to a PACER database is sufficiently ‘publicized’ to create liability for disclosure of private facts. Notwithstanding the public accessibility of these databases and the legal status of filed documents as public records, the bankruptcy courts have been reluctant to find that the filing of an unredacted document should give rise to a civil cause of action – particularly where the offending document is only available for a matter of days until it is called to the court’s attention. But rather than explicitly balance the interest in improved public access to government through online databases and the threat to personal privacy, several bankruptcy courts have gone to great lengths to find that these public records on searchable public databases were not in fact public.

For example, in a case involving the disclosure of a debtor’s date of birth and health information in a creditor’s proof of claim, the US Bankruptcy Court for the Eastern District of Arkansas wrote:

There are three ways a person may gain access to a document filed on the Court’s CM/ECF system: (1) having a CM/ECF password which is given to attorneys licensed to practice law and registered with the Court …; (2) through the PACER system which requires a subscription and charges a user fee for access to most documents; and (3) by walking into the Bankruptcy Court during regular business hours and using the computers there made available to the public at no charge. … [P]roofs of claim are only available to parties who take affirmative actions to seek out the information. … [I]n order to access a proof of claim in a debtor’s case, an individual would have to access the Debtor’s claims register rather than the docket sheet for the Debtor’s case, and then access the individual proof of claim itself. This process requires a certain working knowledge of how to maneuver within the Bankruptcy Court’s CM/ECF system. [T]he simple fact that all documents filed in a bankruptcy case file are technically deemed ‘public records’ does not satisfy the ‘publicity’ element necessary to state a claim for invasion of privacy …

[T]he Plaintiff does not allege that a member of the public has actually accessed and seen the unredacted proof of claim filed in her case. Additionally, given that the unredacted proof of claim was available on the CM/ECF docket for only three days, the Court finds it highly unlikely any member of the public would have accessed or seen the proof of claim. Accordingly, the Plaintiff has failed to allege facts sufficient to establish that the information in the unredacted proof of claim ‘reached or was sure to reach’ the public at large …10

Virtually identical reasoning has been followed by at least five other US Bankruptcy Courts.11

As in Peterson and Catsouras, the courts in the PACER cases focused on both the architecture of the electronic communication at issue and how people actually use the technology. However, the decisions in the PACER cases reverse the emphasis of the legal analysis. In Peterson and Catsouras, the courts acknowledged that the photographs had reached only a small number of people, but found determinative the fact that they had the potential to reach many more. In contrast, the courts in the PACER cases acknowledged that the information was available to anyone who cared to jump through the necessary hoops, but found determinative the fact that as a practical matter the information was seen by only a limited number of people. In other words, these cases are all grappling with – but coming out in different ways on – the question of what it means that any given communication on the Internet is potentially available to many but usually seen by few.

It is not clear that any court has identified a principled balance between concern for the potential reach of an electronic communication and its actual reach when evaluating privacy claims. Although it is tempting to view concerns about potential reach as ‘alarmist’ and consideration of actual reach as ‘realist’, it is not necessarily the case that courts focusing on actual reach are more principled. Indeed, the PACER cases – which conspicuously downplay the potential audience for court records – convey a sense of end-oriented decision making, suggesting that these courts were simply unwilling to allow state privacy law to complicate the courts’ transition to electronic filing.

New kinds of ‘private’ data

It is important to recognise that the scope of ‘private’ information is neither clearly defined, nor limited to a single category of information. A full discussion of the numerous definitions of privacy under law is beyond the scope of this book, and has been written about at length elsewhere. For the purposes of this chapter, it is sufficient to recognise that when we discuss privacy online, we might be referring to any number of types of information, including: information about our private lives that we try to restrict to family and friends; financial information that we disclose to third parties in secure transactional settings but otherwise treat as confidential; and automatically generated data about websites visited and other online activity that we typically consider no one’s business but our own.

The gathering and use of online behavioural data for marketing purposes has generated much controversy, with courts struggling to determine whether there is a cognisable privacy interest in such information. For example, the US District Court for the Northern District of California12 has twice rejected claims for invasion of privacy based upon disclosure of behavioural information. In the case of In re iPhone Application Litigation, the district court considered allegations that Apple violated user privacy by enabling its various ‘iDevices’ to be used by third parties to gather personal information (such as a unique device identifier number, personal data, and geolocation information) through applications downloaded from Apple’s App Store. Applying California law, the court held that ‘even assuming this information was transmitted without Plaintiffs’ knowledge and consent, a fact disputed by Defendants, such disclosure does not constitute an egregious breach of social norms’. The court compared the gathering of information through mobile apps to the gathering of street addresses by marketers for the purpose of direct postal mailings, noting that the state courts of California had treated the latter activity as ‘routine commercial behavior’.13

Subsequently, in Low v LinkedIn Corporation, the same judge held that LinkedIn, an online service providing professional networking services, did not commit a ‘serious invasion’ of a protected privacy interest when it disclosed users’ de-identified browsing history information gathered using tracking cookies and beacons for advertising and marketing purposes. Although the plaintiffs argued that the nature of the information provided was sufficient for a recipient to reconstruct a user’s identity, the court rejected that argument because the plaintiffs neither alleged that any third party had actually attempted to identify specific users nor specified any information thereby obtained.14

Courts have also struggled with efforts to measure damages caused by the disclosure of this type of personal data. Several courts have rejected claims for breach of privacy policies or other economic loss based upon an online service’s gathering or sale of demographic information or browsing histories. In the case of In re DoubleClick, Inc. Privacy Litigation, the US District Court for the Southern District of New York drew a distinction between the value of user attention and behaviour to advertisers and its value as an asset to the users themselves:

We do not commonly believe that the economic value of our attention is unjustly taken from us when we choose to watch a television show or read a newspaper with advertisements and we are unaware of any statute or caselaw that holds it is. We see no reason why Web site advertising should be treated any differently. A person who chooses to visit a Web page and is confronted by a targeted advertisement is no more deprived of his attention’s economic value than are his off-line peers. Similarly, although demographic information is valued highly the value of its collection has never been considered a[n] economic loss to the subject. Demographic information is constantly collected on all consumers by marketers, mail-order catalogues and retailers. However, we are unaware of any court that has held the value of this collected information constitutes damage to consumers or unjust enrichment to collectors.15

Similarly, In re JetBlue Airways Corp. Privacy Litigation involved JetBlue’s disclosure of user information to a third-party data mining company. The US District Court for the Eastern District of New York held that ‘there is absolutely no support for the proposition that the personal information of an individual JetBlue passenger had any value for which that passenger could have expected to be compensated’.16 In contrast, claims based upon the exploitation of the commercial value of an individual’s name or likeness, or of sensitive personal information traditionally considered private, have been more successful.17

Although one might dispute the results of the cases discussed above, the courts’ decisions reflect an attempt to grapple with fundamental issues relating to privacy. What gives rise to a right to control information? Is it enough that the information is personal in nature, or does there need to be an articulable risk to the individual from the disclosure of that information? If we recognise a privacy interest in certain personal information gathered online, are we creating a distinction between the online and offline contexts? If so, is such a distinction justified? These questions remain challenging, especially with respect to the types of peer-to-peer user behaviour discussed in the last section of this chapter.

Note also that these cases relate to the gathering and disclosure of data by private parties. The United States addressed the gathering and disclosure of information related to particular individuals by federal government agencies in the Privacy Act of 1974.18 In contrast to the cases above, the Privacy Act reflects concerns about data aggregation and abuse with respect to any information – whether traditionally ‘private’ or not – that is associated with a particular individual in the records of a federal agency. Under the Act, such information may not be disclosed without the written request or prior written consent of the individual, unless one or more of 12 specified exceptions apply.19 The European Court of Human Rights has similarly recognised that the collection and disclosure of aggregated public information about a particular individual can violate an individual’s right to privacy under Article 8 § 1 of the Convention for the Protection of Human Rights and Fundamental Freedoms:

The Court reiterates that the storing of information relating to an individual’s private life in a secret register and the release of such information come within the scope of Article 8 § 1 … Moreover, public information can fall within the scope of private life where it is systematically collected and stored in files held by the authorities.20

Online services entrusted with personal information

Since 1996, online intermediaries have enjoyed protection against liability in the United States under theories of tort (including privacy torts) that would treat the intermediary as the publisher or speaker of offending content.21 As a result, cases involving one person’s unauthorised online distribution of another’s private information typically do not include claims against intermediaries.

The situation is very different when an online service provider (OSP) actively solicits or collects information from its users as part of the bargain for providing the service. Sometimes this information is essential to providing the service to the user, such as the collection of credit card data to process online transactions or the gathering of medical information by diagnostic websites. In other circumstances, the information gathered is used primarily for the benefit of the OSP, such as behavioural and demographic data used as metrics in business development. It is more common, however, for OSPs to use personal information for multiple reasons, and it is often difficult to determine whether those uses benefit the user, the service, or both. This accumulation of personal information by OSPs, whether operated by private or government entities, creates a substantial risk of abuse by third parties or by the services themselves. These troves of data also represent an irresistible target for law enforcement, private litigants, and others who are empowered to compel disclosures as part of a judicial or regulatory process.

Complicated questions arise with respect to an OSP’s obligation to protect users against these kinds of intrusions, due to the nature of the information at issue. Traditional privacy law generally addresses circumstances where an individual’s private information is disclosed by one who obtained the information through an unauthorised intrusion or through a relationship of trust. In the case of an individual’s disclosure of his or her own information to OSPs, the relationship is generally commercial, and there has been reluctance to treat disclosures in such a relationship as being subject to an expectation of privacy.

This is most clearly reflected in the ‘third-party doctrine’ under United States law, which arose out of law enforcement efforts to access information that the subject of an investigation had communicated to third parties. The landmark case for the application of the doctrine to communications via a third-party carrier is Smith v Maryland, in which the US Supreme Court considered the warrantless use by the Baltimore police of a pen register, a device that tracks the date, time and numbers dialled from a particular telephone number. A pen register does not, however, intercept the content of telephone calls. The Court held that the government’s use of the pen register did not violate the defendant’s rights:

Telephone users … typically know that they must convey numerical information to the phone company; that the phone company has facilities for recording this information; and that the phone company does in fact record this information for a variety of legitimate business purposes. Although subjective expectations cannot be scientifically gauged, it is too much to believe that telephone subscribers, under these circumstances, harbor any general expectation that the numbers they dial will remain secret. … Although petitioner’s conduct may have been calculated to keep the contents of his conversation private, his conduct was not and could not have been calculated to preserve the privacy of the number he dialed. Regardless of his location, petitioner had to convey that number to the telephone company in precisely the same way if he wished to complete his call. … [A] person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.22

Critically, the Court found that the disclosure of the information at issue was voluntary despite the fact that the disclosure was essential to use the telephone system; the other option available to the defendant was to avoid use of the phone entirely.

Similar logic has been applied to conveyance of information to OSPs. For example, in the case of In re Application of the United States of America for an Order Pursuant to 18 USC § 2703(d), the US District Court for the Eastern District of Virginia applied the doctrine to find that a government demand for users’ IP address information from Twitter did not violate the prohibition against warrantless searches in the Fourth Amendment to the United States Constitution:

Even if Petitioners had a reasonable expectation of privacy in IP address information collected by Twitter, Petitioners voluntarily relinquished any reasonable expectation of privacy under the third-party doctrine. To access Twitter, Petitioners had to disclose their IP addresses to third parties. This voluntary disclosure – built directly into the architecture of the Internet – has significant Fourth Amendment consequences under the third-party doctrine.23
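The technical premise behind the court’s reasoning can be illustrated with a minimal sketch (the code and names below are our own illustration, not anything drawn from the case): when a client opens a TCP connection to a server, the server learns the client’s IP address before any application data is exchanged at all. In this sense the disclosure really is built into the architecture of the network.

```python
import socket
import threading

# Illustrative sketch only: any TCP connection necessarily reveals the
# client's IP address to the server -- here a loopback server records the
# peer address returned by accept() before a single byte of content is sent.

def run_server(state):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # let the OS pick a free port
    srv.listen(1)
    state["port"] = srv.getsockname()[1]
    state["ready"].set()                # signal the client it may connect
    conn, addr = srv.accept()           # addr = (client_ip, client_port)
    state["client_ip"] = addr[0]        # learned with no content exchanged
    conn.close()
    srv.close()

state = {"ready": threading.Event()}
t = threading.Thread(target=run_server, args=(state,))
t.start()
state["ready"].wait()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", state["port"]))
client.close()
t.join()

print(state["client_ip"])               # the server saw the caller's address
```

Whether that unavoidable, machine-level disclosure should carry the legal consequences the court attached to it is, of course, precisely the question in dispute.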

The idea that a waiver of privacy rights is inherent in the very structure of the Internet is understandably disturbing. Analogising IP addresses to phone numbers also disregards the fact that we use the Internet for a much broader range of activity than telephony; disclosure of online contacts can reveal far more information about our private lives than telephone records. The uncertainty about such questions has led the US Supreme Court to express caution about moving too quickly when ruling upon privacy norms relating to online communication, stating: ‘The judiciary risks error by elaborating too fully on the Fourth Amendment implications of emerging technology before its role in society has become clear.’24

The Supreme Court’s caution has nevertheless left lower courts without guidance as to how to evaluate cases involving electronic privacy. In this vacuum, some lower courts have recognised cognisable privacy interests in the contents of electronic communications. However, the lack of Supreme Court precedent has led these courts to continue to rely upon analogies with traditional offline modes of communication. For example, in United States v Warshak, the US Court of Appeals for the Sixth Circuit held that a defendant had a legitimate expectation of privacy in the contents of electronic mail messages that the government obtained from his Internet Service Provider without a warrant. But while the Sixth Circuit found that ‘the Fourth Amendment must keep pace with the inexorable march of technological progress, or its guarantees will wither and perish’, it did not rely on advances in technology or evolving understandings of privacy as a basis for its ruling. Rather, it reached its conclusion by comparing privacy in electronic mail to protection recognised for the contents of telephone calls and letters carried by the postal service:

Given the fundamental similarities between email and traditional forms of communication, it would defy common sense to afford emails lesser Fourth Amendment protection.25

Similar analogies have been applied by other courts with respect to faxes sent through an electronic communications service provider and password-protected Facebook messages.26 While reliance upon analogies to older technologies will not necessarily produce an incorrect result, such comparisons carry at least as much risk of disregarding important aspects of how electronic media are actually used as does analysing those media on their own terms.

Statutory privacy rights

Uncertainties about privacy interests in electronic communications and the obligations of OSPs have led to statutory efforts to bypass these difficult questions by creating new rights to control personal information. Statutory solutions, however, have presented their own complexities and challenges.

In the United States, the Stored Communications Act (SCA), enacted as part of the Electronic Communications Privacy Act of 1986, responded to concerns over the third-party doctrine by providing substantive protection against government intrusion comparable to the Fourth Amendment with respect to certain user information held by certain online services.27 In general, the SCA requires the government to obtain a warrant to access the contents of certain electronic communications in the hands of certain third party service providers (e.g., unopened email less than 180 days old). Other information about a user and his or her communications can be obtained with a lesser showing than is required for a warrant by obtaining a court order pursuant to 18 USC § 2703(d) (known as a ‘(d) order’), or in some circumstances merely by issuing a subpoena. The Act also prohibits some services from voluntarily disclosing certain categories of user information to the government or to other entities without user consent.

The SCA has in many ways brought more complexity to questions of online privacy than it has resolved. There is significant debate about which standards apply to particular information. For example, federal courts are currently split as to whether the content of opened emails should be protected as communications stored for backup purposes, which cannot be obtained with a subpoena, or as data stored for archival purposes, which can.28 There is even greater debate about whether the protections assigned to various categories of information are sufficient.

The framework that the SCA applies to the Internet is also outdated. Reflecting an understanding of electronic services that predates the World Wide Web,29 the SCA recognised two categories of online services: ‘electronic communication services’, or ECS, defined as ‘any service which provides to users thereof the ability to send or receive wire or electronic communications’;30 and ‘remote computing services’, or RCS, defined as ‘the provision to the public of computer storage or processing services by means of an electronic communications system’.31

As a result, the statute does not cover all user information transmitted to online service providers; to the contrary, many services fall into neither the ECS nor the RCS category. The statute does not apply to private remote computing service providers, because the definition of an RCS is limited to those who provide storage or processing services ‘to the public’. More importantly, a wide range of online service providers that use the Internet to conduct business may also be excluded. In the case of In re JetBlue Airways Corp. Privacy Litigation, the US District Court for the Eastern District of New York held that ‘companies that provide traditional products and services over the Internet, as opposed to Internet access itself, are not “electronic communication service” providers’ under the SCA.32 Nor was JetBlue, which provided air travel services over its website, an RCS:

[Remote computing] services exist to provide sophisticated and convenient data processing services to subscribers and customers, such as hospitals and banks, from remote facilities. … By supplying the necessary equipment, remote computing services alleviate the need for users of computer technology to process data in-house. … Although plaintiffs allege that JetBlue operates a website and computer servers …, no facts alleged indicate that JetBlue provides either computer processing services or computer storage to the public.33

Even when courts make an effort to squeeze new forms of online service into the ECS and RCS categories, attempts to pursue legal action under the SCA for disclosure of personal information run into difficulty when courts parse the relationships between the parties in terms of the statute. In the case of In re Facebook Privacy Litigation, the Northern District of California considered a claim against Facebook under the non-disclosure provisions of the SCA, based upon Facebook’s alleged disclosure of user data to advertisers. Facebook’s arrangement with advertisers provided that when a user clicked on an advertisement, information about the user and the page on which the advertisement appeared would be included with other data transmitted to the advertiser as a result of the click. Although the court treated Facebook as an ECS for the purposes of its analysis, it held that the alleged disclosure did not support a claim because the information transmitted as a result of the click was either directed to Facebook itself or to the advertiser:

If the communications were sent to Defendant, then Defendant was their ‘addressee or intended recipient,’ and thus was permitted to divulge the communications to advertisers so long as it had its own ‘lawful consent’ to do so. 18 U.S.C. § 2702(b)(3). In the alternative, if the communications were sent to advertisers, then the advertisers were their addressees or intended recipients, and Defendant was permitted to divulge the communications to them. Id. § 2702(b)(1).34

In other words, even if Facebook were an ECS and even if the user would have preferred that personal data not be included in the transmission that resulted from clicking on an advertisement, the architecture of the transmission was such that the user was technically responsible for initiating that transmission to either Facebook or the advertiser. If the former, Facebook could ‘consent’ to its own disclosure of the information that it received; if the latter, Facebook was simply facilitating the communication initiated by the user.
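The architecture the court described can be illustrated with a short sketch. The function and parameter names below are hypothetical, chosen only to show how identifiers for the user and the originating page can ride along in the URL of a click-through request that the user's own click initiates:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_click_url(ad_landing_url: str, user_id: str, source_page: str) -> str:
    """Append identifiers for the user and the page on which the ad
    appeared to the advertiser's landing URL, so that clicking the ad
    transmits that data along with the request."""
    query = urlencode({"user": user_id, "src_page": source_page})
    return f"{ad_landing_url}?{query}"

# The user's click initiates a request carrying both identifiers:
url = build_click_url("https://advertiser.example/landing", "u12345", "/profile/jane")
params = parse_qs(urlparse(url).query)
```

On this model, the data reaches the advertiser not because the service 'divulged' it, but because it is embedded in a transmission the user triggered, which is precisely the structural point on which the claim foundered.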

It is of course a well-recognised limitation of statutory solutions that they are constrained by the foresight and understanding of legislators at the time they are enacted. The exclusion of a wide array of future online services from the effective reach of the SCA was less a matter of intention than a result of the difficulty of predicting the current form of the Internet in the mid-1980s. However, the problem with creating a comprehensive statutory framework relating to the Internet is exacerbated by the extraordinary rate of innovation and change in the field.

Alternative statutory approaches to privacy have focused on the gathering of personal information for specific purposes. Concerns over the gathering and abuse of information relating to minors led to the enactment in 1998 of the Children’s Online Privacy Protection Act (COPPA).35 COPPA imposes strict obligations upon ‘any person who operates a website located on the Internet or an online service’ directed to children (defined as individuals under the age of 13) or who knowingly collects any of the following personal information about children:

•   a first and last name;

•   a home or other physical address including street name and name of a city or town;

•   an email address;

•   a telephone number;

•   a social security number;

•   any other identifier that the [Federal Trade] Commission determines permits the physical or online contacting of a specific individual; or

•   information concerning the child or the parents of that child that the website collects online from the child and combines with an identifier described in this paragraph.36

Except in limited circumstances, COPPA (through the mechanism of regulations promulgated by the Federal Trade Commission) prohibits operators of online services from knowingly gathering any of the information listed above from children without first (1) providing ‘notice on the website of what information is collected from children by the operator, how the operator uses such information, and the operator’s disclosure practices for such information’, and (2) obtaining ‘verifiable parental consent for the collection, use, or disclosure of personal information from children’.37 ‘Verifiable parental consent’ is defined as:

any reasonable effort (taking into consideration available technology), including a request for authorization for future collection, use, and disclosure described in the notice, to ensure that a parent of a child receives notice of the operator’s personal information collection, use, and disclosure practices, and authorizes the collection, use, and disclosure, as applicable, of personal information and the subsequent use of that information before that information is collected from that child.38

Furthermore, operators are required to provide to a parent, upon request:

•   a description of the specific types of personal information collected from the child by that operator;

•   the opportunity at any time to refuse to permit the operator’s further use or maintenance in retrievable form, or future online collection, of personal information from that child; and

•   a means that is reasonable under the circumstances for the parent to obtain any personal information collected from that child.39

The statute also requires operators to have procedures in place to protect the ‘confidentiality, security, and integrity of personal information collected from children’.40

Apart from a prohibition on conditioning a child’s participation in an online activity on the disclosure of ‘more personal information than is reasonably necessary to participate in such activity’,41 COPPA does not contain any complete bar to the collection of information about children under 13. Rather, the statute attempts to give parents the tools necessary to understand, consent to, and if they choose terminate their children’s online activity.
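COPPA's structure (notice, verifiable parental consent, and a cap on what may be demanded) can be sketched as a gating function. The identifier labels and function names below are hypothetical simplifications of the statutory scheme, not an implementation of the FTC's rules:

```python
# Hypothetical labels for the statutory identifier categories listed above.
PERSONAL_IDENTIFIERS = {"name", "home_address", "email", "phone", "ssn"}

def collect_from_child(requested: set, reasonably_necessary: set,
                       parental_consent_verified: bool) -> set:
    """Gate collection of a child's personal information on verifiable
    parental consent, and never collect more than the activity requires."""
    if requested & PERSONAL_IDENTIFIERS and not parental_consent_verified:
        raise PermissionError("verifiable parental consent required first")
    # Participation may not be conditioned on disclosing more personal
    # information than is reasonably necessary for the activity.
    return requested & reasonably_necessary
```

The sketch captures the statute's character: collection is not forbidden outright, but is conditioned on parental involvement and limited to what the activity reasonably needs.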

A similar approach was undertaken by the state of California with respect to information gathered about its citizens for commercial purposes. The California Online Privacy Protection Act42 requires that the operator of a commercial website post a privacy policy on its website if it collects ‘personally identifiable information’ about California residents. The Act defines personally identifiable information in virtually the same terms as COPPA (without, of course, limitations to information about children). The privacy policy must be conspicuously presented, and must describe:

1. the categories of information gathered about consumers;

2. the categories of third parties with whom that information may be shared;

3. any process that the website provides for reviewing or requesting changes to that information;

4. how the site notifies consumers about changes to the privacy policy; and

5. the effective date of the policy.

The California ‘Shine the Light’ law43 imposes additional obligations on a commercial website operator that discloses any of 27 enumerated categories of personal information44 gathered from a California consumer to a third party for direct marketing purposes. Each California consumer whose information is disclosed has the right, once a year, to request a list of the categories of information disclosed and the names and addresses of all third parties who received that information, as well as the products or services marketed by the third parties. A website covered by the statute must provide contact information for making such requests in one of three ways: through its employees who have contact with consumers; in its places of business in California (if any); or in a section of its website entitled ‘Your Privacy Rights’ that is conspicuously linked from the home page (or in a privacy policy accessible from the home page via a link entitled ‘Your Privacy Rights’), which also enumerates a consumer’s legal rights under the law. A website operator is, however, exempt from the requirements of the law if it has fewer than 20 employees or if it allows consumers to either opt in or opt out of direct marketing through a process specified in its privacy policy.
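The 'Shine the Light' exemptions reduce to a simple conditional, sketched below with hypothetical parameter names (a simplification for illustration, not a statement of the statute's full conditions):

```python
def must_honor_shine_the_light(employees: int,
                               offers_marketing_opt_in_or_out: bool,
                               discloses_for_direct_marketing: bool) -> bool:
    """Whether the annual-disclosure duty described above applies to an
    operator, given the statute's two exemptions."""
    if employees < 20:
        return False  # small-operator exemption
    if offers_marketing_opt_in_or_out:
        return False  # exemption for an opt-in/opt-out process in the policy
    return discloses_for_direct_marketing
```

In practice, the opt-in/opt-out exemption gives operators a straightforward way to avoid the annual-request machinery altogether.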

Although California’s laws specify that they apply only to information gathered from California residents, they do not require that a website operator be located in the state. Because a judgment against a website in a California court is likely to be enforced by other jurisdictions in the United States, it is common for online businesses located elsewhere in the United States to comply with these laws. Even if a website operator is based outside of the United States, it might elect to comply with California’s laws if it has assets or personnel in the United States against which a judgment might be enforced.

The notice and consent paradigm

The preceding sections have discussed a variety of approaches to privacy, sounding in common law, constitutional law, and statute. However, all of these doctrines share a common principle: the idea that an individual can consent through his or her actions to the disclosure of his or her personal information. This principle derives from concepts of individual autonomy, and the belief that each person should be able to draw the boundary between the public and private aspects of their lives for themselves. In many ways, the debate over the responsibilities of online services with respect to privacy has narrowed to the question of implementing informed consent to disclosure, based upon meaningful notice of the ways in which personal information will be used.

COPPA and the two California laws discussed above resolve this question in particular contexts by legislative fiat. In other contexts, the adequacy of consent has been left to the courts. Decisions analysing notice and consent have primarily focused on two issues: first, whether notice is presented in such a way that it is likely to be seen by the user before the act that constitutes consent to disclosure; and secondly, whether the substance of the notice is such that the user would understand it to authorise the forms of disclosure in which an OSP actually engages.

The first issue, the form in which notice is presented by an online service, is often discussed in terms of a dichotomy between ‘clickwrap’ and ‘browsewrap’ terms of service. ‘Clickwrap’ generally refers to a structure in which the user is required to take an affirmative step to accept the terms of service and privacy policy of a website before being allowed to use the site, often through clicking a button or marking a checkbox as part of a registration process. ‘Browsewrap’ refers to the posting of the terms of service and privacy policy on the website where they can be accessed by users, but without any affirmative requirement of users to manifest consent other than continuing to use the site.
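The distinction can be made concrete with a minimal sketch. The class and function names are hypothetical; the point is that a clickwrap flow refuses to proceed without an affirmative act of acceptance, whereas a browsewrap site merely makes the terms available:

```python
class TermsNotAcceptedError(Exception):
    """Raised when registration is attempted without accepting the terms."""

def register_clickwrap(username: str, accepted_terms: bool) -> dict:
    """Clickwrap: the user must take an affirmative step (e.g., ticking a
    checkbox) before registration proceeds."""
    if not accepted_terms:
        raise TermsNotAcceptedError("registration requires affirmative acceptance")
    return {"username": username, "terms_accepted": True}

def visit_browsewrap(page: str) -> dict:
    """Browsewrap: the terms are merely linked from the page; continued use
    is deemed acceptance, with no affirmative act required."""
    return {"page": page, "terms_link": "/terms", "affirmative_consent": False}
```

The difference in enforceability discussed below tracks this architectural difference: only the clickwrap flow produces a record of an affirmative act.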

Unsurprisingly, clickwrap terms of service generally fare better than browsewrap terms of service when challenged in court. Browsewrap terms will be subject to scrutiny as to whether the user receives sufficient notice that continuing to use a website constitutes acceptance of the terms.45 On the other hand, even where a clickwrap arrangement provides a link to terms of service at the time of acceptance rather than the full terms, courts have usually enforced the user’s acceptance of the terms.46 That said, even a clickwrap agreement might not constitute sufficient notice if the terms of the policy are altered after the ‘click’, and users are not sufficiently notified about the revised terms.47

Few United States cases have addressed the specific level of detail necessary in a privacy disclosure. Perhaps due to the influence of the California Online Privacy Protection Act, a de facto standard has developed that privacy policies are sufficient if they identify categories of information gathered, categories of uses, and categories of third parties with whom information may be shared, but not necessarily specifics in those categories. At least one court has rejected an argument that more detailed information is required. In Kirch v Embarq Mgt. Co., the US District Court for the District of Kansas considered whether the following disclosures in a privacy policy were sufficient to notify subscribers to an Internet service provider that their online communications would be subject to deep packet inspection and cookie tracking by a third party:

Embarq [the ISP] may use information such as the websites you visit or online searches that you conduct to deliver or facilitate the delivery of targeted advertisements. The delivery of these advertisements will be based on anonymous surfing behavior.

De-identified data might be purchased by or shared with a third party.

The policy also stated that the websites visited by users would be automatically logged, and that such information could be shared with ‘business partners involved in providing EMBARQ service to customers’. The plaintiffs argued that the disclosures were insufficient because they did not specify NebuAd, Inc., as the third party at issue. The court rejected that argument, finding that no authority required such specific disclosures in a privacy policy.48

That said, a court will take heed when a website’s information practices and privacy policy are actually inconsistent. In CollegeNET, Inc. v XAP Corp., an online college admissions application service sued defendant XAP, its alleged competitor, claiming that XAP engaged in false advertising of its products through misrepresentations in its online privacy policy. Specifically, XAP stated in its privacy policy that: ‘Personal data entered by the User will not be released to third parties without the user’s express consent and direction.’ This was confirmed by other sections of the site stating: ‘The information you enter will be kept private in accordance with your express consent and direction.’

But when students using XAP’s service were asked whether they wished to receive information about student loans and financial aid, they were not expressly informed that answering ‘yes’ would authorise XAP to share their personal information with third parties selling such products and services. The US District Court for the District of Oregon held that this discrepancy was sufficient for the plaintiff’s false advertising claims to go to trial:

The Court is not persuaded by Defendant’s argument that its privacy-policy statements are merely incidental rather than intrinsic; i.e., fundamental to its products and services. Identity theft associated with use of the internet to buy products and services is a common occurrence. Promises of confidentiality of information provided over the internet are certainly more than incidental; i.e., ‘minor’ matters.49

Ironically, competitors like CollegeNET might have greater success in asserting an unfair competition claim based upon the alleged false statements in a privacy policy than customers would have claiming damages from a breach of the policy. As discussed earlier in this chapter, the difficulty that courts have had in evaluating the economic damage to an individual of the disclosure of behavioural information makes contract and market-based theories problematic.

The notice and consent paradigm has been subject to significant criticism. COPPA, for example, depends on parents’ willingness to devote the time necessary to understand their children’s online activity and make informed judgments. It is also quite possible for children whose parents have not taken affirmative steps to restrict their Internet use to evade age restrictions on websites directed at adults. More generally, online terms of service and privacy policies are frequently derided for their length and technicality, with the result that users are unlikely to take the time to read them.

However, no reported cases in the United States have considered whether disclosures in a privacy policy may be disregarded solely because they are complex or unusual. Allegations by users that they did not read the terms of use for a site (because of complexity or otherwise) have been ineffective,50 and indeed can backfire on users. In Low v LinkedIn, the court dismissed a claim under California’s false advertising law based upon an argument that LinkedIn’s privacy policy was deceptive or misleading, because the plaintiffs never alleged that they had read or were even aware of the policy. If the plaintiffs had never read the privacy policy, reasoned the court, they could not have been deceived by it.51

It should be stressed that the notice and consent paradigm is not the only model that has been applied to online privacy. As mentioned above, COPPA prohibits the gathering of more personal information than is reasonably necessary for a child to participate in a particular online activity, and Rule 9037 of the Federal Rules of Bankruptcy Procedure prohibits any party from electronically filing any material containing certain personal information in an unredacted format. In addition, new laws enacted by four states in 2012 prohibit employers in those states from requesting that current or potential employees disclose user IDs and passwords for their social media accounts.52 Three states have similar laws prohibiting academic institutions from compelling students to provide such access.53

These are flat bans on the gathering or dissemination of information, irrespective of the consent of the individual affected. They are also all special situations: information regarding minors is treated with particular sensitivity; employees and students are in an imbalanced power relationship with employers or schools, such that the voluntariness of consent is always in doubt; and court databases are restricted in their use, with those who appear before a court subject to the court’s authority. In more general contexts, completely barring the requesting or dissemination of information notwithstanding consent would create significant tension with principles of freedom of expression.54

Network amplification of content

In January 2011, a woman was walking through the Berkshire Mall in Wyomissing, Pennsylvania, while using a cellular device. Focused on the device, she did not see a water fountain in her path; she tripped over the low wall surrounding the fountain, and fell into the water. She was uninjured by the fall and quickly exited the fountain.

Unfortunately for this individual, the incident was captured by at least two security cameras. A composite video made from this footage, with amused commentary from security personnel, was posted to YouTube. It was a perfect seed for a viral video: it had elements of slapstick, with no one seriously hurt (at least physically), and echoed the ongoing debate over a cultural obsession with cellular devices. The recording was reposted on YouTube multiple times, quickly received millions of views, spawned parody videos, and was the subject of international media attention.

Perhaps unsurprisingly, the individual later commented to the press that her true injury from the incident came from the transformation of a single, if embarrassing, incident into a persistent meme: ‘The humiliation. Ask my husband: I cried for days. … You don’t know how many people are laughing at me.’55 She subsequently threatened legal action against those at the mall involved in producing the video, although it is unclear whether she was considering a claim based on the publication of the video or for the negligence of mall security in failing to check on her safety after the fall.56

It is difficult to deny that this individual has endured additional suffering from her widespread exposure, but it is doubtful whether a claim based on such exposure could succeed under existing legal theories under United States law. The incident occurred in a public place, in which the individual would have no reasonable expectation of privacy.57 Although the commentary on the video is mocking, there has been no suggestion that the video misrepresents what actually occurred, and so it could not be actionable as a defamatory falsehood. At most, the video is embarrassing to the individual in question and of dubious public importance, but expanding rights of privacy to cover such content would make the boundaries of liability vague at best and chill a substantial amount of legitimate speech.

Recognising a new tort based on a theory akin to ‘excessive publication’ is also problematic. The reach of a particular publication has traditionally been considered to be a factor in evaluating the damage caused by an otherwise actionable statement, not a basis for liability in and of itself. Such a theory would suggest that there is something inherently wrongful in hoping that one’s otherwise lawful content is seen by as many people as possible. Furthermore, the determination of whether publication is excessive is necessarily an after-the-fact judgment, making it difficult to identify the right person to punish on such a theory. Should we punish the original poster, who has no way to know how far the content will reach? The intermediate republisher, who has no way to know whether his or her particular contribution is the one that shifts the coverage of the plaintiff from acceptable to excessive? The late-stage republisher, who might reasonably be aware that he or she is ‘piling on’ by the time he or she joins in, but whose incremental impact on the plaintiff is likely to be trivial? The social media platform, which might not be encouraging (or even aware of) the situation at all?58

At some level, the frustration over cases like these is not with any particular individual, but with the networking effects that allow the dramatic amplification of some content that, for whatever reason, strikes the right chord with the public at the right time. This occasionally results in the phenomenon of mass defamation litigation, sometimes termed a ‘suit against the Internet’ – that is, an effort to counter the network effect and to deter online discussion of an entire topic by filing legal claims against a significant number of people who have written on the topic.59 These claims usually sound in defamation rather than privacy, but the intent to quell discussion is frequently revealed by the fact that the case is filed without significant consideration of whether the claims stated would be successful against any particular individual.

‘Suits against the Internet’ are intended as a deterrent and must be sweeping to pose a credible threat; as a result, they often sweep in statements that are nothing more than questions, opinions, ancillary comments that are unquestionably true, or even mere references to the fact that there is an active discussion. For that reason, these cases are generally considered misguided: an attempt to leverage judicial process into a pseudo-legal remedy for the negative impact of network effects where a true legal remedy does not exist. In many of these cases, the plaintiff’s attempt to shut down discussion backfires by drawing even more attention to the matter that the plaintiff wishes dropped. Moreover, these lawsuits tend to generate a new round of public criticism of the plaintiff for his or her litigation tactics. In some United States jurisdictions, this type of claim may also constitute a ‘strategic lawsuit against public participation’, or ‘SLAPP’, subjecting the plaintiff to special penalties.

That said, the concerns raised by mass defamation lawsuits may be less susceptible to cross-border generalisation than those raised by other examples discussed in this chapter (such as concerns about easy redistribution and aggregation of personal information, the role of commercial intermediaries, difficulties with legislative solutions, and the effectiveness of notice and consent regimes). In nations that recognise defamation claims based upon opinions or true statements, the line between defamation and privacy becomes blurred and the use of litigation to prevent the discussion of true but embarrassing information may be less controversial. In contrast, within the United States, it is doubtful that a legal remedy could be crafted for Internet overexposure that is consistent with US principles of freedom of speech, leaving solutions to the social, technological and other spheres.60

 

 


1   This chapter is primarily based upon United States case law and statutes, as a convenient vehicle to identify more universal trends and tensions in the application of privacy concepts to the Internet. I do not, however, intend to suggest that specific articulations of law discussed in this chapter are generally applicable.

2   2009 WL 3126229 (D Kan, Sept 29, 2009).

3   Restatement (Second) of Torts § 652D cmt a (1977). The Restatements of the Law are published by the American Law Institute, and attempt to reflect a general consensus among the 50 United States as to the elements of the law ‘as it presently stands or might plausibly be stated by a court’. See http://www.ali.org/index.cfm?fuseaction=publications.faq. While the Restatements are often favourably cited by United States courts as helpful and persuasive, they are not binding authority and state laws can differ in whole or in part from the law as set forth in a Restatement.

4   2009 WL 3126229, at *4 (internal quotation marks deleted).

5   Ibid. at *5.

6   http://www.youtube.com/watch?v=9bZkp7q19f0.

7   181 Cal App 4th 856 (2010).

8   Ibid. at 904.

9   Public Law 107–347, Title II § 205(c)(3), 2002 HR Rep 2458, 2914 (Dec 17, 2002).

10   Dunbar v Cox Health Alliance, LLC (In re Dunbar), 446 BR 306, 315 (Bkrtcy ED Ark 2011) (internal quotation marks omitted).

11   See, e.g., Southhall v Check Depot, Inc. (In re Southhall), 2008 WL 5330001, at *3 (Bkrtcy ND Ala, Dec 19, 2008); French v Am. Gen. Fin. Serv’s (In re French), 401 BR 295, 318–19 (Bkrtcy ED Tenn 2009); Cordier v Plains Commerce Bank (In re Cordier), 2009 WL 890604, at *6 (Bkrtcy D Conn, Mar 27, 2009); Carter v Flagler Hosp., Inc. (In re Carter), 411 BR 730, 742 (Bkrtcy MD Fla 2009); Matthys v Green Tree Servicing, LLC (In re Matthys), 2010 WL 2176086, at *3 (Bkrtcy SD Ind, May 26, 2010). Contrast McKenzie v Biloxi Internal Medicine Clinic, PA (In re McKenzie), 2010 WL 917262, at *5-6 (Bkrtcy SD Miss, Mar 10, 2010) (holding that submission of materials to a PACER website might constitute support for an invasion of privacy claim under Mississippi law).

12   The US District Court for the Northern District of California is of particular interest with respect to online privacy cases, as it has jurisdiction over cities in which many Internet companies and OSPs have their home offices.

13   844 F Supp2d 1040, 1063 (ND Cal 2012).

14   --- F Supp2d ----, 2012 WL 2873847, at *9 (ND Cal 2012).

15   154 F Supp2d 497, 525 (SDNY 2001).

16   379 F Supp2d 299, 327 (EDNY 2005). See also Low, at *12–15 (‘unauthorized collection of personal information does not create an economic loss’ actionable in breach of contract or conversion).

17   See Fraley v Facebook, Inc., 830 F Supp2d 785, 798–99 (ND Cal 2011) (distinguishing claims based upon exploitation of user names and likenesses for advertising from claims based solely upon alleged value of browsing data); Doe 1 v AOL LLC, 719 F Supp2d 1102, 1111–12 (ND Cal 2010) (plaintiff identified cognisable harm through defendant’s collection and publication of sensitive financial information and information about private issues such as sexual history, health, and past traumatic events).

18   5 USC § 552a.

19   5 USC § 552a(b).

20   Rotaru v Romania, App. No. 28341/95 (2000) at [43].

21   47 USC § 230(c)(1).

22   442 US 735, 743–44 (1979).

23   830 F Supp2d 114, 133 (ED Va 2011).

24   City of Ontario, California v Quon, 130 SCt 2619, 2629 (2010).

25   631 F3d 266, 285–86 (6th Cir 2010).

26   In re Applications for Search Warrants, 2012 WL 4383917 at *4-5 (D Kan, Sept 21, 2012) (discussing emails and faxes); RS v Minnewaska Area Sch. Dist., --- F Supp2d ----, 2012 WL 3870868, at *11 (D Minn, Sept 6, 2012) (discussing Facebook messages).

27   18 USC §§ 2701 et seq.

28   Contrast Theofel v Farey-Jones, 359 F3d 1066, 1074–77 (9th Cir 2004) (holding that opened emails may be copies retained on an ECS for backup protection) with US v Weaver, 636 F Supp2d 769, 772–73 (CD Ill 2009) (holding that services that maintain copies of opened emails are providing storage services for the user as an RCS).

29   But not the Internet itself. Various dates can be ascribed to the launch of the packet-switching computer networks that evolved to become the Internet as we know it today, but the transition of the ARPANET to the Internet Protocol Suite (also known as TCP/IP), which allowed the formation of the interoperable worldwide ‘network of networks’ known as the Internet, occurred on January 1, 1983.

30   18 USC § 2510(15).

31   18 USC § 2711(2).

32   379 F Supp2d 299, 307–08 (EDNY 2005).

33   Ibid. at 310. See also Low, 2012 WL 2873847, at *7–8 (holding that LinkedIn is not an RCS).

34   791 F Supp 2d 705, 714 (ND Cal 2011).

35   15 USC §§ 6501–6506 (1998).

36   15 USC §§ 6501–6502.

37   15 USC §§ 6502(a)(1), (b)(1).

38   15 USC § 6501(9).

39   15 USC § 6502(b)(1)(B).

40   15 USC § 6502(b)(1)(D).

41   15 USC § 6502(b)(1)(C).

42   California Bus & Prof Code §§ 22575–79 (2003).

43   California Civ Code § 1798.83 (2003).

44   The 27 categories are: name and address; email address; age or date of birth; names of children; email or other addresses of children; number of children; age or gender of children; height; weight; race; religion; occupation; telephone number; education; political party affiliation; medical condition; drugs, therapies, or medical equipment used; kind of products purchased, leased, or rented by customer; real property purchased, leased, or rented; kind of service provided; social security number; bank account number; credit card number; debit card number; bank or investment account, debit card, or credit card balance; payment history; and information pertaining to the customer’s creditworthiness, assets, income, or liabilities. Cal Civ Code § 1798.83(e) (6)(A).

45   See Specht v Netscape Communications Corp., 306 F3d 17, 31–32 (2nd Cir 2002) (a link to the terms of service at the bottom of a screen allowing the download of software was insufficient notice, given that users might not scroll to the bottom of the screen); Hines v Overstock.com, Inc., 668 F Supp2d 362, 367 (EDNY 2009) (browsewrap terms at the bottom of the page did not provide sufficient notice that use of the site constituted acceptance of the terms, where the only notice of that fact was contained inside the terms themselves and the link to the terms was obscure). Compare Register.com, Inc. v Verio, Inc., 356 F3d 393, 401–03 (2nd Cir 2004) (although the user of online services was not given an explicit opportunity to accept terms of use relating to marketing use of data, the fact that the user was aware of the terms through multiple uses of the site was sufficient notice for continued use to constitute acceptance).

46   See, e.g., Swift v Zynga Game Network, Inc., 805 F Supp2d 904, 911–12 (ND Cal 2011).

47   See Fraley v Facebook, Inc., 830 F Supp2d 785, 805–06 (ND Cal 2011) (terms of service did not provide a basis to dismiss a claim that Facebook used user information in advertisements without permission, where plaintiffs alleged that notice of such use was added by Facebook after plaintiffs accepted the terms); compare Kirch v Embarq Mgt. Co., 2011 WL 3651359 at *8 (D Kan, Aug 19, 2011) (altered terms of service were held binding on users, where a clickwrap version of the terms accepted by plaintiffs warned users that the terms could be altered at will through posting of revised terms to the website).

48   Kirch, at *8.

49   442 F Supp2d 1070, 1077 (D Or 2006).

50   Ibid. at *5, 8 (noting that plaintiffs claimed that they did not read privacy policies as a matter of course, but finding consent binding nevertheless).

51   Low, at *11.

52   Michigan: Mich Pub Act No 478 (2012); California: Cal Labor Code § 980 (2012); Illinois: 820 ILCS 55/10(b)(1) (2012); Maryland: MD Code, Labor and Employment, § 3-712 (2012).

53   California: Cal Educ Code § 99121 (2012); Delaware: 14 Del C § 8103 (2012); New Jersey: 2012 NJ Sess Law Serv Ch 75 (ASSEMBLY 2879).

54   An example of this tension appears in laws in the United States that prohibit voters from displaying images of their own completed ballots, in order to protect the secrecy of the ballot and the integrity of the voting process. For a discussion of whether such laws are consistent with guarantees of freedom of expression, see Jeffrey P. Hermes, ‘Ballot Disclosure Laws: A First Amendment Anomaly’, Citizen Media Law Project Blog, http://www.citmedialaw.org/blog/2012/ballot-disclosure-laws-first-amendment-anomaly (November 2, 2012).

55   ‘Texting while Walking, Woman Falls into Fountain’, CBSNews.com, January 20, 2011, http://www.cbsnews.com/2100-500202_162-7265096.html.

56   ‘Fountain Lady: “Nobody Went to My Aid”’, ABC News, January 20, 2011, http://abcnews.go.com/GMA/video/fountain-lady-cries-nobody-went-to-my-aid-12712703.

57   The fountain incident echoes the Catsouras case, in which photographs of a deceased individual taken by the California Highway Patrol were published online and subsequently went viral. However, the court’s ruling in Catsouras included an extensive analysis of the unique privacy interests held by the family of a deceased person in the circumstances of her death. 181 Cal App 4th 856, 868–74. Finding privacy interests in the case of the fountain video would require an extension of those principles to merely embarrassing information in the hands of privately contracted security personnel.

58   Again, Section 230 of the Communications Decency Act would protect an online platform from liability for user comments in the United States, and might also protect individual users who merely republish others’ content. See 47 USC § 230(c)(1) (‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.’).

59   For examples of such lawsuits, see http://www.dmlp.org/threats/rakofsky-v-internet and http://www.dmlp.org/threats/saltsman-v-goddard.

60   For an example of a social solution to the issue of excessive exposure, consider the Wikipedia community’s determination not to include the real name of the subject of the well-known ‘Star Wars Kid’ meme on the relevant entry. See http://en.wikipedia.org/wiki/Talk:Star_Wars_Kid. For a technological approach, consider Twitter’s implementation of its native ‘retweet’ feature, in which an original publisher’s deletion of his or her ‘tweet’ results in the deletion of all ‘retweets’ created through the built-in feature. See ‘The Case of the Disappearing Retweets’, PCMag.com, December 16, 2011, http://www.pcmag.com/article2/0,2817,2397757,00.asp.