
CHAPTER FIVE

CURRENT PRIVACY CHALLENGES, DEVELOPING TECHNOLOGIES & FUTURE TRENDS

With the quickening pace of technological development, any attempt to divide challenges to privacy into concrete categories of present, imminent, and remote-but-approaching threats is by its very nature an exercise in artifice. Certain issues, however, are clearly more manifest than others. This chapter explores these ongoing and upcoming challenges and contemplates what lies ahead in the field of privacy law.

I.      CURRENT PRIVACY CHALLENGES

A.    IDENTITY THEFT

Identity theft occurs when someone poses as another person by using that person’s personal information without his or her permission. Although the term “identity theft” was coined relatively recently, the concept of impersonating an individual to realize undeserved gain has existed since ancient times, being reflected in such sources as the biblical account of Jacob impersonating his brother Esau to gain their dying father’s blessing. What has made identity theft an increasing problem is the ease with which it can now be committed on a massive scale due to technological changes. Identity theft once required such risky acts as physically stealing a limited number of paper records or actually entering a financial institution in person to complete a transaction with forged documents. Today it can be accomplished remotely and with minimal risk of apprehension. Complaints filed with the FTC concerning identity theft topped the list of consumer complaints for the thirteenth consecutive year, with complaints rising from 31,117 in 2000 to 369,132 in 2012. Furthermore, while identity theft is usually thought of as a crime perpetrated against individuals, it is also an increasing problem for corporations and other organizations.

In recent years, the difficulty of actually finding and prosecuting identity thieves (who are commonly located in a different country than their victims, judgment proof, or both) has led to litigation against the institutions from which the personal information used to commit identity theft was stolen. The central theory behind this litigation is one of shifting “externalities,” i.e., costs, to the party best able to bear them. Institutions, particularly those operating domestically in the United States, traditionally had little financial incentive to protect customer or employee information, since the theft of information had minimal impact on the institution from which it was actually stolen. It was the customer or employee who usually suffered the greatest injury, along with third parties deceived by the identity thief. The institutions that hold personal information, however, are generally the parties best able to protect the information in their custody. Consequently, litigation against institutions for failure to adequately protect personal information is arguably one of the most dynamic areas of privacy law to develop in recent years.

1. Causes of Action Relating to Identity Theft
a. Negligence

The most common type of action brought in litigation resulting from identity theft is a negligence claim. The tort of negligence requires the breach of a duty that actually and proximately causes an injury to the party to whom the duty was owed. Identity theft negligence claims typically are predicated on one or both of two duties. The first is a duty to take precautionary measures to protect the plaintiff’s personal information; the second is a duty to eliminate or correct any incorrect information that has already been attributed to the plaintiff. Courts have found that these duties may arise even in situations where the individual whose personal information is held by the institution is not a customer, employee, or member of another category of persons whom one might expect to be owed a duty.

Remsburg v. Docusearch, 816 A.2d 1001 (N.H. 2003), was a seminal case concerning negligence claims based on a failure to take adequate precautions in the handling of personal information. In Remsburg, a stalker employed a company called Docusearch to locate a woman with whom he had become obsessed. The stalker obtained several pieces of personal information concerning the woman from Docusearch, including her Social Security number and work address. Docusearch itself had obtained the information from other sources or from the woman herself under false pretenses. Shortly after receiving the information, the stalker drove to the woman’s place of employment and killed her.

The victim’s estate sued Docusearch on several theories of liability, including negligence. The New Hampshire Supreme Court found that, given the serious and pervasive threat of stalking and identity theft, “the risk of criminal misconduct is sufficiently foreseeable so that an investigator has a duty to exercise reasonable care in disclosing a third person’s personal information to a client.” Id. at 1008. A claim for negligence could be based on the unauthorized disclosure of another party’s personal information.

b. Privacy Torts

A second theory of liability analyzed in Remsburg was whether Docusearch’s conduct, and that of contractors in its employ, could constitute the tort of intrusion, i.e., an invasion of the victim’s privacy interests in such a manner as to be offensive to persons of ordinary sensibilities. See id. The New Hampshire Supreme Court concluded that a person could have a protectable privacy interest in a Social Security number, but that whether any particular use or distribution of the number without the permission or knowledge of its holder was a sufficiently egregious violation of the victim’s privacy interests to be actionable would be a question of fact. The court, however, rejected the argument that the plaintiff could have a claim based on the disclosure of the victim’s place of employment because, while individuals can reasonably expect their Social Security numbers to be kept private, places of employment are usually public in nature, and thus the victim could not reasonably assume that information was private.

An alternative privacy tort that has been pled with some success in the context of identity theft, even in the absence of specific evidence that identity theft has occurred, is public disclosure of private facts. In Shqeirat v. United States Airways Group, 515 F. Supp. 2d 984 (D. Minn. 2007), a plaintiff included a claim for disclosure in a lawsuit against the Minneapolis Metropolitan Airport Commission. Following an altercation at the Minneapolis airport, the Commission published a police report containing the plaintiff’s Social Security number on its official website. See id. at 998. The Commission did not dispute that it had made the plaintiff’s Social Security number publicly available, but instead moved to dismiss on the grounds that the plaintiff had not sufficiently pled an injury. The United States District Court for the District of Minnesota, applying Minnesota tort law, denied the motion, finding that the emotional distress resulting from the threat of identity theft could constitute a sufficient injury to support a claim for disclosure.

c. Breach of Contract

As discussed in Chapter 3, privacy policies are becoming more prevalent and are even mandatory in a number of states. These privacy policies are intended to inform the consumer of how the company will use and store any personal information that it collects during the business relationship. In some instances, however, consumers have relied on these privacy policies as the basis for identity theft suits against companies, arguing that the policies create a contract between the consumer and the company. Under a breach of contract claim, the consumer must prove a legal obligation owed by the company defendant to the consumer plaintiff, a breach of that obligation, and an injury resulting from the breach.

In Kuhn v. Capital One Fin. Corp., 855 N.E.2d 790 (Mass. App. Ct. 2006), a plaintiff brought suit after a hacker compromised certain financial account information, including Social Security numbers, which resulted in fraudulent charges to the plaintiff’s account. The plaintiff alleged a breach of contract claim stemming from the “Privacy Notice and Customer Agreement” that plaintiff had received from Capital One. The court found that this was enough to establish a contract between the parties.

Alternatively, plaintiffs have had limited success in pleading a cause of action for breach of implied contract. In Anderson v. Hannaford Bros. Co., 659 F.3d 151 (1st Cir. 2011), a plaintiff brought suit alleging that Hannaford breached an implied contract between itself and its customers. The plaintiff alleged that there was an implied contract with Hannaford that it would not allow unauthorized access to its customers’ credit card data. The court agreed with the plaintiff, finding, at the pleading stage at least, that under Maine law it was reasonable to conclude that when customers provide a merchant with their credit card numbers they are providing that information to the merchant only, and the merchant has an implicit duty to safeguard that information from unauthorized third-party access.

d. Breach of Fiduciary Duty

Several cases alleging claims premised on identity theft have included the claim of breach of fiduciary duty. A fiduciary duty arises in a transaction when one party has placed trust and confidence in another’s judgment or advice. A fiduciary duty has been commonly found to exist between an attorney and client, an employer and employee, and a trustee and beneficiary. In the identity theft context, this has proven to be a difficult theory under which to succeed. See Anderson, 659 F.3d at 157–58; Meadows v. Hartford Life Ins. Co., 492 F.3d 634 (5th Cir. 2007); but see Jones v. Commerce Bank, N.A., No. 06 Civ. 835 (HB), 2007 WL 672091, at *3 (S.D.N.Y. Mar. 6, 2007) (finding an implied fiduciary duty may exist).

e. Infliction of Emotional Distress

Several courts have entertained claims for infliction of emotional distress as a result of identity theft, but there are as yet no reported instances of a plaintiff having successfully brought such a claim.1 The difficulty of establishing the elements of intentional infliction of emotional distress in an identity theft situation is the reason these claims have failed so far. In most jurisdictions, a plaintiff must prove that the defendant acted with malice, ill will, or in an extremely outrageous manner and that an actual physical injury, or apprehension of physical harm, resulted. Because identity theft is usually not accompanied by these facts, the claims are typically struck on a motion to dismiss or for summary judgment.

f. Constitutional Rights

In the specific context of government handling of personal information, courts have recognized that the threat of identity theft occasioned by governmental infringement of privacy rights can support a claim under 42 U.S.C. § 1983. In Tomblin v. Trevino, No. SA01CA1160–OG, 2003 WL 24125145 (W.D. Tex. Feb. 19, 2003), a plaintiff argued that a local police force’s practice of collecting Social Security numbers during routine traffic stops was a Fourth Amendment violation. Noting that the use of Social Security numbers was regulated by statute and that dissemination of an individual’s Social Security number posed a significant threat of identity theft, the court concluded that an individual could have a reasonable expectation of privacy in the number. A government actor would require a justifiable basis for collecting information, given the Fourth Amendment privacy interest attached to a Social Security number. Based on testimony from the local police chief and the officer involved in the traffic stop, the court concluded that there was not a sufficiently strong government interest to justify collecting the information in the circumstance of an ordinary traffic stop.

2. Remedies for Claims Related to Identity Theft
a. Damages

Although some courts have permitted plaintiffs to proceed with claims arising from identity theft even in the absence of allegations of specific damages resulting from the theft itself, see, e.g., Shqeirat v. United States Airways Group, 515 F. Supp. 2d 984, 998 (D. Minn. 2007) (permitting disclosure claim based on injury in the form of fear of identity theft), the vast majority of claims are dismissed or disposed of by summary judgment due to the plaintiffs’ inability to demonstrate an actual injury. See Pisciotta v. Old Nat’l Bancorp, 499 F.3d 629, 635, 639 (7th Cir. 2007) (discussing lack of precedent for finding a mere risk of identity theft to be a compensable injury); Reilly v. Ceridian Corp., 664 F.3d 38 (3d Cir. 2011) (rejecting claim for time and money spent on credit monitoring to protect against an increased risk of identity theft). For a successful claim, a plaintiff must prove both that the breach of duty by the defendant resulted in an exposure of personal information and that the exposed information was used in a manner that caused an actual injury. Alternatively, but for the same reasons, many such claims have been disposed of for failure to allege an injury-in-fact sufficient to confer Article III standing.2 This effectively means that most large-scale incidents in which the security of personal information is compromised will not be compensable, because it is quite rare that any specific injury can be traced to the breach.

b. Restitution

Actual damages resulting from identity theft are difficult to prove and, even if demonstrated, often do not fully compensate victims for costs incurred in correcting the damage to their credit and financial standing. A number of states, therefore, have enacted statutes that require identity thieves to pay restitution to their victims. “Restitution” in this context refers to costs incurred by identity theft victims that are the consequence of the theft, but that do not result directly from the theft. In Delaware, for example, identity thieves may be ordered to pay a victim’s lost wages and reasonable attorney fees incurred as a result of the theft.3 Under Maine law, this may include recovery for costs of acquiring identity theft insurance and fees related to replacing stolen financial account information.4 Maryland also allows reasonable attorney fees, costs for clearing the victim’s credit history, and costs incurred to satisfy debts, judgments, or liens against the victim that resulted from the defendant’s actions.5

B.    PRETEXTING

“Pretexting” is the act of creating and utilizing an invented scenario (the pretext) to persuade a target to give information or perform an action. Pretexting is typically more than a simple lie; it usually involves prior research and the use of pieces of known information to establish legitimacy in the mind of the target, so as to increase the likelihood that he or she will reveal the sought-after information. Like identity theft, pretexting is not in and of itself a new phenomenon. Mythology is replete with tales of characters gaining information on the weaknesses of others by posing as an innocent third party. Technology has simply made pretexting far more feasible because of the ease with which it can be accomplished remotely.

Pretexting has been addressed through two major pieces of legislation. Under the Gramm-Leach-Bliley Act (GLB) discussed in Chapter 2,6 it is illegal to use forged or stolen documents or false statements to obtain customer information from a financial institution or directly from a customer of a financial institution. The GLB also criminalizes the same conduct by intermediaries. Congress recently passed the Telephone Records and Privacy Protection Act (TRPPA) in reaction to the revelation that the management of Hewlett-Packard had employed private investigators who used pretexting to obtain the telephone records of board members and members of the media as part of efforts by Hewlett-Packard’s management to investigate leaks of internal discussions.7 The TRPPA specifically criminalizes making false or fraudulent statements to a telephone service provider or providing false or fraudulent documents to a telephone service provider in an attempt to gain a third party’s telephone records.


C.    PHISHING

“Phishing” is a broad term covering any sort of internet fraud scheme designed to trick the recipient into revealing credit card account information, passwords, Social Security numbers, and other personal information to individuals who intend to use them for fraudulent purposes. Phishing typically involves communications (particularly e-mails) made to appear as if they have come from a reputable source, whether a government agency, such as the IRS, or a commercial source, like the victim’s bank. The communications often instruct the recipient to verify or update account information by requesting a reply to the e-mail with updated information, or by providing the recipient with a link to a website where the new information may be entered. Regardless of type, phishing schemes are typically dealt with via traditional laws governing fraud or criminal impersonation. The most commonly encountered types of phishing schemes include:

False e-mail messages: The message appears to be from a company with which the target does business, warning that the recipient must verify account information and that, if the information is not provided, the account will be suspended. Customers of eBay and PayPal are two of the most frequently targeted groups for this type of phishing scheme, but the approach can be applied to the customers of any business. Although many companies now specifically warn customers that they do not request personal information via e-mail, this remains the most popular variety of phishing scheme, likely because it is the simplest to execute. Many companies now have phishing watch groups that monitor this area and, upon discovering a scheme, immediately notify the ISP where the e-mail originated, as well as any appropriate law enforcement agencies.

False escrow sites: This occurs when items are put up for sale at a legitimate online auction to lure the target into making payments to a fake escrow site rather than through the auction provider’s regular payment channel. This scheme is more time consuming and elaborate than a simple e-mail, but it has the potential to collect more information, particularly if the phisher actually sends the merchandise that was ordered so that the target does not associate a later instance of fraud with the transaction where the personal information was collected. eBay has tried to combat this type of scheme by providing its members with written policies that prohibit transactions from being completed outside of eBay.8

False charities: The fraudster in this type of phishing scheme poses as a charity and asks for direct monetary donations. These schemes flourish most commonly in the wake of major disasters and around holidays, when people are likely to be in a particularly generous spirit. Although the scheme may seem the most foolproof, since targets usually do not expect to receive anything in return, genuine charities vigorously search for evidence of such activities and are typically better able than individual fraud victims to pressure law enforcement agencies into investigating suspected charity fraud. The substantially higher risk of law enforcement involvement keeps this type of phishing scheme less popular with fraudsters.

False websites: This scheme uses websites made to look similar to legitimate sites or having web addresses that are sufficiently similar to popular sites that they may be accidentally reached by mistyping an address. In particularly extreme cases, phishers have hacked the websites of legitimate enterprises and reconfigured them so that visitors are redirected to the phishers’ false websites. In any of these variants, the websites will typically contain false login screens designed to convince visitors to type in their usernames and passwords. The website may also attempt to download spyware onto the target’s computer, whether covertly or by representing that the target needs to download a particular software application to use the site. This type of phishing scheme is often paired with the use of false e-mails to drive traffic to the site, and can include a “man in the middle” attack, which intercepts login information entered into a legitimate website after passing through the phisher’s false site. The “man in the middle” attack can also take the form of impersonating legitimate wireless access points. In this form of attack, the bogus access point is commonly set up in a “hotspot,” such as a popular coffee shop or café, in order to intercept login information and other sensitive data.
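The mistyped-address variant described above can be illustrated with a minimal sketch of the kind of heuristic defenders use to flag lookalike domains: measure the edit distance between a candidate domain and a list of known brands. The brand list and threshold here are illustrative assumptions, not any real product’s logic.

```python
# Sketch: flag domains that are near-misses of known brand domains,
# one heuristic against the typosquatting described above.
# KNOWN_BRANDS and max_distance are hypothetical, for illustration only.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

KNOWN_BRANDS = ["ebay.com", "paypal.com"]  # illustrative list

def looks_like_typosquat(domain: str, max_distance: int = 2) -> bool:
    """A domain within a small edit distance of a brand domain, but not
    the brand itself, is treated as suspicious."""
    return any(0 < edit_distance(domain, brand) <= max_distance
               for brand in KNOWN_BRANDS)
```

A domain such as “paypa1.com” (digit 1 for the letter l) sits one edit away from the genuine name and would be flagged, while the genuine domain and unrelated domains would not.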

D.    RADIO FREQUENCY INFORMATION DEVICE

1. Technological Overview

A Radio Frequency Information Device (RFID) is a system that identifies objects, collects data, and transmits information about the object through a “tag.” RFIDs consist of three components: a chip, an antenna, and a reader. The chip is typically programmed with relatively simple information, including a specific alphanumeric string of characters. The antenna transmits information from the chip to the reader via radio waves. When the reader comes into close enough proximity to the chip, signals transfer between the two. RFIDs are either active or passive. Passive RFIDs stay inactive until a reader scans the tag. Active RFIDs have built-in energy sources, allowing active transmission of information without the need for a reader to initiate the action. The latest generations of these devices, called “extended capability” RFIDs, are able to be read at longer distances (up to 500 meters), to operate in more extreme environmental conditions, and to be paired with microprocessors on a single chip, enabling much more comprehensive monitoring of the object to which the RFID is attached.

The principal purpose of RFIDs is to allow a user to track objects that have been “chipped” or “tagged.” In a typical application, the user encodes each RFID chip with an alphanumeric string and then, in a separate database, creates a description of the object that corresponds to that particular string. Each time the “tagged” object is scanned, the string is relayed from the reader to the database, allowing the user to determine the location of the object at the time of the scan. Depending on the degree of detail in the underlying database, and on how widely distributed the user’s RFID readers are, the user can potentially form a very detailed overview of the object’s movements.
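The tag-to-database scheme described above can be sketched in a few lines: the tag carries only an opaque string, and all meaning lives in the user’s separate database, which also accumulates a movement history as readers report scans. The table contents and reader locations here are hypothetical.

```python
# Sketch of the lookup scheme described above: each tag carries only an
# opaque alphanumeric string; the database supplies the object description
# and accumulates sightings. All names and entries are illustrative.
from datetime import datetime, timezone

# Separate database mapping each tag's string to a description of the object.
tag_db = {
    "A1B2C3": "pallet of widgets",
    "D4E5F6": "employee badge #17",
}

sightings = []  # (tag string, reader location, timestamp)

def on_scan(tag_id, reader_location):
    """Called when a reader relays a tag's string: look up the object's
    description and, if the tag is known, log the sighting."""
    description = tag_db.get(tag_id)
    if description is not None:
        sightings.append((tag_id, reader_location,
                          datetime.now(timezone.utc)))
    return description

# A distributed set of readers yields a movement history for one object:
on_scan("A1B2C3", "loading dock")
on_scan("A1B2C3", "warehouse aisle 4")
history = [loc for tag, loc, _ in sightings if tag == "A1B2C3"]
```

The more readers that report into `sightings`, the more complete the reconstructed movement history becomes, which is precisely the tracking capability, and the privacy concern, discussed below.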

2. Privacy Implications

RFIDs, in and of themselves, pose no privacy issues. An RFID chip cannot record information about its surroundings, and chips typically are not encoded with information that is of use without access to the accompanying database describing the object to which each chip is attached. The principal privacy concern raised by RFIDs is that they can potentially allow users to track the movement of individuals by tracking the movement of objects in the individuals’ possession. This concern has been raised particularly in the context of RFID chips embedded in government identification cards, passports, license plates, and similar items. Given a sufficiently distributed network of RFID readers, it would be feasible to monitor the location of the individual associated with an identification card, assuming the card remains in the owner’s possession for a substantial part of the time. Currently, this type of network is speculative, although many nations are introducing RFIDs as part of passport control stations at ports of entry.

Currently, no federal law directly regulates the use of RFIDs. However, Arkansas, California, Michigan, Minnesota, Nevada, New Hampshire, North Dakota, Oklahoma, Rhode Island, Texas, Vermont, Virginia, Washington, and Wisconsin have passed some form of state legislation restricting the use of RFIDs.9 To avoid further regulation, a number of industries connected to the manufacture and use of RFIDs have formed EPCglobal, an organization intended to promote the development of industry standards for self-regulation on the use of RFIDs in consumer products. As part of this effort, EPCglobal has developed guidelines to help deal with RFID privacy concerns. The guidelines call for products containing RFIDs to be labeled, the RFID tags to be capable of being disabled or discarded, information about RFIDs to be readily available to the consumer, and for any personal information collected in conjunction with the use of RFIDs to be handled in a manner complying with any applicable laws.10

The EU is also currently studying RFID technology, led by the members of the Coordinating European Efforts for Promoting the European RFID Value Chain (CE RFID).11 The CE RFID members focus on the research and application of RFID technology throughout Europe. Much like EPCglobal, CE RFID has issued a series of guidelines to assist in the implementation of RFID technology.12

In response to the growth of the RFID market in Europe, the European Commission has to date passed two directives and one decision that touch on RFID technology.13 Of these, the first directive, the Radio and Telecommunications Terminal Equipment (R&TTE) Directive, remains the most important. Passed in 1999, the R&TTE Directive mandates certain standards for the application of RFID tags. The R&TTE Directive applies to an RFID tag “at the stage of placing on the market.” This is important because the Directive applies only to the tag and not to the product in which it is embedded, meaning the ultimate buyer of the product is not required to be informed of the RFID tag.14


E.    BIOTECHNOLOGY

1. Technological Overview

Biotechnology has made historic leaps in the past several decades, yielding both promising benefits and unanswered questions. It seems only a few years ago that scientists undertook to map the human genome; now scientists are looking to manipulate the human genome, among other things. New technologies in biochemistry, genetics, and molecular biology have deepened our understanding of the underlying causes of disease, enabled the discovery of new pharmaceuticals that specifically target disease, and offered hope of detecting and preventing disease even before patients become symptomatic.

These new innovations, and the research methods giving rise to them, have led to several diverse privacy concerns. First, much of the biotech research currently conducted requires human tissues, cells, or embryonic materials. This fact, paired with the enormous financial stakes involved, has led to questions over the privacy and ownership rights of tissue donors. Second, advances in biotechnology have enabled the advent of personalized medicine, which seeks to diagnose and treat patients based on their particular genetic make-ups. This individualized treatment raises concerns over how personal genetic information will be gathered, stored, and transmitted. Third, with the increasing availability and specificity of genetic disease screening and diagnosis, privacy issues arise over potential discrimination by employers and insurance companies based on genetic predispositions or pre-symptomatic genetic diagnosis. Fourth, with the advent of new methods for genetic disease detection, government surveillance and reporting may increasingly rely on individualized genetic data, rather than on group statistical data, resulting in still other concerns over privacy.

2. Human Tissues and Cells in Research and Commerce

Biotech research and development currently relies heavily on human tissues, cells, and embryonic materials. As a result, there is an increasingly lucrative market in human biological materials. Both of these phenomena implicate privacy concerns.

a. Rights of Tissue Donors

Ultimately, as one commentator noted, “our bodies are the source of all we feel and think” and our treatment of them “must incorporate the dignity and sanctity of the body as the source of all we experience.”15 Indeed, one court noted, in considering a patient’s right to refuse medical treatment, that “[e]very human being of adult years and sound mind has a right to determine what shall be done with his own body….”16 As a result of this intimate connection, the donors of biological materials have a special interest in how their tissues are used. Privacy concerns are implicated by various actions of third parties in this area, including: (1) unauthorized use of biological materials in research; (2) unauthorized sale of biological materials after donation; and (3) profits inuring to third parties rather than to donors, but based on donor samples.

The California Supreme Court, in Moore v. Regents of the University of California, considered the legal rights of tissue donors when researchers made unauthorized use of their biological materials. 793 P.2d 479 (Cal. 1990). John Moore (Moore) had been treated for many years at the University of California at Los Angeles Medical Center (UCLA Medical Center) for a unique type of leukemia. He later learned that his doctor had developed and patented an extremely valuable cell line by utilizing his cells—without his knowledge or consent. Moore subsequently sued for damages alleging conversion and basing “ownership” of his biological materials on principles of privacy rights.

The Moore court concluded that Moore’s doctor breached his fiduciary duties by failing to disclose his financial and research interests prior to obtaining Moore’s informed consent. The court, however, declined to find the Regents, researchers, and commercial genetics corporations liable on Moore’s theory of conversion because of its stifling effect on research. Although the court recognized the lower court’s concern that “[a] patient must have the ultimate power to control what becomes of his or her tissues [and] [t]o hold otherwise would open the door to a massive invasion of human privacy and dignity in the name of medical progress,” the court declined to find a donor property interest in the materials and held that privacy concerns could be adequately addressed by doctor-patient fiduciary duties and informed consent. Id. at 491.

Subsequent courts have also declined to find property ownership rights in body parts, following Moore.17 On the other hand, courts have shown a willingness to find property rights in regenerative biological materials, including blood, plasma, and sperm cells—even allowing sale of these materials.18 Commentators have proposed alternate legal theories to protect the privacy and personal interests of donors, including heightened regulation, trust law, and privacy law principles.

b. Special Considerations Regarding Human Embryos

Courts have dealt with reproductive tissues, especially embryos, differently from other biological materials. The Supreme Court’s decision in Roe v. Wade provides the primary basis for this differential treatment. In Roe v. Wade, the Supreme Court recognized a right to procreate, or not, inherent in the right of privacy protected by the Fourteenth Amendment. 410 U.S. 113, 153 (1973). Proponents of special treatment for reproductive tissues rely on the privacy rights of the gamete providers, or “parents,” for support, but courts remain split.

In Davis v. Davis, the Tennessee Supreme Court considered whether the husband or the wife was entitled to “custody” of preembryos that were cryogenically preserved during the marriage. 842 S.W.2d 588 (Tenn. 1992). The court held that “preembryos are not, strictly speaking, either ‘persons’ or ‘property,’ but occupy an interim category that entitles them to special respect because of their potential for human life.” Id. at 597. The court further held that both gamete providers had equal decision-making authority regarding the disposition of the preembryos and proceeded to balance the individual privacy interests involved, specifically the rights to procreation and to avoid procreation. The wife sought only to donate the preembryos and not to become impregnated and the husband was vehemently opposed to the donation. Thus, the court held that the husband’s interest prevailed—at least absent prior agreement to the contrary.

In Kass v. Kass, the Court of Appeals of New York considered a nearly identical issue, but held that the 351wife’s right of privacy was not implicated prior to implantation. 696 N.E.2d 174, 179 (N.Y. 1998). The court declined to decide whether the pre-zygotes were entitled to “special respect” as recognized in Davis, and, rather than weighing the privacy interests of the gamete providers, held that the consent agreement entered into with the fertility clinic controlled the disposition of the pre-zygotes.

c. Special Considerations Regarding Human Reproductive Tissues

Quite distinct privacy concerns arise in the area of sperm and egg cell, or gamete, donation. Donors and recipients are often complete strangers, but the private information of the donor is highly valuable to the recipient. Worldwide, the sale of gametes has become increasingly lucrative, and the private donor information linked to the gametes drives the value of individual gametes.19 This donor information traditionally includes: (1) a complete medical and genetic history; (2) general information regarding age, weight, height, educational background, sexual activity, and drug usage; and (3) personal information regarding religion, hobbies, talents, and physical features (including a photograph)—even going so far as to disclose “what would send [one] … to an ATM at three in the morning.”20 Most, if not all, of this information is available for perusal by potential 352recipients who subscribe to a gamete broker’s services.

The international sale of gametes is virtually unregulated, and U.S. regulations focus primarily on gamete collection and screening by sperm and egg banks, neglecting the collection, transfer, and protection of personal donor information.21 Because brokers who deal in gametes do not fall neatly under the HIPAA definition of “covered entities,” their duties regarding the protection of donor information are uncertain. One commentator proposes a regulatory scheme for protecting donor information administered by the Food and Drug Administration (FDA), which currently regulates sperm and egg banks.

3. Personalized Medicine

Personalized Medicine (PM) refers to the individualized diagnosis and treatment of disease based on one’s particular genetic makeup.22 PM thus involves both genomics, the mapping and study of individual genotypes, and pharmacogenomics, the tailoring of pharmaceuticals to particular genotypes. PM appears to be the wave of the future. Dr. Elias Zerhouni, Director of the National Institutes of Health (NIH), stated in congressional testimony that “[w]e can now clearly envision an era when the treatment paradigm of medicine will increasingly 353become more predictive, personalized and preemptive.”23

Scientists are increasingly aware that the particular genetic makeup of patients and their diseases is directly related to whether and how well they will respond to treatment. The genetic profiling of breast cancer patients, for example, reveals that between five and seven percent of women are unable to metabolize the cancer drug tamoxifen.24 Genetic testing is thus necessary before the drug is prescribed. Genetic disease profiling also reveals that only tumors exhibiting an over-expression of the HER2 gene are responsive to the drug trastuzumab.25

PM represents a powerful tool for the diagnosis, prevention, and treatment of disease, but commentators raise major privacy concerns. There are questions about the regulatory oversight of genetic testing. Currently, any laboratory may offer genetic testing without government approval26 and, although the FDA recently proposed a rule for 354expanding its approval requirements, the proposal was met with considerable opposition and subsequently revised in 2007.27 Commentators also raise questions as to how individual genetic information will be collected, maintained, and transmitted.

4. Government Surveillance and Reporting

Privacy law not only protects the ability to make personal choices, the aspect of privacy protected by the Supreme Court in Roe v. Wade, but must also protect the very core of each human’s existence. As genetic knowledge expands in breadth and depth, it approaches the most basic elements of what makes each of us who we are. Increased governmental surveillance of our genetic information is especially alarming.

The government has increasingly relied on individually-identified health information to monitor public health, whereas traditionally only group statistical data was collected. This tendency to rely more and more on individualized health data is even more alarming when the data collected is genetic.

In 1977, the Supreme Court in Whalen v. Roe outlined the basic principles for balancing privacy 355with the police power when maintaining individual public health data. This decision indicates that the government will be granted significant leeway in fashioning reporting schemes that promote or protect public health—even when reporting may involve personal genetic information. One commentator notes, however, that surveillance programs should balance public health concerns with individual privacy and confidentiality rights.28

II.      DEVELOPING TECHNOLOGIES

Beyond making traditional challenges to privacy more prevalent, technology is also creating entirely new challenges. Justice Brandeis foresaw this possibility over eighty years ago in his dissent from Olmstead v. United States:

The progress of science in furnishing the government with means of espionage is not likely to stop with wiretapping. Ways may someday be developed by which the government, without removing papers from secret drawers, can reproduce them in court, and by which it will be enabled to expose to a jury the most intimate occurrences of the home. Advances in the psychic and related sciences may bring means of exploring unexpressed beliefs, thoughts and emotions.

356

277 U.S. 438, 474 (1928) (Brandeis, J., dissenting). While Brandeis was warning solely against the actions of law enforcement agencies, the same forces that drive the continuously declining cost and increasing ease of use of consumer goods also apply to technologies that can impact individual privacy. While government actors will likely always remain the largest single threat to privacy interests, technology has “democratized” surveillance in ways never before seen, a trend that will likely continue to accelerate in the future.

A.    ONLINE TRACKING

Somebody is watching your Internet activity, whether you like it or not. Online tracking is a concept that many of us know about, though few of us care to examine its ramifications. In essence, online tracking means that each time users access the Internet, they leave a trail of information that is used, over time, to amass data about them. This data is then used by businesses and governments to create a “digital fingerprint.” This “fingerprint” consists of any sort of information that can be considered “personally identifiable,” including IP addresses, user names, credit card numbers, physical addresses, etc. The data in this fingerprint allows businesses to detect spending habits for targeted advertising and governments to know where you have been and even predict where you may go. Over time these digital fingerprints, which can be created consciously by registering an account with a website and unconsciously by simply performing a web search, 357are combined to construct detailed profiles of users’ activities.

1. Technological Overview

Each time a user connects to the Internet, a series of events is set in motion allowing profiling data to be collected. Most methods of connecting to the Internet rely on a process that assigns the user an Internet Protocol (IP) address. From the IP address, an online profiler is usually able to obtain the domain name associated with the user’s Internet service provider (ISP)—the organization providing the Internet connection. The web browser used also provides significant profiling data to websites for collection purposes. A web browser’s “header” often contains not only the user’s IP address, but also the time, pages, and images downloaded, the referring website, form data entered by the user during that browsing session, and any “cookie” data relevant to sites previously visited. Cookie files are stored on a user’s computer to collect website-specific information, e.g., registration information, user preferences, sites previously visited, customer codes, etc. Technology inherent to, and embedded in, a web browser also facilitates profile data collection. All of this information can theoretically be concealed or otherwise screened by users, but in practice, most users do not take such precautions.
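The header and cookie data described above can be illustrated with a short sketch. This is a hypothetical example, not the code of any actual profiler: the header values, IP address, and field names below are all invented, but they mirror the kinds of data a web server can log on every request.

```python
# Hypothetical request data of the kind a server sees on every page load.
request_headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Referer": "https://example-search.test/results?q=running+shoes",
    "Cookie": "session_id=abc123; last_visit=2013-10-01",
}
client_ip = "203.0.113.42"  # address assigned by the user's ISP

def build_profile_entry(ip, headers):
    """Assemble one observation of a user's 'digital fingerprint'."""
    # Parse the Cookie header into name/value pairs.
    cookies = dict(
        pair.strip().split("=", 1)
        for pair in headers.get("Cookie", "").split(";")
        if pair.strip()
    )
    return {
        "ip": ip,
        "browser": headers.get("User-Agent"),
        "came_from": headers.get("Referer"),  # the referring website
        "cookies": cookies,
    }

entry = build_profile_entry(client_ip, request_headers)
```

A profiler logging such entries across many requests, and correlating them by IP address or cookie value, ends up with exactly the kind of detailed activity record the text describes.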

Any data that a user actively and voluntarily provides to a website is further fodder for profiling activity. Web-based forms, registration, and input allow data to be directly transferred to the 358sponsoring website. Responses to e-mail marketing and advertising campaigns are tracked, not only to gauge the campaign’s success, but also to identify respondents and track their consumption preferences.

2. Privacy Implications

At its most innocuous level, online tracking is a benefit for Internet users. It facilitates targeted advertisements for products that the user is more likely to be interested in, tailors results from search engines to highlight results that fit a user’s past preferences, and otherwise enhances a user’s interaction with websites. Yet online tracking can also be used to learn information that users may generally prefer to keep private, including political or religious views, sexual interests, or medical conditions. Most parties engaged in online tracking do not attempt to be selective in the information they gather about users. Indeed, being selective would in large part defeat the purpose of online tracking: attempting to extrapolate the user’s interests from many disparate pieces of information.

The principal objection to online tracking is the seemingly clandestine nature of the data collection. Although it is now common knowledge that it is possible to track users’ online activities, users are rarely aware of which parties are tracking their activities, how extensive or limited that tracking may be, and to what use the collected information will be put. Compounding this problem further is that many websites, particularly commercial sites, 359are now so heavily dependent on collecting user information that users who have made efforts to anonymize their browsing habits may find the sites unusable. Even sophisticated users often find that they have to compromise on the degree of privacy protection they employ while web surfing.

Attempts to find legal solutions to this issue have been disjointed, but the possibility of some sort of regulation has motivated a number of businesses with strong online presences to form the Network Advertising Initiative (NAI) to push self-regulatory solutions to avoid more rigid government oversight. The NAI describes itself as a “self-regulatory association … [which] maintain[s] and enforce[s] high standards for data collection and use for online advertising purposes. Our organization also educates and empowers consumers to make meaningful choices about their experience with online advertising ….”29 Central to the NAI’s self-regulatory scheme is consumer education and choice. Online advertisers who adhere to the NAI principles through the NAI’s Self-Regulatory Code of Conduct (NAI Code), must agree to ten requirements:

[1]     Transparency: educate consumers about behavioral advertising and choices available to them;

360

[2]     Notice: post notice on its website that describes its data collection, transfer, and use practices;

[3]     Choice: provide consumers with an “opt-out” mechanism;

[4]     Use Limitations: use collected information only for marketing purposes;

[5]     Transfer & Service Restrictions: contractually require compliance with the NAI Code for any third parties who have access to information collected;

[6]     Access: provide consumers access to any information collected;

[7]     Reliable Sources: ensure they are obtaining data from reliable sources;

[8]     Security: provide reasonable security for any data collected;

[9]     Data Retention: retain data only for as long as needed to provide the service; and

[10]     Applicable Law: abide by all laws applicable to their business.30

The NAI principles have been recognized by the FTC as a “sound baseline for … self-regulatory efforts;” however, the FTC has also warned that self-regulation to date has been “too lax” and this 361attempt at self-regulation could be the last if it falls short of “effectively protect[ing] consumers’ privacy.”31

One new private resource for consumers interested in their online privacy has recently emerged, and the results are not good. The website, “Terms of Service; Didn’t Read,” reviews the privacy policies of popular websites and online services and grades them based on how they gather personal data, how they use the personal data, and with whom they share it. Some of the largest and most frequented websites have received negative reviews, including Amazon, Apple, Facebook, and Google.32 It remains to be seen whether these companies will take notice and attempt to change their policies based on this new service.

Seemingly taking direction from the FTC and the lack of response from these companies on their own initiative, the courts and states are beginning to step in. Ruling against Google’s motion to dismiss, a federal judge in California rejected Google’s argument that Gmail users have no reasonable expectation of privacy in their e-mail.33 This 362decision will likely be challenged by Google, but for now, it shows that courts are stepping in to curtail policies that may be considered abusive. The state of California is also getting in on the action. On September 27, 2013, California amended its Online Privacy Protection Act (CalOPPA) to require website operators to disclose how they respond to a user’s “do-not-track” preferences and whether third parties may collect PII through their websites.34 Although CalOPPA only applies to collecting PII from California residents, it should have an impact on all Internet users unless companies choose to create a separate policy for California residents only.

The flipside of this issue is that these same Internet companies are moving to limit the federal government’s access to and spying on the data they have collected from consumers. The same giants of the Internet—AOL, Apple, Facebook, Google, Microsoft, and Yahoo—accused of tracking and prying into their users’ online activity are now banding together to support a bill to prevent the collection of certain PII by the federal government.35 It will be interesting to see if this concerted action by these companies turns out to be the spark that begins to 363change their own practice of tracking and collecting PII.

B.    BIG DATA

We are an online society, and that means information is constantly being transmitted. “Big data” is a term characterized by the volume, variety, and velocity of this data. Not long ago, this data was too big and moved too fast to fit within the structure of conventional databases. Improvements and changes in the way data is handled and processed, most notably with the advent of cloud computing, have allowed big data to be managed and analyzed, or “mined.” The concept of data mining has existed for decades, but was primarily limited to searching for patterns in one database at a time and then cross-referencing results. As a result of these improvements in computer networking and processing power, however, it has become increasingly feasible, particularly for large Internet companies, and even government actors who can compel access to otherwise proprietary databases by law, to consolidate and scrutinize personal information from numerous different sources, allowing a previously unrivaled degree of analysis.
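The consolidation step described above can be sketched in a few lines. The records, field names, and e-mail key below are invented for illustration; real operations run at vastly larger scale, but the principle of joining separate sources on a common identifier is the same.

```python
# Hypothetical records about the same people, held by two separate sources.
purchases = [
    {"email": "pat@example.test", "item": "running shoes"},
    {"email": "sam@example.test", "item": "cookbook"},
]
social_profiles = [
    {"email": "pat@example.test", "interests": ["marathons", "nutrition"]},
]

def consolidate(*sources, key="email"):
    """Merge records from any number of sources on a shared identifier."""
    merged = {}
    for source in sources:
        for record in source:
            # Each new source layers more fields onto the same person's profile.
            merged.setdefault(record[key], {}).update(record)
    return merged

profiles = consolidate(purchases, social_profiles)
# profiles["pat@example.test"] now combines shopping and social-network data.
```

Once profiles from many sources are merged this way, inferences become possible that no single source could support on its own, which is precisely what makes consolidated databases so much more revealing than their parts.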

1. Private Use

As alluded to in the earlier discussion of online tracking, the amount of information that users leave behind while doing seemingly innocuous 364tasks, and the corresponding use of that information by private entities, is surprising. Consider the following example. You open your favorite search engine and type “best prenatal vitamins” into the search bar. Instantly, thousands of search results are returned, including retailers that carry prenatal vitamins and articles and blogs written about the desired content of good prenatal vitamins. What you may not realize is that this information is now being collected by your Internet search provider and used to target its advertisements to you. So the next time you open your web e-mail account, you begin to notice advertisements for certain prenatal vitamins and other baby products. Now take this example one step further and assume that you are a parent in a household that shares a computer among members of your family. The next time you go online to enter a search term or check the inbox of your web e-mail account, you see advertisements for prenatal vitamins and baby products. You, as a responsible and prudent parent, begin to wonder why advertisements for baby products are popping up, because you and your spouse are not expecting another child. Perhaps you think nothing of it at first, but the advertisements continue over the next several weeks until finally you really start to question why advertisements for baby products keep appearing. It then hits you: “Was my daughter searching for prenatal vitamins? Is she pregnant?” You later find out that she is pregnant and has been using the computer to research prenatal vitamins and other baby products.

This illustration demonstrates the type of information and analysis that can be extracted from 365a simple Internet search, but the same can occur with something as simple as the loyalty card you carry for your favorite grocery store. Companies use this type of collection and analysis of information to their advantage, and it is a huge business. Take, for example, Google, one of the largest Internet companies in the world. Google’s revenue in 2012 exceeded $50 billion, of which greater than 90 percent was attributable to advertising. Such large sums have many companies wanting a piece of the pie, leading them to collect as much information as they can in the hope of monetizing it. This desire for revenue, however, gives companies an incentive to ignore privacy principles, which could lead to real risks for consumers.

The latest installment of advertising based on information collected from consumers is called Targeted Real-Time Advertising. This type of advertising is targeted advertising taken to the extreme. Rather than waiting for the information to be collected, analyzed, and presented to you on a future visit, Targeted Real-Time Advertising strives to render advertising in, you guessed it, real time. This means walking into a retail store and seeing customized ads based on your shopping patterns, or passing your local donut shop on your way to work and seeing an advertisement for its donut sale. While some of these scenarios are still a couple of years off, the technology is currently being tested and marketed for the Internet in the United States. Facebook, for example, was testing this model with one percent of its user base as of late 2013. For 366Facebook, this means presenting its users with targeted advertising based on real-time comments and conversations the user is having with other Facebook users. Another company, Zoomph, brings this type of technology to all social media networks, like Twitter and Instagram. If the testing of these programs is successful in 2013, real-time advertising could become a reality in all of our online communications.

2. Government Use

Although the volume of information collected by private organizations is astounding, government access to, and collection of, information is perhaps even greater. Revelations from Edward Snowden, a former National Security Agency (NSA) contractor, show the potential for abuse by governments in collecting vast amounts of data and the means by which they can go about that collection. The impact of Snowden’s disclosures will be felt for years.

The US laws at the center of this issue are Executive Order 12333 (Order 12333), the Foreign Intelligence Surveillance Act (FISA), and the FISA Amendments Act of 2008 (FAA).

•    Order 12333 was implemented in 1981 by then-President Ronald Reagan. The stated goal of Order 12333 was to develop intelligence of foreign nations, persons, and organizations for better national security. The order specifically directs the intelligence agencies of the United States to collect such information vigorously; 367however, it also states that measures should be taken to protect “individual interests.”36

•    FISA, which became effective in 1978, authorizes electronic surveillance for the collection of “foreign intelligence information” between “foreign powers” and “agents of foreign powers.” FISA has been amended several times since the attacks of September 11, 2001, including by the PATRIOT Act. Under FISA, probable cause must be shown that the “target of the surveillance is a foreign power or agent of a foreign power.” An important limitation of FISA is that it does not apply outside the United States, so the gathering of intelligence on foreign soil is not bound by this law.37

•    The FAA allows U.S. intelligence to target persons outside of the United States and monitor their electronic communications. The FAA does not permit targeting of any person known to be located in the United States, nor does it permit the targeting of any U.S. person.38

368

Today, the contents of the documents Snowden released are driving a change in how citizens around the world view their government and its intrusiveness into their private lives. Public rallies have been held demanding an end to government spying. Foreign leaders, including U.S. allies caught up in the spying allegations, are calling for change.39 The Internet community is also voicing its outrage. In documents revealed by Snowden, the NSA appeared to have infiltrated company networks, such as Google and Yahoo, and copied information from millions of users, most of whom were innocent American citizens. Although the NSA responded by stating it did not keep all of the information it copied and only focused on discovering “intelligence about valid foreign intelligence targets,”40 the practice sparked public outrage. In direct response to these allegations, Google and Yahoo both stated publicly that they never authorized free access to their networks and urged that congressional action is necessary to curtail this activity. Given the unified stance of 369private citizens, foreign governments, and private organizations in favor of changing the laws to prevent this type of data collection and mining, some type of reform may be a possibility.

C.    DATA BROKERS

With so much data being transmitted, where is it all kept? Each company you provide information to, whether to enter a sweepstakes or to apply for a loan, uses that information for its intended purpose, but also to update its own records so that it may possibly use that information to target advertising to you in the future. The average consumer understands this. But what happens when a company seeks out this type of information from every type of source you can think of—online social networks, banks, retail stores, and so on—for the sole purpose of creating the most detailed profile of individual consumers that it can? Enter the data broker: a business that collects vast amounts of information, packages it, and then sells that information to other businesses for a profit. Companies in this business accumulate over 1,500 pieces of data per individual, covering over half a billion people worldwide.41 The largest data brokers today have annual revenues in the billions of dollars. With this much money to be made, it is no 370wonder that there are over 250 data brokers today taking their slice of the pie.42

1. Privacy Implications

It is not hard to imagine why companies that house so much information about individuals present unique privacy risks. Most immediately, they are easy targets for criminals. If a sophisticated criminal wanted to gather the most information with one hack, a data broker would be at the top of the list. Unfortunately, this has already occurred and will continue to occur so long as such inviting repositories of information are maintained.43

The broader concern, however, is how these data brokers collect, use, and sell the information. Laws regulating this type of practice are minimal, with the primary source being the Fair Credit Reporting Act (FCRA).44 The FCRA requires that entities collecting consumer information, such as employment history, do so in a manner that ensures accuracy. The FTC, which is responsible for enforcement, has taken an increasing interest in the practices of data brokers. In December 2012, the 371FTC began an investigation into the subject when it issued orders to nine of the largest data brokers to study how these companies collect and use consumer information.45 One of the early results of this study is a proposed “Reclaim Your Name” initiative, which aims to get data brokers to make the information they hold transparent to the consumer. In response to this suggestion by the FTC, Acxiom, one of the largest data brokers in the world, unveiled in 2013 a new website that allows a consumer to see what information Acxiom has collected and how it is being used, provides an opt-out from Acxiom’s marketing services, and even allows corrections to be made through the website.46 In a statement by Acxiom’s CEO, the company acknowledged that this website was both an attempt to empower the individual consumer and an effort to appear friendly to any impending legislation.47 Unless more data brokers take similar steps, however, it seems unlikely that the industry will be able to avoid some degree of formal legislation.

372

D.    GEO TRACKING

“Geo tracking” refers to the ability of a third party to track and pinpoint the location of an individual or thing through global navigation satellite systems. This tracking most often results from purposefully revealed location information. Examples include typing your current location into the navigation system in your car or on your phone to get driving directions, or placing a tracking tag on a product so that shipping companies can monitor the progress of cargo. Location information may also be collected, however, without individuals even knowing that it is being transmitted to a third party. Privacy concerns related to geo tracking center on this latter type: the involuntary and hidden collection and recording of a person’s location and the uses to which that information is put.

1. Technological Overview

The term “global navigation satellite system” refers to any system that relies on a network of orbiting satellites to determine the geographic position of a radio beacon. The first fully deployed global navigation satellite system was the U.S. Global Positioning System (GPS), and several other such systems have become operational or entered development since GPS first became operational in 1994.48 The GPS is currently composed of thirty-one 373satellites that each orbit the Earth twice daily. The orbits of the satellites are structured so that four satellites are above the horizon relative to any point on Earth at any given time. Each GPS satellite has an atomic clock and continually transmits a signal containing the current time at the start of the message and information concerning its location in the orbital pattern. A GPS receiver intercepts the satellites’ signals. The receiver computes the difference between when each signal was transmitted and when it arrived, allowing it to calculate its distance from each of the satellites. Collating this information from all the satellites allows the receiver to determine its latitude, longitude, and altitude to a high degree of accuracy.
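The timing principle described above can be illustrated with a simplified two-dimensional sketch. The beacon positions and receiver location below are invented, the receiver clock is assumed to be perfectly synchronized, and an actual GPS fix solves for four unknowns (three coordinates plus receiver clock error) using at least four satellites; but converting signal travel time into distance and intersecting the resulting circles is the core idea.

```python
import math

C = 299_792_458.0  # speed of light in m/s

# Hypothetical 2-D scenario: three "satellites" at known positions (meters)
# and a receiver whose position we pretend not to know.
beacons = [(0.0, 0.0), (10_000.0, 0.0), (0.0, 10_000.0)]
true_pos = (3_000.0, 4_000.0)

# Each signal carries the beacon's position; the receiver measures travel time.
signals = [(b, math.dist(b, true_pos) / C) for b in beacons]

def locate(signals):
    """Recover position by differencing the three circle equations
    (x - xi)^2 + (y - yi)^2 = ri^2, which yields two linear equations."""
    (x1, y1), t1 = signals[0]
    (x2, y2), t2 = signals[1]
    (x3, y3), t3 = signals[2]
    r1, r2, r3 = (C * t for t in (t1, t2, t3))  # travel time -> distance
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

x, y = locate(signals)  # recovers approximately (3000.0, 4000.0)
```

In three dimensions the circles become spheres and a fourth satellite is needed to eliminate the receiver's clock bias, which is why a GPS fix requires four satellites in view.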

GPS receivers can be very compact. Commercially available GPS processing chips as small as 5 cubic millimeters have been developed, although display screens, user interfaces, and other features that make the GPS receiver’s information usable to human operators inevitably make the devices larger. Once limited to military and high-value commercial use, GPS receivers are now routinely installed in automobiles, cellular phones, and other consumer electronics. The main limitations on the use of GPS receivers are that they are susceptible to having the signal disrupted on a local level by electromagnetic interference or 374physical obstacles, and they must have a power source to be able to process signal data from the GPS satellites.

2. Privacy Implications in the Courtroom

The major privacy concern with GPS receivers is that, when paired with a computer capable of recording GPS position information, they are able to produce extremely accurate records of the movement of a vehicle or individual carrying the device. While the privacy implications of the use of GPS have been litigated in both the criminal and civil context, it was not until recently that the United States Supreme Court changed its view on the use of GPS and its application under the Fourth Amendment.

Prior to 2012, the common thread in these cases was a reliance on the United States Supreme Court’s ruling in United States v. Knotts, 460 U.S. 276 (1983). Knotts predated public access to GPS (although military applications of the system were already being made), and instead concerned the use of a radio tracking device installed on a drum of chemicals as part of a police sting operation. The Supreme Court found that no Fourth Amendment violation resulted from the tracking because a “person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.” 460 U.S. at 281. Subsequent cases concerning GPS tracking applied the reasoning of Knotts to both criminal and civil cases to find that 375the use of GPS did not present actionable privacy concerns.49 Recently, however, the United States Supreme Court addressed the use of GPS tracking directly in United States v. Jones.50 In Jones, the Court considered whether the attachment of a GPS device to a vehicle and monitoring of that device constituted a search or seizure under the Fourth Amendment. Relying on the settled conclusion that a vehicle is an “effect” as that term is used in the Fourth Amendment, the Court found that installation of a GPS device on a target’s vehicle, and subsequent monitoring of that device, constituted a “search.”51 Following the Jones decision, courts have been forced to revisit the use of GPS devices and its application under the Fourth Amendment’s protections against unreasonable searches.52

376
3. Privacy Implications for the Public at Large

Although the issue of geo tracking in the courtroom is important, and will certainly continue to evolve as the technology improves, it is the use of geo tracking on individual consumers by private entities that is raising the most eyebrows. Much of this concern is linked to the rapid expansion of the smartphone market. Smartphone sales have increased steadily since 2007, but 2013 was the first year in which smartphones outsold standard mobile phones.53 This means that more than half of the nearly 500 million mobile phone consumers are now at an increased risk of geo tracking. Smartphone geo tracking can arguably be more precise than GPS because smartphones connect to and disconnect from cell towers and WiFi hotspots everywhere they go. This can yield a very detailed map of where the smartphone holder was at different times of the day, even down to which floor and office of a particular building the holder was in.

There is no doubt that geo tracking has benefits for the average consumer. Millions of smartphone applications are now available that use geo tracking software to deliver an enhanced consumer experience. These experiences range from receiving discounts at a favorite restaurant courtesy of a Foursquare check-in to keeping children safe by monitoring the movement of their smartphones. These benefits are a double-edged sword, however. The same well-intentioned applications may give a sexual predator access to information about a vulnerable child’s location, or facilitate a stalker’s pursuit of a victim. Beyond these most serious concerns, there is the question of what use private companies may make of this information on a day-to-day basis. Insurance companies are considering using geo tracking information to help inform them of an applicant’s medical and financial tendencies. For instance, an insurance company might use frequent online check-ins at fast food restaurants to determine whether an insured’s dietary habits place him or her at a higher risk of future health problems.

The FTC is the government agency principally responsible for ensuring that companies abide by their own privacy policies and do not mislead consumers. The law the FTC most often invokes in these cases is Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.”54 The FTC is making a concerted effort to crack down on geo tracking violators. One example is a series of orders the FTC issued in 2013 to settle charges of computer spying.55 The orders prohibit the subject companies from using “geophysical location tracking without consumer consent and notice.” The FTC has also issued recommendations to companies operating in the mobile space that are intended to better inform consumers. These recommendations include requiring affirmative consent before collecting location information, offering an easy-to-use interface that allows consumers to review what information is being collected, implementing a privacy by design policy, and offering smartphone users a “Do Not Track” mechanism.56

The FTC’s efforts have no doubt caught the attention of companies operating in the geo tracking market. Perhaps fearing a broad legislative scheme specifically directed at mobile privacy, a group of geo tracking companies has banded together in an effort to self-regulate, publishing the Mobile Location Analytics (MLA) Code of Conduct (the “Code”).57 The Code declares seven principles that MLA companies should incorporate into their business practices:

[1]     Notice: “[P]rovide consumers with privacy notices that are clear, short, and standardized ….”


[2]     Limited Collection: “MLA Companies … shall limit the data collected for analysis to information needed to provide analytics services.”

[3]     Choice: “MLA Companies shall provide consumers with the ability to decline to have their mobile devices used to provide retail analytics services.”

[4]     Limitation on Collection and Use: “MLA [d]ata should not be collected or used in an adverse manner ….”

[5]     Onward Transfer: “MLA Companies … shall contractually provide that third party use of MLA [d]ata must be consistent with [these principles].”

[6]     Limited Retention: “MLA Companies shall set [ ] policies for data retention and deletion … [and provide the] data retention policy in their privacy notice.”

[7]     Consumer Education: “MLA Companies shall participate in education efforts to help inform consumers about the use of MLA services.”

III.      FUTURE TRENDS

Over the next ten to twenty years, continuing improvements in computing power, combined with the proliferation of Internet-capable wireless devices, will make many of the technologies discussed above ever cheaper and more widely distributed. In turn, as ever-greater volumes of information are collected, whether by fair or foul means, data mining of that information will become an increasingly powerful tool for both monitoring and manipulating individuals. The following topics provide a few examples of the future technological challenges that privacy will face.

Ubiquitous Observation: The term “ubiquitous observation” has been coined to describe programs that focus on prolonged surveillance of individuals and geographic areas. Ubiquitous observation begins with broad area surveillance combined with “birth-to-death” tracking and identification of critical targets. This ever-present observation is facilitated by both existing technologies and anticipated near-future improvements to them. Many of the technologies required for ubiquitous observation (e.g., wireless microcameras, facial recognition software, RFID-enabled identification cards, and “always on” GPS receivers in cell phones and automobiles) are already available on the consumer market. The key to moving from these many scattered components to an integrated surveillance system will be improvements in networking and data management, so that information from many thousands of separate collection points can be collated and searched within a reasonable time frame. Ubiquitous observation will effectively be Jeremy Bentham’s Panopticon made real, except that instead of consisting of a single building, it will exist as an overlay on entire cityscapes.


No full-scale ubiquitous observation system is yet in use, but the United Kingdom is rapidly progressing in that direction through the deployment of increasingly sophisticated networks of closed-circuit television cameras and listening devices. The U.S. military is also experimenting with such systems in the hope that they may predict danger before it happens. One such system, known as the Cognitive Engine, is an artificial visual intelligence system that can identify anomalous behavior and notify officials when it does. One application could be in crowded transportation hubs, where the system would notify officials if a bag is left unattended for more than a few minutes.

Genetic Testing: Genetic testing is the process of analyzing an individual’s DNA to determine vulnerability to certain diseases and conditions. Genetic testing raises privacy concerns in two principal areas: employment and medical insurance. Congress addressed both of these concerns with the passage of GINA in 2008; however, it remains uncertain what role genetic testing will play in other areas of our lives. As mentioned above, GINA does not extend to other types of insurance, such as life, disability, or long-term care insurance. Furthermore, GINA was meant to protect the individual consumer from discrimination by employers and health insurance providers, but what about consumers who attempt to “game the system” by purchasing insurance coverage on favorable terms when they know they face an elevated risk of a particular condition? If this practice becomes widespread, it will undoubtedly affect insurers’ bottom lines and may lead insurance companies to pursue more favorable federal regulation.

As noted in Chapter 1, it has been said that “[c]ivilization is the progress toward a society of privacy.”58 In these times of rapid technological developments, the accuracy of this statement will likely be tested as never before during the coming decades. If history is any guide, however, the cause of privacy protection will likely advance in certain areas, while retreating in others.

_______________________________________

1 See, e.g., Polzer v. TRW, Inc., 682 N.Y.S.2d 194 (N.Y. App. Div. 1998); Jones v. Alltel Ohio Ltd. P’ship, 06 CV 02332, 2007 WL 1731321 (N.D. Ohio June 14, 2007); Allison v. Aetna, Inc., No. 09-2560, 2010 WL 3719243 (E.D. Pa. Mar. 9, 2010); Reilly v. Ceridian Corp., 664 F.3d 38 (3d Cir. 2011).

2 See, e.g., In re Barnes & Noble PIN Pad Litig., No. 12-cv-8617, 2013 WL 4759588, at *3–6 (N.D. Ill. Sept. 3, 2013) (holding plaintiffs failed to satisfy the elements of Article III standing, and dismissing all five pleaded causes of action: (1) breach of contract, (2) violation of the Illinois Consumer Fraud and Deceptive Business Practices Act (ICFA), (3) invasion of privacy, (4) violation of the California Security Breach Notification Act, and (5) violation of the California Unfair Competition Act) (slip op.).

3 11 DEL. CODE § 854(e).

4 Anderson v. Hannaford Bros. Co., 659 F.3d 151, 167 (1st Cir. 2011).

5 MD. CRIM. L. CODE ANN. § 8–301(g).

6 15 U.S.C. §§ 6801 to 6809.

7 Pub. L. No. 109–476, 120 Stat. 3568 (Jan. 12, 2007) (codified as amended at 18 U.S.C. § 1039 (2007)).

8 See Offers to Buy or Sell Outside of eBay, EBAY, http://pages.ebay.com/help/policies/rfe-spam-non-ebay-sale.html (last visited Oct. 24, 2013).

9 State Statutes Relating to Radio Frequency Identification (RFID) and Privacy, NAT’L CONF. OF STATE LEGS., http://www.ncsl.org/issues-research/telecom/radio-frequency-identification-rfid-privacy-laws.aspx (last updated Sept. 9, 2010).

10 EPCglobal Guidelines on EPC for Consumer Products, EPCGLOBAL, http://www.gs1.org/epcglobal/public_policy/guidelines (last updated June 2013).

11 Members, CE RFID, http://www.rfid-in-action.eu/public/members.html (last visited Oct. 24, 2013).

12 RFID Guidelines, CE RFID, http://www.rfid-in-action.eu/public/rfid-knowledge-platform/copy_of_rfid-guidelines.html (last visited Oct. 24, 2013).

13 European Commission Directives and Decisions, CE RFID, http://www.rfid-in-action.eu/public/rfid-knowledge-platform/standards/application-independet-standards/european-commission-directives-and-decisions.html (last visited Oct. 24, 2013).

14 Passive RFID Tags at the Stage of Placing on the Market and the R&TTE Directive, ENTER. & INDUS., EUROPEAN COMM’N, http://ec.europa.eu/enterprise/sectors/rtte/documents/interpretation_en.htm#h2-36 (last updated Apr. 18, 2013).

15 Elizabeth E. Appel Blue, Redefining Stewardship over Body Parts, 21 J.L & HEALTH 75, 114 (2008).

16 Bouvia v. Superior Court, 225 Cal.Rptr. 297, 302 (Cal. Ct. App. 1986) (quoting Schloendorff v. Society of New York Hospital, 105 N.E. 92, 93 (N.Y. 1914)).

17 See Washington Univ. v. Catalona, 437 F. Supp. 2d 985 (E.D. Mo. 2006); Greenberg v. Miami Children’s Hospital Research Inst., 264 F. Supp. 2d 1064 (S.D. Fla. 2003).

18 Green v. Commissioner, 74 T.C. 1229, 1980 WL 4486 (1980); Hecht v. Superior Court, 20 Cal.Rptr.2d 275 (Cal. Ct. App. 1993).

19 See Sunni Yuen, Comment, An Information Privacy Approach to Regulating the Middlemen in the Lucrative Gametes Market, 29 U. PA. J. INT’L L. 527, 528–29 (2007).

20 Id. at 535–36, 540.

21 See 21 C.F.R. § 1271.1 (2007).

22 Gary E. Marchant, Personalized Medicine and the Law, 44 ARIZ. ATT’Y 12, 14 (Oct. 2007).

23 Dr. Elias Zerhouni, Testimony Before the House Subcommittee on Labor – HHS –Education Appropriations, United States House of Representatives, 109th Cong. (April 6, 2006), available at http://olpa.od.nih.gov/hearings/109/session2/testimonies/overview.asp.

24 Ability to Metabolize Tamoxifen Affects Breast Cancer Outcomes, Mayo Clinic-Led Study Confirms, MAYO CLINIC (Dec. 26, 2012), http://www.mayoclinic.org/news2012-rst/7228.html.

25 Sandhya Pruthi, HER2-Positive Breast Cancer: What is it?, MAYO CLINIC (Apr. 11, 2012), http://www.mayoclinic.com/health/breast-cancer/AN00495.

26 Genetic Testing, NAT’L HUMAN GENOME RESEARCH INST., http://www.genome.gov/10002335 (last updated Jan. 10, 2013).

27 The FDA continues to monitor the growth of the genetic testing industry and has sought advice from the public on how it should regulate laboratory-developed tests (tests developed and administered by the same laboratory) and tests marketed and sold directly to consumers. See Jeffrey Shuren, Dir., Ctr. for Devices and Radiological Health, Direct-to-Consumer Genetic Testing and the Consequences to the Public (July 22, 2010), available at http://www.fda.gov/NewsEvents/Testimony/ucm219925.htm.

28 Michael A. Stoto, Public Health Surveillance in the Twenty-First Century: Achieving Population Health Goals While Protecting Individuals’ Privacy and Confidentiality, 96 GEO. L.J. 703, 717 (2008).

29 About The NAI, NETWORK ADVER. INITIATIVE, http://www.networkadvertising.org/about-nai (last visited May 15, 2013).

30 2008 NAI Principles: The Network Advertising Initiative’s Self-Regulatory Code of Conduct, NETWORK ADVER. INITIATIVE 7–10, http://www.networkadvertising.org/sites/default/files/imce/principles.pdf (last visited May 15, 2013).

31 Jon Leibowitz, Concurring Statement of Commissioner Jon Leibowitz—FTC Staff Report: Self-Regulatory Principles for Online Behavioral Advertising, FTC 1 (February 2009), http://www.ftc.gov/os/2009/02/P085400behavadleibowitz.pdf.

32 TERMS OF SERVICE; DIDN’T READ (last visited Nov. 3, 2013).

33 See In re Google Inc. Gmail Litigation, No. 13-MD-02430-LHK, 2013 WL 5423918, at *14 (N.D. Cal. Sept. 26, 2013) (“Google has cited no case that stands for the proposition that users who send emails impliedly consent to interception[ ] and use of their communications by third parties other than the intended recipient of the email.”).

34 Act of Sept. 27, 2013, no. 370, CAL. BUS. & PROF. CODE § 22575.

35 Craig Timberg & Ellen Nakashima, Amid NSA Spying Revelations, Tech Leaders Call for New Restraints on Agency, WASH. POST (Nov. 1, 2013, 7:34 AM), http://www.washingtonpost.com/world/national-security/amid-nsa-spying-revelations-tech-leaders-call-for-new-restraints-on-agency/2013/10/31/7f280aec-4258-11e3-a751-f032898f2dbc_story.html.

36 Exec. Order No. 12333, 3 C.F.R. 200 (1981).

37 Foreign Intelligence Surveillance Act of 1978, 50 U.S.C. § 1801 et seq.

38 FISA Amendment Act of 2008, Pub. L. No. 110-261 (codified as amended in 50 U.S.C.).

39 One such foreign leader caught in the NSA’s web of spying activity was German Chancellor Angela Merkel, whose personal mobile phone was alleged to have been monitored. See, e.g., Mark Mazzetti & David E. Sanger, Tap on Merkel Provides Peek at Vast Spy Net, N.Y. TIMES, Oct. 31, 2013, at A1, available at http://www.nytimes.com/2013/10/31/world/europe/tap-on-merkel-provides-peek-at-vast-spy-net.html.

40 NSA Statement on Washington Post Report on Infiltration of Google, Yahoo Data Center Links, WASH. POST (Oct. 30, 2013), http://www.washingtonpost.com/world/national-security/nsa-statement-on-washington-post-report-on-infiltration-of-google-yahoo-data-center-links/2013/10/30/5c135254-41b4-11e3-a624-41d661b0bb78_story.html.

41 A company in the data broker industry may use as many as ten to twenty elements (of the 1,500 pieces of data) to describe an individual’s age.

42 Online Data Vendors: How Consumers Can Opt Out of Directory Services and Other Information Brokers, PRIVACY RIGHTS CLEARINGHOUSE, https://www.privacyrights.org/online-information-brokers-list (last updated Mar. 2013).

43 See, e.g., Brian Krebs, Data Broker Giants Hacked by ID Theft Service, KREBS ON SECURITY (Sept. 25, 2013, 12:02 AM), http://krebsonsecurity.com/2013/09/data-broker-giants-hacked-by-id-theft-service/.

44 Fair Credit Reporting Act, 15 U.S.C. § 1681 et seq. (2012).

45 FTC to Study Data Broker Industry’s Collection and Use of Consumer Data, FTC (Dec. 12, 2012), http://www.ftc.gov/opa/2012/12/databrokers.shtm.

46 ABOUTTHEDATA.COM, https://aboutthedata.com/ (last visited Nov. 8, 2013).

47 Natasha Singer, A Data Broker Offers a Peek Behind the Curtain, N.Y. TIMES, Sept. 1, 2013, at BU1, available at http://www.nytimes.com/2013/09/01/business/a-data-broker-offers-a-peek-behind-the-curtain.html?_r=0.

48 These other systems include the Russian GLONASS (became fully operational in 2011); the E.U. Galileo system (scheduled to become fully operational in 2019); the Chinese COMPASS (scheduled to become fully operational in 2020); and the Indian IRNSS (scheduled to become operational in the Indian subcontinent by 2014).

49 See, e.g., United States v. Garcia, 474 F.3d 994, 996–98 (7th Cir. 2007) (finding no Fourth Amendment violation where GPS receiver was placed on vehicle by law enforcement agents); Elgin v. St. Louis Coca-Cola Bottling Co., Case No. 4:05CV970–DJS, 2005 WL 3050633, *3 (E.D. Mo. Nov. 14, 2005) (rejecting claim for tort of intrusion based on placement of GPS receiver on vehicle by employer).

50 132 S.Ct. 945 (2012).

51 Id. at 949.

52 See, e.g., United States v. Sparks, 711 F.3d 58 (1st Cir. 2013) (finding that settled, binding circuit precedent that use of a tracking device on a target’s vehicle did not implicate Fourth Amendment protections was abrogated by Supreme Court’s decision in Jones); United States v. Pineda-Moreno, 688 F.3d 1087 (9th Cir. 2012) (holding that installation of GPS tracking device was not subject to exclusionary rule due to reasonable reliance on existing binding precedent that stated use of GPS device to track movement did not constitute a search under the Fourth Amendment).

53 Eric Zeman, Smartphone Sales Beat Feature Phones, INFORMATIONWEEK (Aug. 14, 2013, 10:50 AM), http://www.informationweek.com/mobility/smart-phones/smartphone-sales-beat-feature-phones/240159950.

54 Federal Trade Commission Act § 5, 15 U.S.C. § 45 (2012).

55 FTC Approves Final Order Settling Charges Against Software and Rent-to-Own Companies Accused of Computer Spying, FTC (Apr. 15, 2013), http://www.ftc.gov/opa/2013/04/designerware.shtm.

56 FTC Staff Report Recommends Ways to Improve Mobile Privacy Disclosures, FTC (Feb. 1, 2013), http://www.ftc.gov/opa/2013/02/mobileprivacy.shtm. The full FTC report is available at http://www.ftc.gov/os/2013/02/130201mobileprivacyreport.pdf. Privacy by Design (PbD) is the idea that privacy should be built into products at every stage of development.

57 Mobile Location Analytics Code of Conduct, FUTURE OF PRIVACY FORUM (Oct. 22, 2013), http://www.futureofprivacy.org/wp-content/uploads/10.22.13-FINAL-MLA-Code.pdf.

58 AYN RAND, The Soul of an Individualist, in FOR THE NEW INTELLECTUAL 84 (1961).