4

Is Obfuscation Justified?

Be fire with fire; Threaten the threatener and outface the brow.

Shakespeare, King John, 1595

After a lecture on TrackMeNot,1 a member of the audience rose to say that she was deeply troubled by the valorization of deceit and dishonesty. To her it didn’t seem right to submit search queries that were not of true interest. The question of deception has not been the sole source of opposition to obfuscation; other sources of opposition include wastefulness, free riding, database pollution, and violation of terms of service.

Challenges such as that made by the woman at the lecture were worrisome to us: ours was supposed to be the moral high ground, with TrackMeNot defending individuals against illegitimate and exploitative information practices. But such challenges could not be summarily brushed aside. Because obfuscating tactics are often fundamentally adversarial, involving dissimulation and misdirection, the appropriation of resources for unintended or undesired uses must be explained and justified. In an article titled “A Tack in the Shoe,” Gary Marx writes: “Criteria are needed which would permit us to speak of ‘good’ and ‘bad,’ or appropriate and inappropriate efforts to neutralize the collection of personal data.”2 To use obfuscation because it works, or even because it is the only approach that works, isn’t enough. Obfuscation, if used, must be defensible on ethical grounds, and must be compatible with the political values of the society in which one lives.

TrackMeNot exposed many of the ethical issues that can confront not only developers of obfuscating systems but also users, and as a consequence exposed a need to distinguish uses that are morally defensible from uses that are not. Intuition places the Craigslist robber, with his unwilling identically dressed confederates, among the latter, and the Allies’ radar chaff among the former, but why? What makes them different? And how might we adapt the answer to more ambiguous cases? Mere approval or disapproval isn’t sufficient if we are to defend the legitimacy of a particular system; instead, we must provide systematic reasons why that system avoids moral and political hazards.

This chapter prepares designers or users of obfuscation to meet a range of challenges they are likely to confront. Some of the challenges are ethical, claiming that obfuscation causes harm or, beyond general harms, violates ethical rights. Other challenges are political, suggesting that obfuscation abridges political rights and values, that it is unfair or unjust, that it redistributes power in illegitimate ways, or that it is generally at odds with the political values of surrounding societies or communities.

4.1 Ethics of obfuscation

Dishonesty

It is nearly impossible to avoid charges of dishonesty when the aim of obfuscation is to mislead and misdirect. Linking obfuscation to the ethics of lying leads to a vast landscape of philosophical thought that, though beyond the scope of our book, contributes important insights to our more limited purpose.

The classic Kantian position on lying, which holds that it is absolutely wrong and which famously prescribes truth even in reply to a murderer seeking to locate an innocent victim, would condemn any use of obfuscation. Other defenses of lying have been based on more varied and more contingent ethical positions. Generally, the literature on lying has two strands, one concerned with defining lying and the other with its ethics—whether it is always wrong, whether it is ever right, and whether, even if wrong, it ever can be excused. In practice these two strands are interdependent, because a hard line on the wrongness of lying is softened by a narrow definition. Thomas Aquinas, for example, allowed prudent dissimulation to pass the ethical test not because lying is sometimes morally acceptable but because dissimulation sometimes falls outside of the definition.3 Our guess is that few people are as resolutely committed to truth-telling as Kant and Aquinas, and that most would condone lying with appropriate justification, such as preventing egregious harm, acting under duress, keeping a promise, or achieving other important ends.4

In many of the cases we have discussed in this book, obfuscation presents a means of resisting coercion, exploitation, or threat—ends that might generally legitimize acts of dishonesty. We might say, therefore, that whether obfuscation, like lying, is morally defensible depends on the legitimacy of its ends: radar chaff protecting Allied bombers passes the test, but disseminating malware, robbing a bank, or fixing an election does not, even though we might admire or chuckle at the ingenuity of those who do such things. We do not want to overstate the conclusion and say that legitimate ends alone justify obfuscation, insofar as it is a form of dishonesty; we want to say only that legitimate ends are a necessary condition for ethical obfuscation.

Even when someone chooses obfuscation to achieve praiseworthy ends, he or she will need to defend this choice against further challenges. After we have explored some of the other ethical charges aimed against obfuscation, we will return to the question of sufficiency in order to explain what still is missing from an ethical assessment beyond laudable or even simply acceptable ends.

Waste

Critics may say that an obfuscation system is wasteful if it draws on any important resources to generate noise. In the case of TrackMeNot, for example, some complained about its wasteful use of search engines’ servers, its burden on network bandwidth and even its unnecessary draw on electricity. Similarly, CacheCloak5 could be faulted for wasting network and mobile-app resources, many noise-generating social-network tools for drawing excessively on Facebook’s services, and Uber for squandering the effort of drivers responding to spurious calls. In defense of one’s preferred obfuscation system, one should immediately recognize a hidden agenda in any such accusations, for the notion of waste is thoroughly normative. It presumes standards of acceptable, desirable, or legitimate use, consumption, exploitation, or employment of the resources in question. Only a strong societal consensus around these standards elevates such charges above mere personal opinion, and only a sound foundation in factual knowledge lends credibility to the suggestion that any particular obfuscation system wastes resources.

When standards are not settled, however, there is greater uncertainty over the line between use and waste. We might all agree that carelessly leaving a tap running is a waste of water, but residents of Los Angeles disagree with residents of Seattle over whether daily watering to maintain verdant lawns in a desert climate is wasteful. To defend TrackMeNot against charges of wastefulness, we can point out that its network usage is minimal compared with usage generated by image, audio, and video files, rich information flows on social networks, and Internet-based communications services. Yet noting huge differences in scale between the traffic generated by TrackMeNot search terms and those needed to maintain (say) Bitcoin or World of Warcraft doesn’t address the complaint fully. After all, the cumulative flow of a dripping faucet may be far less than the amount of water a daily shower requires, but the former may still be judged wasteful because it is unnecessary.
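
To make the difference in scale concrete, a rough back-of-the-envelope comparison can help; the figures in the sketch below are illustrative assumptions of our own, not measurements of TrackMeNot or of any particular network.

# Illustrative, assumed figures only -- not measurements of TrackMeNot,
# Google, or any particular network; they serve solely to show scale.
QUERY_BYTES = 10 * 1024            # assume roughly 10 KB per query round trip
QUERIES_PER_HOUR = 10              # assume a modest obfuscation query rate
VIDEO_BITRATE_BPS = 5_000_000      # assume roughly 5 Mbit/s streaming video

trackmenot_bytes_per_hour = QUERY_BYTES * QUERIES_PER_HOUR
video_bytes_per_hour = VIDEO_BITRATE_BPS / 8 * 3600

print(f"Obfuscating queries (assumed): {trackmenot_bytes_per_hour / 1024:.0f} KB/hour")
print(f"Streaming video (assumed):     {video_bytes_per_hour / 1024**2:.0f} MB/hour")
# Under these assumptions, an hour of video moves roughly four orders of
# magnitude more data than an hour of obfuscating queries -- which speaks to
# scale, though not, by itself, to the charge that the queries are unnecessary.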

Whether one considers the noise produced by systems such as CacheCloak and TrackMeNot wasteful depends not only on the volume of the noise but also on one’s values. A defender points out that protecting privacy by preventing profiling on the basis of search queries is worth the bandwidth—certainly more worthwhile than a good number of the videos clogging bandwidth en route from servers to households. Some critics remain doubtful, though their doubts are less about wasteful usage of common resources than about waste of private ones, such as the server space belonging to providers of search engines and mobile apps. Here too, both quantity and legitimacy matter. In cases where noise overloads an adversary’s system or, in more extreme cases, even consumes all available resources, it becomes a denial-of-service attack and the bar of justification is very high. Unless you can convincingly demonstrate that your target is engaged in oppressive, domineering, or clearly unfair practices, a debilitating obfuscation attack is difficult to justify.

In the case where an obfuscating system merely uses but does not debilitate a privately owned resource, what counts as legitimate may not be obvious. Take the case of Web searching. Manually submitted queries, no matter how frivolous the purpose, seem not to provoke complaints of waste. No one argues that “ninja turtle” or “fantasy football” is more wasteful of Google’s server resources than, say, “symptoms of Ebola,” although some critics have said that the automated search queries submitted by TrackMeNot are wasteful. We can think of no other reason for such criticism than that TrackMeNot’s queries run counter to Google’s interests, desires, or preferences and that these, according to critics, trump users’ interests, desires, or preferences for privacy-seeking obfuscation. Such is the rhetorical struggle between those who defend obfuscation as a means of protecting its users against illegitimate information capture, and those who are targets of obfuscation who label such actions wasteful. The winner of this debate captures the ethical high ground and transforms a private dispute over conflicting vested interests into a matter of public morality. But it is important to see, in this instance, that when defenders of search resources vilify obfuscation as “waste,” they beg the very question that we, collectively, have not yet properly addressed. In the name of privacy protection, query obfuscation utilizes private resources without owners’ authorization, but whether we deem this wasteful or legitimate, prohibited or allowed, is a political question about the exercise of power and privilege and a question to which we will return later in the chapter.

Free riding

Depending on the design of one’s preferred obfuscation system, one may be accused of free riding—that is, taking advantage of other people’s willingness to submit to the collection, aggregation, and analysis of data, or using services provided by data collectors while denying them profit from one’s personal information. In the first instance, the adversary will go after the less costly target—people who don’t obfuscate—just as predators, according to the adage, go after the slower prey. In the second instance, if you use services offered by targets such as Facebook and Foursquare in ways that diverge from the terms of service, you are violating an implied contract and are free riding not only on people whose behaviors comply with the terms of service but also on investments made by the providers of the services. This applies, for instance, to users of ad-blocking browser plug-ins, who can enjoy a quieter, faster-loading, ad-free Web experience while having access to content underwritten by users who haven’t installed ad blockers. Or so the critics suggest. Cast as free riders, obfuscators appear to be sneaks more than rebels; after all, when you aspire to the moral high ground, do you want instead to be someone who games the system by exploiting the ignorance and foolishness of others? These charges must be taken seriously, but in our view whether they stick depends on answers to two questions: Is your obfuscation system (either one you have created or one you are using) freely available to others? And are people who aren’t obfuscating left no worse off as a result of your use of that system? When the answers to both of those questions are Yes, as holds for many of the systems we have discussed, we see no exploitation, no moral wrong. When the answer to either question is No, the situation is complex and requires further probing. Secretive obfuscation may be excusable if it leaves non-obfuscators no worse off; obfuscation that disadvantages non-obfuscators may be justified if it is widely and freely available to all. Though further justification is needed in both scenarios, the case that poses the most difficult questions is closed, secretive obfuscation that results in disadvantage to non-obfuscators.

These difficult questions plunge us into philosophical debates about moral responsibility. Even in the worst case, you might redirect blame to the targets of your obfuscating system: the data gatherers. You may ask “Who is taking advantage of whom?” Returning to the metaphor of predator and prey, you can argue “Don’t blame me for being fleet footed; it is the predator, after all, who is responsible for the demise of its victims.” Though you expose your slower compatriots to higher odds of capture, surely blame accrues primarily to the predator. This leaves a stalemate of mutual recrimination, the data collector accusing the obfuscator of free riding on services and the obfuscator accusing the data collector of free riding on personal information.

In the dominant economy of the Internet, individual users enjoy free services, which are sustained by the value extracted from information about those users by ad networks and by other third-party data aggregators. Unlike traditional commercial market-based exchanges, where a price is explicitly paid for goods or services, the economy into which the Internet has settled is based on the capture of information by indirect, subtle, and often well-hidden means. The informational price—effectively a blank check—is anything but free, according to experts whose commentaries have inspired our own thoughts on this matter.6 When relinquishment of personal information with no reasonable account of its use is a necessary condition for receiving a service, when it is disproportionate to need (as in over-collection), and when it is inappropriate (as when it violates contextual expectations), such a price is exploitative and the practice is oppressive. Furthermore, when traditional institutional protections aren’t effective in addressing practices such as these, the obfuscator who has been accused of free riding may justly challenge the presumptive entitlements of the entrenched system, in which naive users succumb to rhetorical trickery that engages them in terms of exchange they have had little hand in setting.7 Each party has an interest in setting terms for the exchange of valuable resources, but which interests are favored must be fairly settled or, says the obfuscator, this is a claim that doesn’t warrant respect.

This argument doesn’t make all information obfuscation legitimate and defensible against the charge of free riding; it does so only when other moral requirements are met and the question of free riding hinges on who is entitled to surplus value generated by the interactions of individual users with service providers collecting information on them. In other words, after you have satisfied yourself that your system meets other ethical criteria, such as worthy ends, questions that remain about conflicting interests and desires or about fair distribution of benefits and entitlements enter the realms of economic and political analysis, taken up below.

Pollution, subversion, and system damage

The charge of data pollution is as vexing as it is unavoidable. Obfuscation, defined as the insertion of noise, invites a parallel to pollution—making something impure or unclean. Someone who taints water, soil, or air with toxic chemicals, particulates, or waste can be roundly criticized because environmental integrity is highly valued not only as an ideal but also as a practical goal. However, critics drawing on the normative clout of environmental pollution aren’t coolly observing that obfuscation clutters a data repository; they are alleging that it contaminates a data environment whose integrity is prized. There is, however, a difference. In most present-day societies, the value of the natural environment is presumed and an action that has been shown to pollute it is considered reprehensible. But unless one can make an explicit case that a data assemblage is worthy of protection, a claim for its integrity begs the question.

Even environmental integrity isn’t absolutely valued and has been traded off against other values, such as security, commerce, and property rights. Analogously, in order for a charge of data pollution to stick, a data assemblage must be shown to hold greater value than whatever the obfuscator aims to protect. Simply revealing negative consequences for a database is, once again, to beg the ethical question. It comes down to this: Data pollution is unethical only when the integrity of the data flow or data set in question is ethically required. Moreover, whether the integrity of the data outweighs other values and interests at stake must be explicitly settled. When what is in question is whether the interests of a data collector are negatively affected by obfuscation, ethical questions can be settled only by establishing that these interests are of general value and that they override the interests of the obfuscator. When there are no clear moral grounds favoring the respective, conflicting interests (or preferences) of a data collector and an obfuscator, a political resolution, or perhaps a market-based resolution, may be the best one can hope for.

If there is genuine public interest in the integrity of particular data flows or data sets, and if obfuscation negatively affects the system as a whole, the burden shifts to the obfuscator to justify his or her actions. For example, one may justly challenge the obfuscator who diminishes the integrity of a population health database when so doing reduces the potential public benefits it can provide. But even in a case such as this, we should assess whether the price an individual pays for the benefit of others or in the public interest is fair. If individuals are coerced to contribute, it should be with assurances that how the information will be used, where it will travel, and how it will be secured will, at the very least, be in line with familiar principles of fair information practice. In other words, the ethical argument hinges on two considerations: whether the data in question are of genuine public and common interest and how much individuals are asked to sacrifice on behalf of such interests. Keeping both of these considerations in sight recognizes that the integrity of a data assemblage—even one deemed valuable—is not absolute, and data controllers have the burden of defending the public importance of the assemblage (and associated practices) as well as the legitimacy of any burdens it might impose on individual data subjects.

In the discussion thus far, we have not differentiated among the three terms “pollution,” “subversion,” and “system damage.” You might want to consider which of the three is relevant when striving to ensure an ethically defensible system. Obfuscating systems that pollute or subvert only the obfuscators’ data trail pose fewer ethical challenges than those that also affect other data subjects, and even fewer than those that interfere with a system’s general functioning, as in a denial of service. A careful assessment would involve asking questions similar to those we have discussed above—questions concerning respective harms, entitlements, societal welfare, and proportionality—about data collection as well as about data obfuscation in relation to legitimate ends.

4.2 From ethics to politics

Ends and means

Since obfuscation almost always involves dissemblance, unauthorized uses of system resources, or impairment of functionality, appreciating obfuscation’s intended ends, aims, purposes, or goals is crucial to evaluating its moral standing. Although some ends might seem unequivocally good and others unequivocally bad, a vast middle ground exists that encompasses merely unproblematic ends (e.g., foiling supermarket surveillance) and ends that are somewhat controversial (e.g., enabling peer-to-peer file sharing). In these zones of ethical ambiguity or flexibility, politics and policy come into play.

Ends, however, are only part of the picture—necessary but not sufficient conditions. Ethical theory and common sense demand that means, too, be defensible, and, as the saying warns, ends may not justify all means. Whether means are acceptable may rest on numerous ethical factors but, as often, may depend on the interaction of ends with various contingent and contextual factors, whose consideration resides in the zone of the political.

Recognizing that certain disputes over ethical issues are best resolved politically doesn’t necessarily remove them from ethical consideration entirely when one takes a view, such as Isaiah Berlin’s, of political philosophy as moral inquiry, “applied to groups and nations, and indeed, mankind as a whole.”8 In some instances, disagreements over the ethics of obfuscation that reduce to disagreements over clashing ends and values may yet be amenable to purely ethical resolutions, such as the resolution Kant seems to have found when he prioritized truth over preventing murder. But disagreements over ends may not always be accessible to purely ethical reasoning. In these cases, resolution becomes a matter for social policy because how these disagreements are settled affects the constitution or shape of the society in which they are embedded. Ethical questions such as those requiring societal resolution have inspired political philosophers through the ages—from Plato to Hobbes and Rousseau to the present—who have sought to compare and evaluate political systems, to identify political properties and modes of decision making that characterize good societies, and to articulate political principles of justice, fairness, and decency. When we conclude that ethical questions must be answered politically, because they are about the distribution of power, authority, and goods in society, we still have ethics on our minds. We do not mean any society; we mean societies opposed to tyranny and striving to be good, just, and decent in the ways that great philosophers, critical thinkers, and political leaders have idealized in word and action. With this in mind, let us revisit the issues of dishonesty (dissimulation), waste, free riding, pollution, and system damage arising in the context of obfuscation.

As we worked through the issue of waste, we imagined clashes of opponents parrying back and forth, one accusing the other of wasteful activity and the other insisting that the activity in question constituted a legitimate use. This was the case when critics accused TrackMeNot users of wasting bandwidth with searches that were of no genuine interest and TrackMeNot users responded that they weren’t wasting bandwidth but rather were using it to promote legitimate privacy claims. Similarly, one who is accused of polluting a dataset or impairing a system’s data-mining capacity counters that the purpose of the dataset or data mining is not one that warrants societal protection, or at least not one that should trump the obfuscator’s evasion of surveillance.

Generally, asserting that data obfuscation impairs and damages a database or compromises a system, or that it overuses or wastes a common resource, doesn’t entitle one to call the obfuscation unethical unless one can clearly explain how the data store or system in question furthers societal goals more important than contrary goals the obfuscator seeks to promote. Rarely are these conflicting ends explicitly or systematically addressed in ways that call on data collectors to justify the value of their activity. To understand the criterion of ends, you would ask about the purposes or values served by data collection—database or information flow—and the same for the obfuscating activities. Further, you would ask how these ends feature within broader political commitments of the collective—society, nation, etc. Thus far, we seem to give great leeway to the Transportation Security Administration’s pursuit and assembly of personal information profiles insofar as its purposes are to provide security for travelers. Accordingly, we might be less tolerant of individuals who obfuscate in this context even for the purposes of protecting privacy, the point being that ends should make a difference in our reactions both to the ethics of data collection and to obfuscation.

But means matter, too. Even good ends may not justify all means. In law and policy, we are often asked to consider proportionality—for example, demanding that the punishment should fit the crime. Although an obfuscator must be challenged to justify means that are disruptive, even damaging, surely it is fair also to challenge the target. You may decide to install TrackMeNot not because you object to the basic practice of logging search queries but because you object to unacceptable extremes such as holding data with too much detail, for too long, without appropriate limits on use. Keeping data in order to improve search functions, even to match contextual ads to queries, may seem acceptable, but isn’t it grossly disproportionate to a search engine’s core function to hold data indefinitely in order to refine behavioral advertising and to match search histories with other online activity so as to profile people too personally, too precisely, too intimately? Such questions are relevant to all the extreme forms of information surveillance, with online surveillance a particular case in which ubiquitous tracking of online behavior seems wildly disproportionate as a means, insofar as it serves only the parochial ends of commercial advertising, even if this tracking slightly improves the efficacy of the ads. But the obfuscator, too, must answer the challenges of proportionality, and in quite concrete terms. Thus, we may agree that the ends of TrackMeNot are legitimate, but still want to regulate the volume of noise—say, to foil profiling but not to disable a search engine entirely with denial-of-service attacks. Drawing an exact line between proportional and disproportional is never easy, but the intuition that there is a line, even if it must be drawn case by case, is robust and deep.

Proportionality suggests normative standards for particular pairs of means and ends and pairs of actions and reactions, but means may also be measured by comparative standards, such as whether their cost is lower than that of alternatives. Utilitarian thinking is a case in point, demanding not only that the happiness yielded by actions or social policies under consideration should be greater than the unhappiness, or that the benefits should exceed the costs, but also that the actions or policies should yield the optimal proportion among available alternatives. Where obfuscation involves pulling the wool over someone’s eyes, spoiling a dataset, or impairing the functioning of a system, even to achieve laudable ends, the ethical obfuscator still should investigate whether other means are as readily available with lesser moral costs. We can ask whether the costs associated with different forms of obfuscation vary significantly, but we also can ask whether other means might achieve the same goals without the costs we have been considering thus far.

The question of whether less disruptive but equally or more effective alternatives to obfuscation can be found is worth asking—although in chapter 3, where we reviewed some of the standard approaches to resisting troubling data-surveillance practices, we found little cause for optimism. Opting out, suggested by critics who say “If you don’t like this practice, you can always choose not to engage,” may be feasible when it comes to nifty mobile apps, digital games, and various forms of social media, but inconvenient and expensive when it comes to online shopping, E-ZPass, and frequent-flyer programs; and forgoing many vectors of surveillance (mobile phones, credit cards, insurance, motor vehicles, public transportation) is now nearly infeasible for many people.

Other alternatives, including corporate best practices and legal regulation, though promising in theory, are limited in practice. For structural reasons having to do with radically misaligned interests and the proverbial folly of leaving the fox to guard the henhouse, meaningful limits on data practice aren’t likely to be set by corporate actors. Further, a history of unsuccessful attempts to have various industries regulate their respective data practices leaves little hope for meaningful reform. Although governmental legislation has also been variably effective,9 its effects haven’t reached the commercial sector, particularly when it comes to regulating online and mobile tracking. Despite dogged efforts and the intense commitment of the Federal Trade Commission, the Department of Commerce’s National Telecommunications and Information Administration, and other government agencies, general progress has been minimal. For example, notice and consent expressed in privacy policies remain the dominant mechanisms for protecting privacy online, despite decisive evidence that they are incomprehensible to data subjects, are expressed ambiguously, are continuously revised, and have not constrained the degree and scope of data collection and use in practice. Further, by most accounts, concerted efforts to establish a Do-Not-Track standard for Web browsing were sabotaged by the advertising industry,10 and the Snowden revelations11 have shown that the U.S. government and other governments have long been conducting mass surveillance. Individuals have good reason to question whether their privacy interests in appropriate gathering and use of information will be secured any time soon by conventional means.

Justice and fairness

So far, we have shown that when obfuscators and their critics disagree over the ethics of obfuscation, their disagreements sometimes boil down to clashes over ends and values. The critic accuses the obfuscator of violating legitimate ends; the obfuscator accuses the target of precisely the same. Clashes such as these would benefit from public airing and deliberation in the political arena, something we strongly support. But in our discussion of ethics of obfuscation, we also identified clashes that concerned conflicting interests and preferences more than competing ends and values. A clear instance of this emerged in our discussion of free riding. Charged with unseemly behavior, obfuscators may point to the terms of interaction unilaterally set by data collectors, which enable the seizure by these data collectors of surplus value generated during the course of the interaction. In relation to peers, complaints of free riding have opened tricky questions, such as whether blame is more appropriately assigned to an obfuscator who may have exposed peers to even greater scrutiny or disadvantage or to the agents of that scrutiny or disadvantage.

A purely ethical resolution of such claims and counterclaims might not be possible when, taken in isolation, they amount to favoring either the obfuscator’s interests and preferences or those of the obfuscator’s target. Within a broader societal context, however, disputes over whose preferences and interests are given greatest credence are deeply political. They recognize certain entitlements over others, and in so doing they often bring about systematic allocation or reconfiguration of power, authority, and goods as well as of burdens and subjection. These are among the questions of justice and fairness that, for centuries, have troubled political philosophers when resolving clashes over what values trump other values and whose rights count more than the rights of others. Beyond rights and values, however, societies have sought principles to govern the distribution of a wide range of goods, to ameliorate deeply unfair, unjust, and indecent outcomes, rather than leaving it to brute competition among actors (individuals, institutions, and organizations), or to the fiat of incumbency as the strong incumbents would prefer.

To guide our reasoning about just and fair distribution of goods (power, wealth, authority, etc.), we have dipped into recent writings in political philosophy. We beg our readers’ forbearance as we sample from a vast disciplinary tradition for insights that will help us address the standoff we have identified between target and obfuscator in all its particularities. It might seem unnecessary to drill down to first principles when technologically advanced, liberal, and progressive democracies would already presumably have integrated such principles into their laws and regulations. This would mean that we would need only to refer to existing law and regulation for answers to political questions concerning privacy and obfuscation. It is, however, precisely because existing laws and policies have not, or not yet, adequately confronted overwhelming gaps in privacy protection that the need exists to refer to fundamental principles for better answers.

Returning to situations in which obfuscators’ resistance confounds a target’s will or interests, we ask how these considerations of justice might guide our assessment. John Rawls’ first principle of justice, set out in A Theory of Justice,12 demands as a basic requirement that the obfuscation practices in question not violate or erode basic rights and liberties. This requirement calls into question obfuscating systems relying on deception, system subversion, and exploitation that have the potential to violate rights of property, security, and autonomy. This principle establishes a presumption against such systems unless strong countervailing claims of equal or greater weight can clearly be demonstrated, including autonomy, fair treatment, freedom of speech, and freedom of political association—generally freedoms associated with a right to privacy. The first principle makes short work of obfuscation as used by criminals to mask their attacks and confuse their trails.

For nuanced cases in which neither adversary holds a clear ethical advantage in their competing claims, Rawls’ second principle, that of maximin, is relevant. This principle demands that a just society should favor “the alternative the worst outcome of which is superior to the worst outcomes of the others.”13 In practical terms this means that, when weighing policy options, a just society should not necessarily look to equalize the standing of different individuals or groups but, where equalizing is not possible or does not make sense, should focus on the plight of those at the lower end of the socioeconomic spectrum, ensuring that whatever policy is chosen maximizes outcomes for these stakeholders. A just society’s policies, in other words, should maximize the minimum.
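
Stated schematically, in notation of our own rather than Rawls’, the maximin rule selects, from a set of alternatives A whose outcomes depend on which state in S obtains, the alternative whose worst outcome is best:

\[ a^{\ast} \;=\; \arg\max_{a \in A} \; \min_{s \in S} \; u(a, s), \]

where u(a, s) stands for the position of the worst-off group under alternative a should state s obtain. The formula adds nothing to Rawls’ formulation; it is only a compressed restatement of “maximize the minimum.”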

Returning to earlier cases, let us now consider the debate over wasted resources—not common resources, which we have already addressed, but privately owned resources, as when obfuscation purportedly wastes Facebook’s resources with misleading profiles. Here service providers and owners of resources declare that, because proprietary rights allow them to set terms of use at will and to their advantage, unauthorized actions, by definition, make unethical or wasteful use of their services or resources. Obfuscators, by contrast, claim that they are weakened, exploited, made vulnerable, and compromised, and that they are merely acting to rectify an imbalance of control, power, and advantage and to reduce risk and ambiguity. As was noted earlier, how we evaluate the competing claims affects whether we deem obfuscating activity, such as TrackMeNot’s generating of fake queries, wasteful or legitimate, prohibited or allowed. Where no obvious ethical issue is at stake, these political choices about the exercise of power and privilege are subject to the maximin principle of justice. How this plays out will depend on details of specific instances—for example, concrete differences in the properties of TrackMeNot, Vula, and Russian nationalist Twitterbots, as well as the contexts in which they operate.

In relation to free riding, Rawls’ second principle forces a question about whether the data services whose terms enable them to capture surplus value from personal information are entitled to that surplus value. It allows us to see that the entitlements of profit and control that these firms have unilaterally asserted through their terms of service are, in fact, open to redistribution through the adoption of different social policies. Obfuscators aren’t free riding if the disadvantage of a particular engagement is excessive and unfair, and if the only claims they may be violating are those asserted by service providers under a regime that doesn’t fully recognize its implications for information flows newly enabled by sociotechnical systems. A similar point applies to pollution. Although there are some who presume in favor of data collectors merely on the grounds that they have collected and assembled data and hence are entitled to its integrity, we believe that no charge of pollution will stick unless societal worth can be demonstrated. If that can’t be done, an argument is needed to support the claim that any value should accrue only or mainly to the data collectors; it can’t simply be presumed. Though it is true that individuals using obfuscation to take cover may diminish the purity of a data pool, impose costs on data gatherers, or deny data gatherers the benefits of surplus generated through collection, aggregation, and analysis of data, a full picture considers the value of the data and the legitimacy of data gatherers’ claims. When there are charges of free riding or when there are charges of pollution, private claims of data owners and counterclaims of obfuscators are viewed as conflicts of preferences or interests. In our view, seeking resolution by pointing to property rights begs the question of the extent of these rights in the fluid environment of technology and data. This issue remains open to political negotiation and adjustment. General prosperity and societal welfare should be considered, ideally in light of Rawls’ second principle.

Assignment of blame and moral responsibility may also be assessed politically. When considering liability for free riding and data pollution, we have argued that, although the obfuscator is a causal agent in both those cases, moral responsibility may nevertheless reasonably accrue to the target of obfuscation unless the target’s activities and business or data practices are beyond reproach. Considerations of justice apply as much to fair distribution of costs as they do to fair distribution of benefits.14

In the various theories of justice offered by political philosophers, including Rawls, there is a fairly uniform idea of those on the bottom end of the socioeconomic spectrum toward whom great concern is directed. In highlighting various ways in which the maximin principle is relevant to the political standing of obfuscation, we have presumed that traditional or standard views of what it means to be better off or worse off—powerful or weak, rich or poor, well or poorly educated, healthy or sick—remain relevant. To those dimensions of inequality, our theme of informational asymmetries of power and of knowledge adds two dimensions of difference between haves and have-nots, crucial to the maximin principle.15

Informational justice and the asymmetries of power and knowledge

Circumstances surrounding the obfuscating systems we introduced in part I of this book are typically characterized by both asymmetries of power and asymmetries of knowledge. The power differential between individuals and the corporate and governmental institutions and organizations that place them under surveillance, capture information about their activities, and subsequently assemble it and mine it is clear. The judging, preying eye of unspecified, digital publics16 also may train its disciplining gaze on individuals. Although, as we demonstrated in part I, obfuscation can be and has been used by the more powerful against the less powerful, the more powerful usually have more direct ways to impose their will. Obfuscation is generally not as strong or certain as these more direct methods, and it is only rarely adopted by powerful actors—and then usually to evade the notice of other powerful actors.17 Stronger actors have less of a need to resort to obfuscation because they have better methods available if they want to hide something—among them secret classifications, censorship, trade secrets, and threats of state violence. So let us consider the less powerful members of society who may reach for obfuscation to even the odds.

To people who are not well off or politically influential and not in a position to refuse terms of engagement, to people who aren’t technically sophisticated or savvy enough to utilize strong encryption, and to people who want discounts at the supermarket, free email accounts, and cheap mobile phones, obfuscation offers some measure of resistance, obscurity, and dignity, if not a permanent reconfiguration of control or an inversion of the entrenched hierarchy. As Anatole France put it, “the law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges and steal bread.”18 For those whom circumstance and necessity oblige to give up data about themselves—those who most need the shelter of the bridge, however ad hoc and unsatisfying it may be in comparison with a proper house—obfuscation provides a means of redress.

What we have called power asymmetries map closely onto traditional vectors of power—wealth, social class, education, race, and so forth. In today’s data-driven societies, epistemic or information asymmetries are highly consequential. Obfuscation may provide cover against known, specific threats, but also may offer protection against lurking but poorly understood threats from uncertain sources (government or corporate), whose presence we sense but about which we know little. We suspect these “others” are able to capture information that we generate and emanate as we move about online, engage in transactions online and off, work, communicate, and socialize, but precisely what information they capture, where they send it, how it then is used, and the logic of its impact on us we simply do not know. This is the nature of the epistemic asymmetry in its most extreme form. Under these circumstances, obfuscation may seem like flailing about in the dark, but it offers some hope against the unknown knowers.

Obfuscating against direct exertions of power and control is resistance of a familiar kind, but the shield that obfuscation may promise against lurking, unknown adversaries calls to mind a different political threat. In his book Republicanism: A Theory of Freedom and Government, Philip Pettit prefers a definition of freedom not as actual non-interference but as non-domination—that is, security against arbitrary interference: “not just that people (or other actors, such as governments or corporations) with a power of arbitrary interference probably will not exercise it, but that the agents in question lose that power: they are deprived of the capacity to exercise it, or at least their capacity to exercise it is severely reduced.”19 Viewed from the weak side of the epistemic asymmetry, we may be aware that information about us and information emanating from our activities, online and off, is accessible to those higher up on the scale, often in the form of rationalized information assemblages—profiles that can be used to control us directly or indirectly and to decide what we can and can’t have and where we can and can’t go. As societies embrace the promise of big-data analytics, and as correlation and clustering assume a dominant role in decision making, individuals may increasingly be subjected to decisions that “work” statistically but don’t “make sense.”20 Our freedom is compromised not only when we are prevented from having or doing what we want, but also when others have the capacity to exercise this power in ways that we don’t understand and that we experience as arbitrary. Domination is precisely this, according to Pettit. Republicanism doesn’t preclude non-arbitrary subjection to suitable forms of law and government; it requires only that individuals be secure against arbitrary interference, “controlled by the arbitrium—the will or judgment—of the interferer: to the extent, in particular, that it is not forced to track the interests and ideas of those who suffer the interference.”21

Those on the wrong side of the power and knowledge asymmetries of an information society are, as we have argued, effectively members of its less well-off class—subjects of surveillance, uncertain how it affects their fates, and lacking power to set terms of engagement. Consequently, in developing policies for a society deemed just according to Rawls’ two principles,22 those on the wrong side of the asymmetries should be allowed the freedom to assert their values, interests, and preferences through obfuscation (in keeping with ethical requirements), even if this means impinging on the interests and preferences of those on the right side of knowledge and power asymmetries. Thus, having seen to the ethical requirements of the first principle, social policy guided by the second, maximin principle and aimed at resolving the conflicting interests and preferences inherent in the cases we have discussed should take heed of the important work obfuscation can do to raise the standing of those on the losing end of entrenched power and knowledge asymmetries.

For the welfare of others

We end this section with what may well be the toughest challenge confronting data obfuscation: whether it can be tolerated when it aims at systems that promise societal benefits extending beyond the individual subjects themselves. As we enter deeper and deeper into the epistemic and decision-making paradigm of big data, and as hope is stoked by its potential to serve the common good, questions arise concerning the obligation of individuals to participate.23 Obfuscators may be faulted for being unwilling to pay costs for benefits, failing to pitch in for the sake of the common good. But what exactly are the extent and the limits of this obligation? Are individuals obligated to pay whatever is asked, succumb to any terms of service, and pitch in even if there is a cost? Do sufferers from a rare disease, for example, owe it to others to participate in studies, and to allow data about them to be integrated into statistical analyses in which the size of N improves the results, even when doing so carries a cost?

The plight of the ethical obfuscator resembles that of the ethical citizen expected to contribute to the common good by, say, paying taxes or serving in the military. Some might say, equivalently, that we must fulfill an obligation not only by contributing to the common store of data but also by doing so honestly, accurately, and conscientiously. Even if there is some sense of obligation, what principles govern its shape, particularly if there is risk or cost associated with it? Ethics, generally, doesn’t require supererogation, and liberal democracies don’t demand or condone the sacrifice of innocent individuals, even a few, for the benefit of the majority. Where to draw the line? What principles of justice offer guidance on these matters?

Jeremy Waldron observed that after the terrorist attacks of September 11, 2001, citizens were asked to allow the balance of security and liberty to be tipped in favor of security.24 Although it isn’t unusual for social policy to require tradeoffs—one value, one right against another or others—Waldron reminds us that such tradeoffs must be made wisely, with fastidious attention to consequences. One particular consequence is the distributional impact; losses and gains, costs and benefits should be borne fairly among individuals and between groups. Waldron’s worry is that when we say that we collectively give up a measure of freedom in return for our collective security there is an important elision: some individuals or groups suffer a disproportionate loss of freedom for the security benefit of all, or, as sometimes happens with tradeoffs in general, may even be excluded entirely from the collective benefits. Generalizing this warning to questions about paying for the collective good with individual data prompts us to consider not only the total sum of costs over benefits but also who is paying the cost and who is enjoying the benefits. Often, companies defend data avarice by citing service improvements or security but are vague about crucial details—for example, whether existing customers and data contributors are supporting new ones who haven’t pitched in, and what proportion of the value extracted accrues to “all” and what proportion to the company. These questions must be answered in order to address questions about the nature and extent of the obligations data subjects have to contribute to the common data store.

Risk and data

The language of risk frequently crops up in hailing the promise of big data for the good of all. Proponents would have us believe that data will help reduce risks of terror and crime, of inefficacious medical treatment, of bad credit decisions, of inadequate education, of inefficient energy use, and so forth. These claims should persuade or even compel individuals to give generously of information, as we graciously expose the contents of our suitcases in airports. By the logic of these claims, obfuscators are unethical in diminishing, depriving, or subverting the common stock. Persuasive? Irrefutable? Yet here, too, justice demands attention to distribution and fairness: who risks and who benefits? We do not flatly reject the claims, but until these questions are answered and issues of harm and costs are addressed there can be no such obligation. Take, for example, the trivial and ubiquitous practice of online tracking for the purpose of behavioral advertising.25 Ad networks claim that online tracking and behavioral advertising reduce the “risk” of costly advertising to unsuitable targets or of targeting attractive offers to unprofitable customers. Risk reduction it may indeed be, but information contributions by all are improving the lot only of a few, primarily the ad networks providing the service, possibly the advertisers, and perhaps the attractive customers they seek to lure. We made a similar point above when we discussed data aggregation for the purpose of reducing credit fraud: citing risk reduction often oversimplifies a picture in which risk may not be reduced overall, or, even if it is reduced, not reduced for all. What actually occurs is that risk is shifted and redistributed. Similar cautions apply to inappropriate disclosure of medical information, which may increase risk for some information subjects while decreasing it for others, and to the collection and mining of data for purposes of price discrimination, which imposes risks on consumers under surveillance while reducing risks for merchants who engage in schemes of data profiling.

In sum

Data obfuscation raises important ethical challenges that anyone designing or using obfuscating systems would do well to heed. We have scrutinized the challenges and explored contexts and conditions that are relevant to their adjudication in ethical terms. But we also have discovered that adjudicating ethical challenges often invokes considerations that are political and expedient. Politics comes into play when disputes hinge on disagreements over the relative importance of societal ends and the relative significance of ethical and societal values. It also comes into play when addressing the merits of competing non-moral claims, the allocation of goods, and the distribution of risks. When entering the realms of the political, obfuscation must be tested against the demands of justice. But if obfuscators are so tested, so must we test the data collectors, the information services, the trackers, and the profilers. We have found that the breathless rhetoric surrounding the promise and practice of data does not say enough about justice and the problem of risk shifting. Incumbents have embedded few protections and mitigations into the edifices of data they are constructing. Against this backdrop, obfuscation offers a means of striving for balance, defensible when it functions to resist domination of the weaker by the stronger. A just society leaves this escape hatch open.