For many scholars who are concerned about surveillance of Internet users, demanding transparency (or greater transparency) in data-mining practices may seem a promising approach (Lyon 2007, 181–183; Danna and Gandy 2002).1 The hope is that transparency about data-collection practices would serve as an antidote to the erosion of online privacy. The argument runs: when data-collection practices are revealed, users become aware of how they are being watched and adjust their behaviour accordingly. They may opt out of the activity, change their behaviour, protest, or take some other action. They might also be able to negotiate changes to data uses. Transparency empowers, so the story goes.
In this chapter we challenge the idea that transparency always or necessarily protects the public good and will therefore protect Internet users. We claim that when transparency is constituted on the Internet, the accounts produced are never simply the “truth” about what is happening, as the term “transparency” might suggest. We use the case of campaign finance disclosure (CFD) in US elections to show that, in effect, transparency is not transparent. Put differently, transparency systems do not construct glass houses that allow others to see what is happening inside; rather, they pull data about people and institutions into a house of mirrors in which the observer can “see”, at best, a partial construction—a mediated glimpse—of what those being watched are doing. The “house of mirrors” metaphor serves to describe and analyze what happens in transparency accounts, and to understand the limitations as well as the promise of transparency on the Internet.
Elsewhere, we have argued that transparency systems have important structural similarities to surveillance systems and, consequently, the distinction between these two types of systems is blurry (Johnson and Wayland 2010). Indeed, our analysis of CFD illustrates how a system instituted for transparency of campaigns gets used for surveillance of donors. The analysis here begins with some background on CFD, its underlying metaphor, and its workings, before presenting the house of mirrors model and using it to analyze the CFD system.
Campaign finance disclosure is a central component of election law in the US and in many other countries. These laws require campaigns to disclose the names (and other identifying data) of those who contribute to a campaign and to specify how donated funds have been spent. Such regulations fall under the general category of “sunshine” laws. More specifically, CFD is thought to be a form of transparency, in the sense that it enables the populace to “see” what candidates are doing, which in turn functions as an important means by which democratic governments are held accountable to their electorates. CFD is, perhaps, the epitome of a transparency system aimed at democratic accountability because it involves the ultimate moment of accountability in any democracy, the moment when citizens exercise their voting power.
Focusing just on the US, the first federal campaign disclosure law was the Publicity Act of 1910, also referred to as the Federal Corrupt Practices Act. This law required disclosure of contributions to national party committees or multi-state committees, but not until after the election (Corrado 2005, 8). Congress amended the law the following year to require disclosure immediately prior to and after an election. In 1925, the Federal Corrupt Practices Act introduced quarterly reporting to the disclosure regime (Corrado 2005, 9). Yet these rules went largely ignored because they were not enforced (Potter 2005, 125). Federal disclosure law changed little until the Federal Election Campaign Act of 1971, followed by major amendments in 1974 and 1976. These subsequent acts of Congress greatly expanded disclosure requirements, covering all large federal campaigns and committees, not just national or multi-state committees. Further, the acts established the Federal Election Commission (FEC) as the regulator of this new system.
In 1995, Congress required the FEC to create a technological and regulatory structure that allowed campaigns to file their reports electronically, and a 1999 law made electronic filing mandatory. Most recently, the Bipartisan Campaign Reform Act of 2002, better known as the McCain-Feingold bill, extended disclosure provisions to third-party “issue advocacy” campaigns and required the FEC to post reports on the Internet within 48 hours of receipt.
Because we will focus later on a particular campaign in California, it is important to note that campaign finance laws have a longer history in California. In 1893, California state legislators introduced a disclosure system that foreshadowed the Federal Corrupt Practices Act (Ansolabehere 2007, 169; Thompson 1953). Indeed, California made disclosure the central element of its anti-corruption efforts. Unlike the federal system, between 1949 and 1988 the state placed no limits on the size or source of contributions, although the state legislature did briefly limit the amount that campaigns could spend. A 1988 law added limits on donations to candidates, but not to ballot initiatives (Baber 1989). And electronic disclosure of donations became law in 2000 (CA GOVT § 84605 (a)).
The basic rationale for campaign finance laws has been upheld in court, with the Supreme Court case Buckley v. Valeo (1976) establishing the prevailing wisdom. That case evaluated whether the restrictions of the Federal Election Campaign Act were justified, given the burden they placed on political expression protected by the First Amendment. The ruling laid out three compelling arguments for disclosure. First, disclosure would help deter corruption. Second, it would aid enforcement of anti-corruption laws. Third, disclosure would provide the public with useful information about political candidates, increasing “voter competence”. The courts have sustained, with some alterations, these basic justifications over the years in the face of a variety of challenges (Potter 2005). Indeed, in a recent case, Citizens United v. Federal Election Commission (558 U.S. 50 (2010)), the Supreme Court struck down long-standing restrictions on corporate spending on elections but explicitly stated that such spending could still be subject to disclosure.
Putting campaign donation data on the Internet is believed to improve the efficiency of managing this information, though some would argue that the most significant benefit of putting information about campaign contributions online is that it makes the information available to regulators and the general public (including public interest groups) in a medium that allows them to analyze the data more effectively. This effectiveness derives from combining powerful data-mining techniques with the speed of electronic disclosure (Holman and Stern 2000).
Still, those who support posting CFD data on the Internet tend not to focus on the consequences of increased efficiency. In a 2000 report titled “Access delayed is access denied”, the watchdog group Public Citizen acknowledged a curious side benefit to the campaigns themselves: now the candidates, too, could more easily mine their own donation data for targeting donors with greater precision (Holman and Stern 2000). This use of the data goes well beyond the stated rationale of the system and thereby hints at the potentially profound consequences of pushing for increased efficiency.
Transparency has been prevalent in governance theory going back to Jeremy Bentham (1748–1832), who first used the term in this way, although he was echoing earlier ideas from as far back as the ancient Chinese and Greeks (Hood 2006). In recent decades, however, transparency has become a much more common concept in endeavours to improve government and reduce corporate malfeasance. In tandem with notions of good governance, transparency has mostly been embraced as a positive public policy, even a necessary one for a globalizing world (Florini 1998). Various forms of “government in the sunshine” laws and corporate disclosure requirements are said both to foster the confidence of citizens and investors, and to reduce health and safety risks for consumers.
“Transparency” is a metaphor that borrows from the domain of sight and optics: by casting light on campaign financing we bring into view what previously lurked below the surface. Shedding light on the activities of institutions and political leaders is said to reveal their inner workings and allow the public to better understand, manage, and evaluate the object of this gaze. This optic metaphor is often coupled with a cleansing theme, exemplified by Justice Brandeis’s oft-quoted comment that “sunshine is said to be the best of disinfectants” (Brandeis 1914, 92). In this sense “light” exposes and destroys wrongdoing, whether in the form of corporate fraud or political corruption. The fundamental rationale for transparency, then, is to reveal the otherwise obscure workings of power, subjecting the powerful to public scrutiny and reducing the likelihood that they might abuse their positions or hide their intent. In forcing political leaders to reveal their activities and the type of people who support them, according to this argument, the public and other interested parties will be able to hold them accountable. Campaign finance laws, in particular, aim to hold candidates and campaigns accountable to their constituencies. In knowing which groups and citizens donate to political campaigns, we are supposed to better know the campaigns themselves and thus be better able to detect corruption in the democratic process. Supporters of such systems argue that they discourage deal-making in “smoky back rooms”, where wealthy supporters use donations to influence a candidate’s positions on issues of interest to them. In addition, by knowing who supports a candidate, the public is supposed to better understand the candidate’s political leanings.
For decades, CFD reports were difficult to access and therefore examined only by the most diligent journalists. They were difficult even for regulators to use. However, when CFD information is posted on the Internet, it is much more easily accessible and, consequently, much more widely viewed. The reports are, in effect, broadcast. Once the campaigns’ data are put into electronic databases, journalists, interest groups, and regulators can more easily search for patterns in the timing or the sources of donations that might suggest some corruption.
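For instance, a few lines of code can flag possible “bundles” of same-employer, same-day contributions, the kind of pattern a journalist or regulator might investigate further. The sketch below is only illustrative: it assumes the disclosure reports have been exported to a CSV file, and the file name and column labels are hypothetical.

```python
import csv
from collections import Counter

def bundled_donations(csv_path: str, min_count: int = 5):
    """Count disclosed donations by (employer, date) and flag clusters.

    A run of same-employer, same-day gifts is the sort of pattern
    watchers search for. Column labels here are hypothetical.
    """
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["employer"], row["date"])] += 1
    return {key: n for key, n in counts.items() if n >= min_count}

# Report every suspicious-looking cluster in the filing.
for (employer, date), n in bundled_donations("contributions.csv").items():
    print(f"{n} donations from employees of {employer} on {date}")
```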
Importantly, due to this searchability of electronic databases, it is not just campaigns that are scrutinized. Donors also are subject to a new level of examination. The donation reports are large collections of accounts about individual contributors—including their names, hometowns, postal codes, occupations, and employers. Thus, both campaigns and donors become “watched” entities, and the wide electronic availability of accounts of both attracts an array of “watchers”.
The addition of these watchers creates entirely new forms of accountability. Donors suddenly are watched not only by campaigns (who target them for more donations and for possible volunteer work), but also, potentially, by neighbours, professional colleagues, friends, and family, all of whom can easily view the information on the Internet. These uses of donor data take transparency far beyond simply attesting to the ethical or unethical influence of money on politicians. These practices make transparency into a system for further tracking of individuals. In short, the transparency system becomes a surveillance system.
The distinction between surveillance and transparency, we acknowledge, can be vague. We use the term surveillance to invoke the common understanding of surveillance as a negatively valued exercise of power. Such a tactic calls attention to the radical shift that takes place as the system focuses on individual citizens in addition to political parties and candidates. We roughly distinguish surveillance from transparency in the following way. Generally, surveillance systems track and monitor individuals’ behaviour, sort them into groups, and treat them on the basis of generalizations about the group. Further, surveillance takes place largely without the consent of the individual being watched. Transparency, on the other hand, is generally understood to refer to practices in which organizations (and sometimes individuals) reveal information about their behaviour. Whether they do so because it is required by law or because they are trying to control their public image, the organizations or individuals have some control over the information that they disclose.
In the shift from paper and ink accounts of campaign finance to electronic records posted on the Internet, new actors (both watched and watchers) are drawn into the system, and new rationales and new forms of accountability are constituted. The challenge of Internet studies is to understand what happens when information, individuals, practices, or institutions are constituted online. Although it might seem that the newly constituted systems are more transparent because the information is widely available, the language of transparency is misleading. Accounts of campaigns are being constituted, and they are being distributed widely, but the process can hardly be characterized as “laying bare” or seeing into what is actually going on. Most information theory recognizes that “informating” shapes what is produced (Zuboff 1985).
If not providing the glass through which viewers can “see” what is going on, what happens when campaign finance disclosure is instrumented on the Internet? The system may be best characterized as a house of mirrors.
What does it mean to say that campaign finance accounts are produced in a house of mirrors? A house of mirrors is full of reflection and imaging; a person standing in a house of mirrors sees aspects of his or her body elongated, shortened, exaggerated, multiplied, fragmented, and reconfigured. The house of mirrors is a complex of projection, bouncing, highlighting, and shading that produces a surprising portrait of a person. One sees oneself in what seems like a distorted version, a rendering that is out of whack with ordinary experience. Of course, the distortion is far from random; it is the result of the nature of mirrors, the positioning of the mirrors in question, the lighting, and so on. In everyday life, houses of mirrors are often built as a form of amusement. Hence the colloquial term “funhouse”.
Using the house of mirrors as a metaphor gives us a novel way of understanding Internet-based data systems. Paper and ink data have very different properties from data constituted and processed on the Internet. Playing out the metaphor, at least four processes can be identified in the production of Internet-instrumented information systems: entry, bouncing, highlighting and shading, and rendering. Although each of these processes might be thought of as architectural (i.e., resulting from the nature of information technology), they are socio-technical processes; cultural assumptions and norms always guide both the workings of the system and the meanings derived from them. The outcome of these processes constitutes an account (or accounts) delivered in the name of transparency, but processed and infused with system-specific assumptions and values. The case of campaign finance disclosure illustrates these processes.
The first thing that occurs when a person enters an actual house of mirrors is that an image of the person is reflected off the mirrors. The individual can then see an image of herself in a mirror (or mirrors). Similarly, when someone donates to a campaign, the campaign creates a record (a reflection of the donor). This initial record-creation is a legal requirement. In the US system, donors must supply five items of personal information; campaigns are required to gather and record this information. Although the donor may not “see” the record created in the campaign database, the donation means that she has entered a system. An image—a reflection—of the donor has been created. Indeed, multiple images have been created because the record of the donation is also a record of the campaign; the reflection of the donor is a reflection of the campaign.
Donors are required to supply their name, employer, occupation, home address, and the amount donated. In this respect, the reflection of the donor (and campaign) is selective and limited. It is a reduction of the person, as any representation, informated or not, must be. In requiring certain information, American CFD law singles out certain aspects of donors that are deemed relevant. That it is a reduction can be seen clearly when we consider what citizens might want to know about donors and campaigns, but is not required in CFD. For example, the required information does not include a donor’s motivation in contributing to the campaign, the percentage of the person’s total wealth that is donated, the person’s country of birth, age, gender, or party affiliation.
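The reduction can be made concrete by picturing the record created at entry as a fixed schema of five fields. The sketch below is purely illustrative; the field names are our shorthand, not the FEC’s actual reporting format, and the sample donor is invented.

```python
from dataclasses import dataclass

@dataclass
class DonationRecord:
    """Illustrative reduction of a donor to the five required data points.

    Field names are our own shorthand, not the FEC's actual schema.
    """
    name: str          # who gave
    employer: str      # for whom the donor works
    occupation: str    # what the donor does for a living
    home_address: str  # where the donor lives
    amount_usd: float  # how much was given

# Everything else about the person (motive, wealth, age, gender,
# party affiliation) falls outside the schema, and so outside the account.
record = DonationRecord("Jane Q. Donor", "Acme Corp.", "Engineer",
                        "123 Elm St, Springfield, CA 90000", 250.00)
```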
The reduction of individual donors into the required information is the expression of a set of cultural values and norms about which aspects of campaigns need to be monitored. Each of these data points derives from a set of assumptions about human nature, interests, and corruption. The overriding norm is that of the entrepreneurial individual; that is, persons have material interests that they seek to further. Donations to political candidates are a central way through which they can pursue their interests. In American democracy, some forms of influence are legitimate (supporting candidates for their beliefs) and others are not (seeking a quid pro quo). Thus, the data first tie the donation to an individual person to make sure that no individual exceeds the maximum donation. Once the individual is isolated, his or her interests can then be identified, starting first with occupation and employer. Occupational role is thought to be the major driver of influence-seeking, and collecting this information therefore helps regulators and the public look for undue influence. Transparency supports democracy by not letting such influence go on in the dark. By attending to these particular data, however, the system draws attention away from any number of other possible motives for seeking influence. These might include religious or social values, community interests, or wealth not tied to a specific occupation, among other motives. Instead, the system’s norms presume that citizens need to know who contributes to campaigns, what the donors do for a living, and for whom they work.
That the campaign creates an account of itself (by gathering the required information about its donors) may seem insignificant because the data gathered are simple and required. However, the fact that campaigns develop accounts of themselves is important because it points to a feature of transparency systems that, as we have seen, differentiates them from surveillance systems. Campaigns are watched, but they are watched through information that the campaign produces and turns over to others. In most surveillance systems, those who are being watched are passive in the data collection process; others develop accounts of them and use those accounts to make decisions about the watched and/or give those accounts over to others. Yet in this transparency system, the data provided by campaigns easily become reflections of individual donors. Simply by reflecting donors, CFD—a system developed in the name of transparency—comes to resemble a surveillance system.
In short, then, when individuals make a donation to a campaign, they enter a house of mirrors. Entry involves creation of a record, a record that is a reduction insofar as it is based on a selective set of data, and the selection has been made on the basis of ideas, theories, values, and concerns that, presumably, justify the selection. In other words, embedded in the reflection are norms about “what matters” in political campaigns and what constitutes political corruption. Whether she likes it or not, the record of a donor’s contribution may be taken as a public expression of political patronage.
For people moving through a house of mirrors, one of the oddities of the experience may be sharing the misshapen or distorted image of themselves with companions. The image bounces off the mirror to the person’s own eyes, as well as to the eyes of others, and sometimes it even merges with other reflections from nearby mirrors. The images might appear in a fragmented way to others in the house who are strangers. This may happen even without the person’s knowledge, for example, when someone else sees an image of her that she cannot or does not see.
There are parallels in the “digital” house of mirrors. Once a record of a donation has been created by a campaign, it is merged with records of other donations (received during a given reporting period); the merged information is then submitted to the FEC; and the FEC posts the report on the web. Once posted on the web, the original reflection of the donor (and thereby, the campaign) is bounced (replicated) from one location to another, without the consent or control of the donor. This bouncing nicely parallels what seems like an infinite regress of images when two mirrors are positioned face to face. Once posted on the web, campaign finance data can be copied, reformatted, and reposted ad infinitum. They can move to unexpected places, with unpredicted results.
In American CFD, because the information is posted on public websites, entries on individual donations bounce from the databases of the campaign to those of regulators to those of watchdog groups, journalists, law enforcement, neighbours, family, and friends. This is possible because of the affordances of IT (see boyd 2008). At each place, the data can be easily and almost perfectly replicated and transmitted. When the FEC posts the reports online, journalists, watchdog groups, other data repositories, and even citizens can download them in their entirety. The data can be searched quickly, and they can be mined for relationships that might not be immediately apparent. People can search within their neighbourhood to find what their neighbours are doing. Campaigns and political consultants can link the donation databases to other databases to better target fund-raising and advertising. Finally, the data persist in web-linked databases, ready to be recalled when an interested user searches for them.
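A neighbourhood search of this kind requires no special expertise. The following sketch assumes a downloaded CSV of disclosed donations; again, the file name and column labels are hypothetical stand-ins for whatever a given repository provides.

```python
import csv

def donors_in_neighbourhood(csv_path: str, zip_prefix: str):
    """Return disclosed donations whose ZIP code starts with a prefix.

    Assumes a CSV export with 'name', 'zip_code', 'employer', and
    'amount' columns (labels are hypothetical).
    """
    matches = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["zip_code"].startswith(zip_prefix):
                matches.append(row)
    return matches

# e.g. every disclosed donation from one postal neighbourhood:
for row in donors_in_neighbourhood("contributions.csv", "22901"):
    print(row["name"], row["employer"], row["amount"])
```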
How the information bounces and becomes available for unexpected purposes can be illustrated with a recent example from California. In 2008, a ballot initiative in California known as Proposition 8 sought to ban gay marriage. Many groups poured resources into advertising for or against the controversial measure. When the proposition passed (effectively banning gay marriage), many opponents of the ban were outraged with the result and sought to figure out how proponents had succeeded. Because state campaign finance laws require CFD data to be posted online, these opponents were able to develop a database of people who had funded the campaign in favour of the new law. An anonymous programmer mashed up the names and geographic locations of the donors with Google Maps, producing www.eightmaps.com, a site where any visitor could see which people, in which neighbourhoods, had contributed to the campaign. With some fairly simple coding, then, the record of a donor’s contribution was “bounced” to a wide and very interested audience. As a result, many of the individual donors were targeted with insults, threats, and boycotts (Stone 2009).
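Just how simple that coding is can be suggested with a sketch. Assuming donor records that have already been geocoded (the names, amounts, and coordinates below are invented), a few lines suffice to emit a KML file of placemarks that a mapping service such as Google Earth or Google Maps can display:

```python
# Sketch of an eightmaps-style mashup: donor records (assumed already
# geocoded) are written out as KML placemarks for a web map to display.
donors = [
    {"name": "J. Smith", "amount": 1000, "lat": 37.7749, "lon": -122.4194},
    {"name": "A. Jones", "amount": 500,  "lat": 34.0522, "lon": -118.2437},
]

placemarks = "\n".join(
    f"""  <Placemark>
    <name>{d['name']} (${d['amount']})</name>
    <Point><coordinates>{d['lon']},{d['lat']}</coordinates></Point>
  </Placemark>"""
    for d in donors
)

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Document>
{placemarks}
</Document>
</kml>"""

with open("donors.kml", "w") as f:
    f.write(kml)  # load this file in any KML-aware map viewer
```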
It is one thing to explain the bouncing of images in a house of mirrors that is intentionally designed to create fun, and quite another to explain why campaign finance information moves so freely on the Internet. Part of the explanation no doubt has to do with the nature of the Internet. Free flow of information on the Internet is part of the Internet’s architecture and history; it is, in some sense, what was sought after in building the Internet. Still, the fact that the Internet makes free flow of information possible does not explain why the decision was made to post CFD information on the web. This decision may best be understood as a coalescing of Internet capability with transparency (and democratic) theory. Ideas about the role of transparency and its deleterious effects on corruption must have played a role in the decision to post contribution data in a medium that would allow for so much bouncing.
So, once data are put into a digital house of mirrors, they bounce all around. When the data bounce from location to location and from user to user, their meaning, significance, and use change. Arguably, in the case of the Proposition 8 campaign, a system of transparency was transformed into a system of surveillance. What originally had been an explicit disclosure of information by the initiative-supporting campaign, meant to better inform voters, became a means of profiling individual donors and targeting them for retribution. Nevertheless, the full significance of bouncing cannot be seen until the highlighting and shading that go on in the process are described.
The images that bounce through a house of mirrors are transformations of the initial reflection of a person. The transformations that occur might be thought of as highlighting and shading. In a real house of mirrors, this has comic effects. One sees one’s nose as the largest component of one’s face, or one sees oneself with big feet, but essentially no legs. In the digital house of mirrors, the effects are less comical and can be more profound and enduring. As reflections are bounced from place to place, the information is recontextualized and repurposed according to the interests and actions of the watchers. Once information is available on the web, additional watchers may shine light on the individual or entity in unpredictable ways and at unexpected times, neglecting aspects that were important in the original context, and drawing attention to what was minor or poorly understood information in the original. The ramifications of the highlighting and shading are significant. Among other things, data collected for one purpose may be used for another, that is, those who gave the data for one purpose may discover that they have revealed something they had no idea they were revealing.
As already mentioned, CFD donor information is bounced to innumerable others. These others can use the information for whatever purpose they like, with the exception that the data cannot be used for commercial purposes (2 U.S.C. §438(a)(4)). They can mine the data, merge them with other data, and redistribute the results. Anyone can become an intermediary, and the intermediaries may or may not have anything to do with CFD and why the system was created. Even if their motivation is consistent with the intent of CFD, intermediaries can use the data in ways unimagined in the decision to post the data online.
The press is one of the most powerful intermediaries, and reporters routinely scour disclosures for signs of influence and indications of a candidate’s political leanings. Watchdog groups, too, pore over this information looking for threats to the public interest. Opposing candidates and opposition researchers also probe the data seeking any hint of scandal tied to individual donors, such as contributions from corrupt business leaders or ineligible donors. Opponents may look for classes of donors, such as trial lawyers, health care organizations, or oil companies, who could shape or fit into a narrative that reflected poorly on the candidate. In some cases, especially in major national elections, a candidate may be forced to publicly denounce acquaintances formed largely for financial expedience. Such was the case in the 2007 US presidential primaries, when then-candidate Hillary Rodham Clinton decided to return $850,000 in funds raised for her campaign by the Democratic operative Norman Hsu. Hsu was charged and later convicted in a pyramid scheme that bilked investors out of $20 million. Senator Clinton claimed that she was unaware of Hsu’s crimes, but she continues to face scrutiny regarding her relationship to him (Flaherty 2009; Solomon 2007). In this way, the availability of donor data in digital form facilitated highlighting and shading that repurposed the data, reaching well beyond the prevention or rooting out of corruption.
Sometimes a person or an aspect of a person in a house of mirrors is highlighted simply because the person stands in a particular place within the architecture of the building. In the CFD digital house of mirrors, donor contributions are, through intermediaries, accessible on the web through Google’s search engine. In this architecture, highlighting and shading become a function of a complex variety of factors that, in some sense, have nothing to do with CFD. If one searched on Google for “Kent Wayland” (one of the authors of this chapter) in 2009, one of the top results would have been a link to a database available at The Huffington Post, an online newspaper, where users could access campaign donation information by name, zip code, date of donation, campaign season, etc. The web tools available at the site would allow the user to browse recent political donations downloaded from the FEC, repackaged, and indexed by Google’s web crawler. Although Wayland’s campaign donations were relatively minor, they made up a significant component of his online identity due to the high ranking Google gave these search results. Information on Wayland’s contributions was a highlighted aspect of his web presence because of the combination of the way Google works, The Huffington Post’s popularity, and other incidental factors: Wayland’s name is not especially common and his web presence was not especially extensive.
Returning to the Proposition 8 example, although only the minimal required information was collected on donors to the pro-Proposition 8 campaign, that information, when combined with another system (Google Maps), highlighted the locations of the donors. This highlighting made the donors available for intimidation and reprisals. In effect, the website rendered all donors equivalent, tarring them all as antigay and shading other possible motives for a donation. Other possible motives could be, for example, the desire to create a favourable impression on some third party, the desire to distract attention from one’s own sexual preference, or the willingness to follow the suggestion of a pastor or other advisor. In all likelihood, donors had not imagined that the record of their contributions would lead to the inference that they were antigay zealots, or that it would lead to personal harassment. Indeed, the use of donor data in this way seems antithetical to democratic elections, as the US Supreme Court recognized when, in the Buckley v. Valeo case, it allowed for an exception to disclosure when donors might be targeted for unpopular views. It may also be seen as encroaching on the privacy of the vote (Johnson, Regan, and Wayland 2011).
The CFD digital house of mirrors is, then, like a real house of mirrors insofar as certain aspects of a person are highlighted and shaded. This highlighting and shading takes place as a result of the architecture of the Internet, the design of the databases, and the relationship of these components to human interests and purposes. The initial reflection created upon entry into the system is bounced to a variety of audiences who in turn highlight and shade it according to their own interests.
Eventually, when a person exits a real house of mirrors, the generation, bouncing, highlighting, and shading of images stop. The experience one had in that house of mirrors might be likened to seeing a cubist or surrealist portrait of oneself. Features were selected, fragmented, and reconfigured into a very different representation of one’s body (self). The individual may remember a series of these images or a concatenation of images that make her see herself differently.
In the case of the digital house of mirrors, what results is not a memory of one’s distorted body, but an account (or many accounts) that has been rendered. “Render” here carries the connotation of something (someone) being taken apart and then transformed into something different. The rendered accounts—the equivalents of cubist or surrealist portraits of donors and campaigns—are the outcome of reduction, bouncing, and highlighting and shading.
Posting CFD data on the web means that many different watchers can render many different accounts: accounts of donors and campaigns, accounts produced by a range of actors, accounts that are used for a variety of purposes. There may, in fact, be no exit from this house of mirrors because the data persist, ever-ready for the rendering of additional accounts. Further, any of the rendered accounts may become the starting place for new matching and mining processes and new interpretations of the data that lead to yet more renderings (accounts). Reflections of individuals can, thus, get caught in an infinite loop.
Referring to what is produced as an account—or multiple accounts—has the benefit of suggesting that what is going on is accounting; it makes explicit that what is at stake is accountability. In the case of CFD, donors and campaigns are being held accountable. As we have argued elsewhere (Johnson and Wayland 2010), accountability can be thought of as a triad in which there are watchers, watched, and accounts. Renderings are selective, processed accounts of those who are watched, and they are used by watchers in powerful ways: they lead to consequences for the watched (such as accusations of corruption).
The renderings produced in the digital house of mirrors draw on specific cultural assumptions and norms, as we have shown in earlier stages. These cultural assumptions and norms will vary with different watchers who are pursuing different ends as they render accounts. In the case of Proposition 8 in California, the data about donors supporting the Proposition 8 campaign—when combined with the cultural assumption that donating to this cause was an affirmative act of resisting gay marriage—rendered those donors as antigay people who deserve scorn, protest, or even retaliation.
The rendering of accounts, then, is the final step. It involves pulling together all the bounced, highlighted, and shaded images into an account that is tailored to, and coherent for, the purposes of a particular watcher. Furthermore, the multiplicity of the system means that a number of different watchers can render their own accounts, based on their own cultural assumptions and norms. They use this rendered account to effect some consequence, to hold the watched accountable in some way.
Although the house of mirrors metaphor has the potential to be used in many different contexts, including those involving transparency as well as other systems instrumented on the Internet, the metaphor should not be interpreted as more than it is: an extended metaphor. It is a heuristic device used to uncover some of the ways that data are transformed and repurposed. The metaphor might be thought of as a technique used to get at what happens behind the scenes (backstage) of systems instrumented on the Internet. In this respect the metaphor is targeted to counter the tendency to think that the Internet provides unmediated access.
The four processes identified—entry, bouncing, highlighting and shading, and rendering—are not the only processes that might be identified behind the scenes of any given system. Yet the four processes are inextricably intertwined. This can be seen by contemplating changes that might be made in a system. Perhaps the most powerful stage is entry. When information is entered into a database, it becomes available for bouncing, highlighting and shading, and rendering. Different data (less or more, or of a different kind) mean different consequences for bouncing, highlighting and shading, and rendering. Similarly, if bouncing is restricted, for example, when data are posted in read-only form, then the possibilities for highlighting and shading are constrained as well, and this, in turn, means constraints on accounts rendered.
What, then, is the significance of thinking about transparency as a house of mirrors? One obvious answer is that it allows us to see that transparency is not transparent. The notion of transparency suggests that we see persons or institutions as they are. This is misleading. What we see in the case of CFD is a system, aimed at transparency and instrumented through the Internet, rendering accounts of individuals and campaigns that can be likened to cubist paintings. The accounts rendered involve reduction, selection, multiplication, highlighting and shading, and recontextualization: a dramatic transformation. Hence, we draw the conclusion that transparency is not transparent.
Does this mean that transparency should be rejected as a goal or potential remedy for Internet surveillance? Such a conclusion does not seem justified; it overextends what the house of mirrors metaphor allows us to see. In fact, the house of mirrors metaphor offers a better way to achieve what is often aimed at in transparency systems. When we stay with the metaphor, we are more likely to see and ask how the mirrors might better be arranged. We should be asking not for transparency per se, but for systems that reveal information appropriate to the context and without unintended consequences. For example, one general rule of thumb might be to limit, as a matter of policy and of computer coding, the extent to which digital information travels, so that it cannot move too far beyond its original or appropriate context. Fung et al. (2007) argue for just this kind of “targeted” transparency, and thinking in these terms might provide a useful way of keeping the information from bouncing too far within the house of mirrors.
In campaign finance disclosure, recognizing the structure of the system as constituted suggests that we consider either collecting more information and placing tighter controls on it or, paradoxically, collecting far less information. For example, one way of rearranging the mirrors in campaign finance disclosure would be to allow only government enforcement agencies access to identifying information about individual donors, while publicly releasing useful systemic and aggregated information about campaign donors. Such information might include more data points than are currently collected and thus capture a broader picture of how candidates might be influenced, without compromising the privacy of individual donors. Voters might be interested to know, for example, the relative income of donors, whether they are homeowners, or their race or ethnicity. On the other hand, greater secrecy and less information might afford a more practical solution to the problem of transparency in campaign funding. Here, we might imagine, as others have proposed, candidates funding their campaigns from something like a blind trust (Ayres 2000). In that scenario, donors would support campaigns by donating to the trust, and their donations would remain anonymous both to the public and, crucially, to the campaign itself. If the reflection does not bounce to the campaign, then elected officials cannot reward their patrons. With no reflection, there is no possibility of corruption, at least in the traditional form of a quid pro quo. (This approach, it must be said, would not address the recent controversy over anonymous corporate funding of political ads in the US.)
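The first of these rearrangements can be pictured in code: the regulator retains the individual records, and the public release is computed as aggregates only. The following is a minimal sketch, with invented record fields and bucket boundaries, not a worked-out policy design.

```python
from collections import Counter

def public_release(records):
    """Reduce individual donor records to aggregate statistics.

    Individual identities stay with the regulator; the public sees
    only systemic information. Fields and buckets are illustrative.
    """
    by_occupation = Counter(r["occupation"] for r in records)
    by_size = Counter(
        "small (<$200)" if r["amount"] < 200
        else "medium ($200-$999)" if r["amount"] < 1000
        else "large ($1000+)"
        for r in records
    )
    return {"total": sum(r["amount"] for r in records),
            "donors": len(records),
            "by_occupation": dict(by_occupation),
            "by_size": dict(by_size)}

records = [{"occupation": "Engineer", "amount": 250},
           {"occupation": "Lawyer", "amount": 2400}]
print(public_release(records))  # aggregates only; no names, no addresses
```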
Of course, each of these proposals carries with it some peril or disadvantage, not least of which is the probable need for greater regulation. The ultimate moral of this story might be, as Lessig (2009) has suggested, that we do away with the notion of transparency entirely and revolutionize such systems. At the very least, the US could consider adopting campaign finance policies used in other countries. In the UK, for example, there are for the most part no limits on fund-raising; the limits fall instead on spending. Individual office-seekers need only report individual contributions of more than £1,000, whereas for parties the threshold is higher, at £5,000 (Fiekert 2009). At the same time, however, candidates may not purchase ads of any kind on broadcast media or in the mainstream print media. Instead, the government requires TV and radio stations to provide free and equal airtime to candidates during a relatively short election season. In France, public financing of campaigns is accomplished by way of tax credits to individual donors, covering up to 60% of their contributions (Atwill 2009). In this way, electors “vote with their pocketbooks”, deciding which campaigns to invest public funds in and how much to invest. Copying such policies wholesale, of course, would face a steep uphill battle in the US, given the political climate and recent court decisions on campaign financing.
Finally, although the case of CFD indicates that transparency is not transparent, the underlying goal of transparency is accountability. In the case of CFD, the accountability at issue is essential to democracy. In this respect, CFD is a good case to learn from for Internet surveillance. Those operating on the Internet—especially those engaged in surveillance—should be accountable for what they are doing. Hence, the lessons of CFD should be helpful for figuring out antidotes to Internet surveillance. A number of lessons can be found in this chapter, but the most important is, perhaps, that what we call transparency is really a house of mirrors.
1 This material is based upon work supported by the National Science Foundation under Grant No. 0823363.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. The grant has funded discussion and collaboration among a team of researchers composed of the three authors and Priscilla Regan, Siva Vaidhyanathan, Alfred Weaver, and Kathleen Weston.
Ansolabehere, Stephen. 2007. The scope of corruption: lessons from comparative campaign finance disclosure. Election Law Journal 6 (2): 163–183.
Atwill, Nicole. 2009. Campaign finance: France. Washington, DC: Library of Congress.
Ayres, Ian. 2000. Disclosure versus anonymity in campaign finance. In Designing democratic institutions, ed. Ian Shapiro and Stephen Macedo, 19–55. New York: New York University Press.
Baber, Bill. 1989. California’s new campaign finance law: is section 85303(C) the life of the party. California Western Law Review 26 (2): 425–447.
boyd, danah m. 2008. Taken out of context: American teen sociality in networked publics. PhD dissertation. Berkeley: University of California.
Brandeis, Louis D. 1914. Other people’s money: And how the bankers use it. New York: Frederick A. Stokes.
Corrado, Anthony. 2005. Money and politics: a history of federal campaign finance law. In The new campaign finance sourcebook, ed. Anthony Corrado, Thomas E. Mann, Daniel R. Ortiz, and Trevor Potter, 7–47. Washington, DC: Brookings Institution.
Danna, Anthony and Oscar H. Gandy. 2002. All that glitters is not gold: digging beneath the surface of data mining. Journal of Business Ethics 40 (4): 373–386.
Fiekert, Clare. 2009. Campaign finance: United Kingdom. Washington, DC: Library of Congress.
Flaherty, Peter. 2009. Hsu convicted but no reckoning for Hillary. National Legal and Policy Center, Promoting Ethics in Public Life. May 20, 2009.
Florini, Ann. 1998. The end of secrecy. Foreign Policy 111 (Summer): 50–63.
Fung, Archon, Mary Graham, and David Weil. 2007. Full disclosure: The perils and promise of transparency. New York: Cambridge University Press.
Holman, Craig B. and Robert M. Stern. 2000. Access delayed is access denied: electronic reporting of campaign finance activity. Public Integrity 11 (Winter). https://www.citizen.org/documents/electronic_reporting2000_new.pdf (accessed September 5, 2010).
Hood, Christopher. 2006. Transparency in historical perspective. In Transparency: The key to better governance? ed. Christopher Hood and David Heald, 3–24. Oxford: Oxford University Press.
Johnson, Deborah G. and Kent Wayland. 2010. Surveillance and transparency as sociotechnical systems of accountability. In Surveillance and democracy, ed. Kevin D. Haggerty and Minas Samatas, 19–33. London: Routledge.
Johnson, Deborah G., Priscilla M. Regan, and Kent Wayland. 2011. Campaign disclosure, privacy and transparency. William and Mary Bill of Rights Journal 19 (4): forthcoming.
Lessig, Lawrence. 2009. Against transparency. The New Republic, October 21. http://www.tnr.com/print/articles/books-and-arts/against-transparency (accessed October 12, 2009).
Lyon, David. 2007. Surveillance studies: An overview. Cambridge: Polity.
Potter, Trevor. 2005. Campaign finance disclosure laws. In The new campaign finance sourcebook, ed. Anthony Corrado, Thomas E. Mann, Daniel R. Ortiz, and Trevor Potter, 123–160. Washington, DC: Brookings Institution.
Solomon, John. 2007. Clintons to return $850,000 in Hsu funds. The Trail: a daily diary of campaign 2008. The Washington Post Online Edition, September 10, 2007.
Stone, Brad. 2009. Prop 8 donor web site shows disclosure law is 2-edged sword. New York Times, February 8.
Thompson, Bruce A. 1953. Campaign contributions and expenditures in California. California Law Review 41 (2): 300–319.
Zuboff, Shoshana. 1985. Automate/informate: the two faces of intelligent technology. Organizational Dynamics 14 (2): 5–18.