For better or worse, we have entered a new paradigm of public service provision characterized by more user involvement and a blurring of the boundary between providers and users. This paradigm affects urban infrastructures and public services, including the four big utilities of water, electricity, sanitation, and transportation. Terms such as “(prod)user,” “expert amateur,” and “civic hacker” indicate that users have to some extent become service providers themselves, turning their cars into taxis and their bedrooms into hotels, generating electricity and monitoring public service quality, reporting issues through online tools or challenging official data using their own sensor networks. Individuals book their own flights, execute their own money transfers, choose between different service providers, and leave detailed traces of their choices and preferences in the process. Not all aspects of this infrastructural inversion are voluntary or make the interactions with and within urban systems less burdensome. The increased level of participation in urban services also enables new accountability instruments and tools for mobilization, which promise the public more voice in deciding matters of infrastructure governance.
Unlike earlier phases of decentralization, which were promoted by market-oriented public management reformers, the current infrastructural inversion is driven by a diverse group of actors, each with different goals and politics. Local governments, NGOs, technology startups, and data activists use social media technologies to provide, augment, and scrutinize public services. The similarity of their tools and their coordination and mobilization tactics, however, can distract from ideological differences and diverging visions of what constitutes “good governance.” Depending on perspective, civic technologies may appear neoliberal or neo-Weberian, critical or service-oriented, tools for deliberation or technocratic governance. An application that helps neighbors fix infrastructural issues among themselves may be designed to foster shared civic values by appropriating the transactional mechanisms used by commercial services such as Uber or Airbnb. A city could use this app to make public service provision more useful and meaningful, or offer it as an excuse to withdraw from service provision and shift the responsibility to the public.
The described paradigm is information-centric: based on the assumption that all urban issues are problems that can be, in one way or another, addressed by exchanging information and improving coordination. This involves assessing needs and issues either directly by collecting feedback or indirectly by appropriating available data sources that can be used as proxies. It involves making service provision more targeted by intervening only where service is needed and measuring outcomes based on available data and the metrics of choice. Civic technologies promise to open new spaces for deliberation and help create public pressure by coordinating social action on a massive scale when services fail. Yet this information-centrism is problematic in several ways. Information does not automatically translate into action, and data generated by sensors or volunteers offer an image that is necessarily incomplete and shaped by particular interests. As acknowledged even by optimistic voices of the development sector, information technology is no “equalizer” that mitigates economic and social differences (World Bank 2016). Working with public records can resemble searching for a needle in a haystack and, paradoxically, transparency and open data initiatives can obfuscate relevant information by creating bigger haystacks.
Civic technologies are not just a pragmatic means for simplifying communication with cities or collecting information about issues of concern. They create a world of informational objects through which users describe and perceive their environment. This world, in Woolgar’s terms, configures its users, requires specific knowledge, and favors certain behaviors and forms of expression (Woolgar 1991). Interface design determines how users are represented and shapes their interactions. By simplifying communication and shortening distance, civic technologies informalize the interactions between citizens and governments. At the same time, they make these interactions more formal by recording a persistent and identifiable trace that is not bound to the specific context of the interaction, one that can be aggregated and analyzed.
As they define the objects, rules, and governance of this interface world, interface designers are often not aware of their implicit ontological claims. They have, however, the capacity to make infrastructures legible in a way that acknowledges their heterogeneous natures. This can involve emphasizing the seams between system components, providing clues that indicate activity, creating interfaces that are socially translucent rather than transparent, developing visual languages to express the processes of governance, or supplying references about where to get additional information. At the same time, their representations of socio-technical systems are always partial and incomplete, based on perspectives that are never universal. Legibility always serves a specific purpose.
Perhaps more than other infrastructures, waste systems have so far resisted pervasive datafication. Compared to some of the more esoteric issues of smart cities and data-driven urban management, waste management struggles with fundamental data issues that affect policy decisions. While some aspects of waste management are increasingly captured—examples include the material composition received and processed by automated materials recovery facilities (MRFs), collection volumes at the household scale with the help of RFID tags, and remote sensing methods to identify informal dumpsites (Hannan et al. 2015)—these data sources remain islands of information in a largely opaque system. These islands may be limited to a specific purpose, such as the economics of collecting and valorizing specific recyclables including bottles, metals, papers, and plastics. They may be limited to a specific area by, for instance, the idiosyncratic and incompatible taxonomies and data collection methods in different states, or more importantly, by the specific challenges faced by developing and developed countries (Wilson 2007). Connecting these islands is complicated by the fact that existing data are often based on unknown or incompatible methodologies, starting with widely diverging definitions of such fundamental concepts as municipal solid waste.
These issues make the waste system a good case study for investigating the mechanisms and limitations of monitoring practices for socio-technical systems, especially considering the political and contested nature of the waste system’s definitions and monitoring procedures. The case studies discussed in this book approach the waste system from a bottom-up perspective, examining the movements of waste across state boundaries, the informal organization of collection, and the processes of urban maintenance through citizen reports that cover waste and sanitation issues. Constructing an image based on the data from these studies required engaging with technologies of geolocalization, open data repositories, data from other participatory initiatives, and regulatory databases.
In conceptualizing infrastructure legibility, I have compared notions of urban legibility described by James Scott and Kevin Lynch. Scottian legibility utilizes a perspective from above and involves creating system-wide, simplified representations based on standardized symbolic conventions. Lynchian legibility, on the other hand, means constructing an image of the system from below, based on a heterogeneous set of clues and traces. While Scottian legibility relies on the metaphor of territory-as-text, Lynchian legibility is perhaps best thought of as tracking by scent.
All three of the case studies take advantage of location sensing to generate data sets consisting of variables including timestamps, latitudes and longitudes, and derivatives such as speed, distance, or distribution. Although these quantitative values are a standardized, reductive, and essentially asemantic representation, they also pinpoint a place, and place is layered with meaning. According to Waldo Tobler’s first law of geography, “Everything is related to everything else, but near things are more related than distant things” (Tobler 1970). The geographic coordinate is an indexical point to a complex network of interrelations in which metric distance is not the only measure for expressing proximity. Regarded this way, spatial analysis becomes a qualitative endeavor.
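To make the derivative variables concrete, the following minimal sketch (in Python) shows how distance and average speed might be derived from two timestamped coordinate pairs using the haversine formula. The data layout, function names, and example coordinates are illustrative assumptions, not the analysis pipeline used in the case studies.

```python
# Minimal sketch: deriving distance and speed from two timestamped GPS fixes.
# Field layout and example coordinates are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def speed_kmh(fix_a, fix_b):
    """Average speed between two fixes, each a (timestamp_s, lat, lon) tuple."""
    t1, lat1, lon1 = fix_a
    t2, lat2, lon2 = fix_b
    hours = abs(t2 - t1) / 3600.0
    return haversine_km(lat1, lon1, lat2, lon2) / hours if hours > 0 else 0.0

# Example: two reports roughly six hours apart.
print(speed_kmh((0, 47.6062, -122.3321), (21600, 45.5152, -122.6784)))
```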
In the Trash Track study, the observation of the waste system is limited to interpreting the recorded data points reported by the deployed location sensors. In the ideal scenario, a tracked item would report four to six locations per day; in reality, it typically reported fewer. The sparse data made it difficult to identify whether a report was sent from a facility or from the road, and localization artifacts made the reported location itself a matter of uncertainty. Because automatic methods for detecting stops, clustering locations, and geocoding facilities require a certain density of data to work reliably, analysis was mostly a manual process of collecting clues from various sources, all the way from facility databases to waste management contracts. Demonstrating true dedication to waste forensics, my colleague David Lee spent part of his honeymoon on a road trip with his wife, exploring reported locations and visiting waste facilities in rural Oregon. But despite its sparsity, the data captured other aspects typically not included in official data sources, especially information about time and duration. Such information can be relevant for inferring the carbon emissions of organic waste or strengthening evidence by matching the recorded trajectories with shipping documents. Overall, though, constructing evidence from sparse location data remains a precarious endeavor because multiple interpretations are possible.
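To illustrate why such methods need a certain density of data, the following sketch shows one naive way to flag candidate stops in a location trace: consecutive reports that remain within a small radius for a minimum dwell time. The thresholds and the data layout are assumptions made for illustration; with only a handful of reports per day, such a heuristic quickly breaks down, which is why the Trash Track analysis relied largely on manual interpretation.

```python
# Naive stop detection over a sparse location trace. Thresholds and data layout
# are illustrative assumptions; facility matching in Trash Track was largely manual.
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) pairs."""
    (lat1, lon1), (lat2, lon2) = p, q
    a = (sin(radians(lat2 - lat1) / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(radians(lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))

def detect_stops(trace, radius_m=500, min_dwell_s=6 * 3600):
    """Group consecutive fixes within radius_m; keep groups lasting at least min_dwell_s.

    trace: list of (timestamp_s, lat, lon) tuples, sorted by time.
    Returns a list of (start_s, end_s, (lat, lon)) candidate stops.
    """
    stops, group = [], []
    for fix in trace:
        if group and haversine_m(group[0][1:], fix[1:]) > radius_m:
            if group[-1][0] - group[0][0] >= min_dwell_s:
                stops.append((group[0][0], group[-1][0], group[0][1:]))
            group = []
        group.append(fix)
    if group and group[-1][0] - group[0][0] >= min_dwell_s:
        stops.append((group[0][0], group[-1][0], group[0][1:]))
    return stops
```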
The Forage Tracker study that took place in Brazil was initially motivated by a very Lynchian question: How do informal collectors read the urban environment, how do they find material, and which parameters inform their spatial decisions? Although the GPS traces and observations recorded in the cooperative during the experiment were no more than anecdotal glimpses into a system that is opaque almost by definition, this time we had the opportunity to contextualize the traces with explanations from the collectors. Although each collector used and described different spatial strategies—focusing on specific materials and collecting from particular clients in particular areas—they all were influenced and constrained by the same parameters, including traffic, distance, and terrain, and most importantly the market prices of their goods.
In the third case study, citizen feedback systems offered a window into how residents perceived problems in their neighborhoods. The data were subjective and biased in many ways. The gravity of a described issue and the urgency expressed in the submitted report did not always correspond. Even if issues were perceived similarly, not every resident would decide to contact the city about them, and those who did used different media, ranging from letters to phone calls to smartphone reports. To some extent, citizens also read the city through the feedback app, perhaps alerted to issues in their neighborhood through the system. The city read the concerns of its constituents through the submissions captured by its constituent relationship management (CRM) system. In both cases, categories and interface design shaped how each party perceived the concerns. The study also demonstrated how an interface acts like a mirror: citizen reporters saw their role in the maintenance of infrastructure through their reports and the actions they triggered in a city department.
From the Scottian perspective of universal and reductive symbolic representations of the infrastructural landscape, the Trash Track study made it possible to integrate information across system boundaries such as state borders, service contract areas, transport modalities, or waste stream designations. Here, interpreting the recorded traces relied on the availability of official data sources.
In Forage Tracker, the difficulties in establishing legibility from above are manifest in the struggles of local, state, and national governments to collect reliable data about the informal sector that can serve as evidence to inform policy decisions. Unlike in the abstract Scottian model of the modernist state, the administrative levels of government are not the most powerful actors within the informal sector, and mandates requiring cooperatives and associations to report data to cities are executed to varying extents and with varying rigor, partially undermining the efforts of standardized data collection and benchmarking.
In the study of citizen feedback systems conducted in Boston, the effort to shape and establish legibility from above is manifest in the ongoing evolution of citizen feedback systems. The Scottian notion that administrative taxonomies influence reporting behavior becomes a bidirectional process of adaptation. The changing features and service types found within system interfaces bear witness to the iterative approach taken by local government to guide and shape their interactions with constituents and their attempts to reconcile internal structures and terminologies with the perception of issues by the citizens. Again, the boundary between constituents and officials is blurred by conscious design decisions in systems such as SeeClickFix that represent all parties in a similar way. This user parity is also seen in the fact that officials frequently use, out of convenience, the citizen feedback app to report issues.
Throughout this book, I have avoided a strict separation between data and visualization, as well as between sensing and displaying. The Trash Track data set was visualized in different ways, including animations, interactive graphics, static maps, and quick-and-dirty working models that used online mapping services. The multiple representations emerged from data explorations or were produced as public presentations. In Forage Tracker, data and maps were sometimes handwritten and sometimes created in a digital format, with one format often grafted onto another. Mapping a route involved recording a trace, printing it as a physical map, and annotating it manually during an interview. Occasionally the cooperatives produced maps of service areas and collection routes, but often the neighborhoods where they operated did not exist on official maps. In the case of citizen feedback apps, the maps used by the different systems were more consistent. All of them summarized reports as online markers. However, they all used different ways to represent users and to facilitate user interactions in the interface.
In the policy domain, visualization practice is often understood as the translation of predetermined messages into accessible visual forms. In research practice, the transitions between data analysis and data visualization are more fluid, and visualization practitioners, who are often involved in data collection and analysis, engage deeply with the characteristics and limitations of a particular data source. A failed visualization is often one that does not account for a data set’s structures, error ranges, and biases. Data visualization artifacts are, like data sets, based on a codified symbolic language. In data analysis, visual and computational operations are often used interchangeably: manipulating and transforming a data set usually involves exploring it through scatterplots, mapping values onto discrete or continuous color scales, or arranging them in different spatial layouts. In all three case studies, visualization was an essential tool for the analysis and interpretation of the recorded spatial data.
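As a small illustration of how a visual operation doubles as an analytical one, the sketch below plots synthetic report locations and maps a made-up count onto a continuous color scale. The data, column meanings, and geographic extent are invented for illustration and are not drawn from the case studies.

```python
# Minimal sketch: exploring spatial data by mapping a value onto a color scale.
# All values are synthetic and for illustration only.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
lon = rng.uniform(-71.10, -71.05, 200)   # hypothetical longitudes
lat = rng.uniform(42.33, 42.37, 200)     # hypothetical latitudes
reports = rng.poisson(3, 200)            # hypothetical report counts

plt.scatter(lon, lat, c=reports, cmap="viridis", s=20)
plt.colorbar(label="reports per block (synthetic)")
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.show()
```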
At the beginning of this book, I introduced a definition of information as “data plus meaning,” which implies that meaning is located external to the data artifacts. Critics of the Big Data paradigm point out that meaning is defined by the context of data collection—the precise conditions under which a data set was encoded—rather than through the data values (Drucker 2011). In fact, many geographic data sets collected in crowdsourced projects present themselves as an aggregation of decontextualized and underspecified location markers that were generated by an anonymous collective under unknown local conditions. The studies in this book demonstrate the difficulties of interpreting sensor data collected in unknown environments and deployed by participants with different motivations and interests. Without contextual information, we end up with close to nothing in our hands. Research that takes advantage of data generated by social media services such as Twitter struggles with similar issues because the demographics of the users who generated the data, along with their specific motivations and purposes for using the service, are often unknown. In such cases one could say that context is sticky; it cannot be ignored without diminishing the data value.
It turns out, however, that some data sources are less sticky than others, and they prove to be reliable proxies for modeling phenomena that are far removed from the original context of data collection. An example of such a data set is the workhorse of many geographers and economists: the data captured by the Operational Linescan System (OLS) sensor on the satellites of the U.S. Defense Meteorological Satellite Program (DMSP). The geographic data grids, which show the nocturnal light emissions of cities and human activity, are used to model such diverse phenomena as economic output, urbanization and poverty, resource footprints, and disease outbreaks (Sutton 1997; Henderson, Storeygard, and Weil 2009; Elvidge et al. 2011; Bharti et al. 2011). Interestingly, the data set is an entirely accidental byproduct of military satellites built to measure cloud cover for reconnaissance missions (Hall 2001). Engineers discovered that their optical instruments were sensitive enough to register city lights, information that closely correlates with energy utilization (Croft 1978; Welch 1980). OLS data demonstrate the capacity of some data sources to transcend their original context if the apparatus of measurement and the context of observation are well understood and robust. Data generated through human interactions, however, rarely fulfill this requirement. In both cases, sticky and nonsticky, analysis requires thorough attention to the context of data generation.
A second obstacle when reading socio-technical systems is the known and hidden biases in data sets and research design constructs. The notion of “data” has been theorized and scrutinized from different angles, but the concept of “bias” is often taken for granted. “Bias” means a systematic pattern of error or a deviation from the true mean of a distribution (Kitchin 2014, 14). In other words, the concept of bias implies a known truth. A crowdsourced data set is biased in the sense that it originates from a self-selected group of volunteers and is not a random sample drawn from the larger population.
As the case study on citizen feedback systems in part III demonstrates, there are many aspects of citizen-generated data sets that cannot be evaluated in terms of their accuracy. There is no canonical form of a citizen report that could serve as a template to evaluate which themes are addressed, how arguments are rhetorically framed, and how the received feedback influences reporting behavior.
Beyond how accurately citizen feedback represents a population and the infrastructural issues they experience in their neighborhoods, the third case study demonstrates that data sets can be used to investigate the dynamics between citizens and cities by looking at how their interactions are influenced by the design factors of the mediating system. These data sets never represent conditions that are stable in time: cities constantly tweak the design of the reporting systems, and users adjust their behavior based on the feedback they receive. Computational social scientist David Lazer has shown that the predictive capability of the Google Flu Trends service—which was designed to predict flu outbreaks based on user search terms—degraded over time because users started to change their search behavior in response to flu-related news they received through the same search engine (Lazer et al. 2014). Such feedback phenomena are difficult to account for in terms of bias and truth without scrutinizing the dynamics of how the design of mediating technologies shapes the democratic discourse they facilitate.
In the context of global waste systems, the need for infrastructure legibility is not difficult to demonstrate, considering the urgency of the environmental, public health, and equity issues as well as the lack of evidence necessary to make informed policy decisions. But if we expand this concept to other kinds of socio-technical systems, what is achieved by making infrastructure more legible? Am I overstating the importance of information and awareness? One might object that urban infrastructures are remarkably resilient even when they remain entirely illegible, due in part to the appropriations and improvisations of users.
As discussed throughout this book, infrastructure legibility has an important function for accountability, and I think there are reasons to assume that this dimension has become more important over the past decades. Contemporary urban systems are characterized by complex structures of governance and ownership. They are often run by a hybrid network of actors that include banks and pension funds, public institutions, private corporations, and community organizations.
As utility poles and street lights are retrofitted with networked sensors in many U.S. cities, the accountability dimension of infrastructure becomes even more complicated. The party that owns and operates the sensors is not necessarily the same entity that owns the collected data and is accountable for what happens with the information. Questions of data life cycle, privacy, and public anonymity have to be solved in a complex network of accountability between all parties involved. At the same time, none of these aspects are legible to the pedestrian on the sidewalk, not even the fact that sensors are present and collecting data. Policies that regulate data sharing with third parties might change over the years, yet these changes do not have any consequences for how utility poles present themselves in the public space.
As the Internet of Things (IoT) enters the public space, even minuscule infrastructure consumption becomes measurable, quantifiable, and ultimately billable. The capacity of IoT for micro-transactions between networked devices introduces another accountability-related aspect, which could be called the “transactionalization of infrastructure services.” During the 2013 Turing Festival in Edinburgh, Mike Hearn, a former developer of the cryptocurrency Bitcoin, shared an idea for a future infrastructure he called TradeNet, which connects all existing objects and systems: “In this future scenario, the roads on which Jen is driving will have also become autonomous actors, doing trades with the car on TradeNet. They can submit bids to the car about how much they are going to charge to use them. If she is in a hurry, Jen can choose a road that is a bit more expensive but which will allow her to get into the city faster. Awesome, right?” (Hearn 2013).
It is not entirely clear what fuels Hearn’s enthusiasm for this scenario, but let’s assume it is the notion that the transportation infrastructure can be maintained entirely by billing users only for the “fair share” that corresponds to their service consumption. The governance of TradeNet is algorithmic, maintains a dynamic equilibrium, and adjusts prices based on market mechanisms to achieve an efficient system load. It is not difficult to compare this scenario to a “pay as you throw” model in the waste system, where each waste generator pays for the amount of waste generated. In both cases this model might have a positive environmental impact by affecting people’s decisions to conserve resources.
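To illustrate the transactional logic underlying both scenarios, the following sketch contrasts a load-dependent road toll with a simple pay-as-you-throw bill. The rates, the congestion multiplier, and the variable names are invented for illustration and do not describe TradeNet or any actual municipal tariff.

```python
# Illustrative sketch of "transactionalized" infrastructure billing.
# Rates and the congestion multiplier are invented; no real scheme is described.

def road_toll(base_rate_per_km, km, current_load, capacity):
    """Toll that rises with system load, approximating a market-clearing price."""
    utilization = min(current_load / capacity, 1.0)
    congestion_multiplier = 1.0 + 2.0 * utilization  # up to 3x at full capacity
    return base_rate_per_km * km * congestion_multiplier

def pay_as_you_throw(kg_trash, kg_recycling, rate_trash=0.25, rate_recycling=0.05):
    """Bill each generator only for the waste they actually set out."""
    return kg_trash * rate_trash + kg_recycling * rate_recycling

print(road_toll(0.10, 12.0, current_load=800, capacity=1000))  # peak-hour trip
print(pay_as_you_throw(kg_trash=8.0, kg_recycling=5.0))        # monthly bill
```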
But the TradeNet model also has interesting consequences for the status of infrastructure as a public good. As discussed earlier in this book, when service consumption can be accurately measured and billed with few transaction costs, exclusion becomes more feasible and the common good becomes a private good. In the case of Bitcoin, this is not without irony since the blockchain, the underlying infrastructure necessary to verify Bitcoin micro-transactions, is a common good itself. The system would not function without the contributions of Bitcoin enthusiasts who run the transaction-verifying nodes by contributing their own time, hardware, and electricity.
Bitcoin is a paradoxical commons. From the outside, its network is presented as a neutral and incorruptible self-governing algorithmic system. Bitcoin “is regulated, only by mathematics instead of politicians,” according to a common argument by supporters (Voorhees 2012). In this perspective, governance is seen as a form of housekeeping, required only to enforce existing contracts. “Just as robots have helped the world reduce menial physical labor, so cryptocurrency technology now gives us the tools to automate the menial labor of bureaucracy. Optimistically, the entirety of humanity will benefit as a result” (Barski and Wilmer 2014).
Not only is this concept of governance very different from the one used in this book, it is also inconsistent with what happens in the Bitcoin community itself. For more than two years, as the digital currency has gained popularity, the community has been deeply divided by the “block size debate,” a controversy over the appropriate bandwidth of the blockchain, which is determined by the size of the individual blocks that store Bitcoin transactions. This seemingly trivial technical detail has wide-ranging, even geopolitical implications for the distribution of power among the participating actors, since most Bitcoin mining activities are concentrated in China.
From the perspective of infrastructure legibility, Bitcoin is black-boxed. It presents itself to the outside as a transparent, transactional, and incorruptible algorithmic system of governance. But in fact, it is dominated by the same kinds of controversies and politics that shape most other socio-technical systems.
The schizophrenic aesthetics of systems that represent themselves outwardly as conceptually simple and algorithmically precise, hiding the messy negotiations necessary to keep the system running, are not limited to the Bitcoin network. From platform companies such as Uber to search engines, most digital services employ similar design choices. The minimalistic interface of Google’s search engine hides the company’s constant tinkering with its search algorithm to neutralize attempts by outsiders to manipulate the search results. The Uber smartphone app looks the same in every city of the world, hiding the fact that the company often has to negotiate with each city government to comply with local regulations. Contributors to crowdsourcing platforms are invisible and abstracted into a unified digital service API, as ironically acknowledged by the name of Amazon’s Mechanical Turk platform, which refers to an eighteenth-century faux automaton: a supposed chess-playing machine that concealed a human player.
All of these phenomena introduce new challenges for reading infrastructures. In the case of Bitcoin, the controversies and discussions around the protocol take place in public, the code is open source, and all changes are extensively discussed in the community. Kevin Hamilton and colleagues have investigated issues of “algorithm awareness,” meaning the degree to which everyday users are aware of the invisible algorithms that determine their online experience. As the authors explain, “Algorithms are buried not only outside of human perception, but behind walls of intellectual property” (Hamilton et al. 2014, 632). Questions of algorithm awareness gained public attention when Facebook researchers manipulated newsfeeds without user consent to study how emotions spread through the network (Kramer 2012), raising concerns about how such practices could be deployed in the context of national elections. Another example is the “right to be forgotten” legislation of the European Union (Mantelero 2013), which allows individuals to request the exclusion of information about themselves from Internet search results to protect them from abuse. At the same time, this right raises concerns about the possibility of manipulating the online representation of public figures.
Algorithmic governance brings to the fore certain dilemmas of algorithmic accountability (Diakopoulos 2014). Some forms of algorithmic governance work only as long as they remain secret. Early search engines such as AltaVista failed because users reverse-engineered the search algorithm, making their own sites more visible while diminishing the quality of results for everyone else. For similar reasons, the algorithms used to calculate credit scores remain secret, giving only vague indications about which factors are considered. In urban space, algorithmic modes of governance produce what Stephen Graham describes as “software-sorted geographies” that can manage the visibility of points of interest in online maps, direct users by way of navigation systems, or spatially adjust service rates (Graham 2005).
In the space of algorithmic governance, infrastructure legibility is first of all an issue of accountability, a question of integrating the nature and function of algorithms into the democratic discourse. Making the function of algorithms legible raises several dilemmas for which full transparency is not always a solution. In the last section of this book, I outline design principles that allow us to navigate the dilemmas and complexities involved in cyber-physical infrastructures.
Throughout this book, I have argued that design and governance are closely related. First, design involves many aspects of governance. From architecture to smartphone user interfaces, design regulates behavior and frames issues in certain ways. Second, governance also shares similarities with design. Setting policies and negotiating rules requires reconciling contradictory factors and making small adjustments over multiple iterations.
In the preceding chapters, I was concerned with different aspects and practices of dissecting and reverse-engineering waste systems. I will conclude by proposing provisional design principles that adapt the preceding discussions for contemporary cyber-physical urban infrastructures. My principles of accountability-oriented design run counter in many ways to the traditional ideas about “good design.” The functionalist design principles of clarity and simplicity that define current information design practices aim at reducing complexity to its essence (Rams 1984). But in the case of the messy, ambiguous, and sometimes paradoxical reality of infrastructural systems, such essentialism is futile. As Don Norman and Pieter Jan Stappers argue, the human mind is not well equipped to investigate socio-technical systems since we tend to look for simple, reductive models, a tendency reinforced by minimalist design heuristics (Norman and Stappers 2015).
To acknowledge the nature of complex socio-technical systems, a different set of design heuristics is needed. Such an approach would avoid deceptively simple and reductive representations, calling attention instead to the multiple perspectives on infrastructures. What could be called “accountability-oriented design” calls attention to issues of governance and the role of design as an agent that regulates human behavior and system interactions. In the remainder of this chapter, I discuss a proposal for accountability-oriented design, organized around a set of principles.
Accountability-oriented design is contextual. It considers a specific situation and a particular group of constituents to provide governance-related information where and when it is needed. In the earlier example of the sensor-equipped utility pole, an accountability-oriented design approach could mean alerting pedestrians to the presence of the sensor and pointing to online resources that provide details about the sensor’s data governance, such as how long information is stored and who has access to it.
An example of accountability-oriented design is what political scientist Dieter Zinnbauer terms “ambient accountability,” defined as “all efforts that seek to shape, use and engage systematically with the built environment and public places and the ways people experience and interact in them, in order to further transparency, accountability and integrity of public authorities and services” (Zinnbauer 2012). As an example of ambient accountability, Zinnbauer cites a construction site display, such as those legally required in many countries. The display identifies the architect, the client, and the construction company, the beginning and anticipated end dates, and the budget and funding sources if the project is public. If the construction site appears abandoned or presents safety hazards, it is important to have this information presented on site rather than hidden in institutional databases. Maintenance logs on machines and cleaning schedules in public bathrooms fall into the same category.
Ambient accountability also assists in educating the public. “Know your rights” murals can be found in parts of New York City, especially in areas with large ethnic minority and African-American populations. These murals inform people about their rights relevant to encounters with police—for example, that it is legal and encouraged (by the mural at least) to film officers on duty during arrests. Ambient accountability also includes actions of public shaming, such as the inflatable rats erected by members of U.S. labor unions in front of businesses that do not use unionized labor. But ambient accountability can also be more subtle in its expressions, for example in the way public officials represent themselves in their own offices.
Figure C.1 Know Your Rights mural in Bushwick, Brooklyn by artist Dasic Fernández. Screenshot from Google Street View, reproduced under Google Maps fair-use policy.
Just like the utility pole, most objects are embedded in larger systems. An accountability-oriented design approach looks beyond the boundaries of the object and considers how its different roles in the surrounding systems can be communicated through design. This can involve simple gestures, such as the practice of labeling waste bins “landfill” to designate the larger system the bin is part of. Mandatory product-labeling requirements, such as the disclosure of health effects of food or cigarettes, the environmental impact of packaging, and the exact meaning of terms such as “compostable” and “recyclable” are subjects of ongoing battles between regulators and industry precisely because they call attention to controversies in the larger systems of food production and manufacturing.
Accountability-oriented design is relational; it concerns the ways in which objects announce themselves to their surroundings. The shutter sound of cell phone cameras is not just a nostalgic reference to analog cameras, but indicates to the people in proximity that a photo has been taken. For this reason, some countries require phone manufacturers to include this sonic signifier, acknowledging that taking a photo is inappropriate in some situations.
What can be communicated by attaching a physical label and defining a designated icon is limited. Accountability-oriented design can address this by making sure that sensors in public spaces are both physically and virtually identifiable. If, as has become accepted practice, a waste bin collects data about pedestrian activity by skimming hardware addresses from personal devices, it should be possible for people to connect to this sensor through their own phones to access accountability-related information, such as the email addresses of those responsible for safeguarding the collected data.
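As one way to picture such a virtual identity, the sketch below shows a sensor publishing a machine-readable accountability manifest over a small HTTP endpoint that a nearby phone could query. The endpoint, field names, contact address, and policy URL are hypothetical; no existing standard for such manifests is implied.

```python
# Hypothetical sketch: a public-space sensor publishing an accountability manifest.
# All fields, addresses, and URLs are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

MANIFEST = {
    "device": "waste-bin-7f3a",
    "data_collected": "anonymized counts of nearby Wi-Fi devices",
    "retention_days": 30,
    "operator": "Department of Public Works",
    "data_steward_contact": "data-steward@example.org",   # hypothetical address
    "sharing_policy_url": "https://example.org/bin-7f3a/policy",
}

class ManifestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the manifest as JSON to any device that connects.
        body = json.dumps(MANIFEST).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ManifestHandler).serve_forever()
```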
Both physical and digital components are necessary because they are contingent upon each other. The open data movement frequently invokes the notion of “digital public space,” a web server where public data sets can be accessed. But despite this rhetoric, digital and physical spaces are not equivalent in their “publicness.” According to urban writers Jane Jacobs and Richard Sennett, the involuntary exposure to diversity is a central aspect of public space, where one cannot choose whom one runs into (Jacobs 1961; Sennett 1970). This is not the case in digital space, where chance encounters are less likely and often managed by a filter bubble. The only people who will be able to utilize open data sets are those who actively seek them, know where to look for them, and know how to work with them. The idea of ambient accountability could be instrumental for connecting open data resources to the physical places where they are relevant. QR codes, the two-dimensional barcodes that can be read by a smartphone, have become ubiquitous in public space, but because they are not human readable, they should be augmented with relevant, human-readable information.
In part III of this book, I discussed how the perception of the user in citizen feedback systems is managed by the configurations of the interface. This is not necessarily a deceptive practice, but a decision every designer of social interfaces has to make. Every design decision has consequences for user behavior, including many that cannot be anticipated. Accountability-oriented design suggests that a designer should be aware of his or her own role in regulating and shaping behavior.
But managing visibility also means considering the back-ends of protocols and software licenses that regulate the visibility of technical arrangements. The synergy and kinship between open standards, open source software, and participatory democracy are recurrent themes in many public sector projects. Since 2003, the city of Munich has migrated over fifteen thousand computers to the open-source operating system Linux. Other examples of open source in government include the adoption of the OpenDocument standard for all Massachusetts state entities in 2005, the partially crowdsourced Icelandic constitution reform, and the citizen-written transparency law in Hamburg, Germany (Shah, Kesan, and Kennis 2008; Landemore 2015; Verein f. mehr Demokratie 2012).
The notion of seamful design has been discussed in several instances in this book. At this point, I want to clarify that seamful design does not necessarily mean making an experience deliberately inconvenient.
Private ride-sharing services such as Uber and Lyft are increasingly integrated into public transportation systems. Navigation apps that combine real-time data from multiple modes of transportation make trips that involve public and private modes of transportation an almost seamless experience. At the same time, the boundaries between public and private services can become obscured. As local governments and transit authorities seek collaborations with ride-sharing companies, complex questions of data ownership and accountability arise. Seamful design in this context could mean disentangling the accountability relationships in public-private partnerships.
Connecting to a public Wi-Fi hotspot typically involves scrolling through several pages of a usage agreement that details the legal implications of connecting to the Internet through the provided connection. However obscure and misleading their language may be, these agreements represent an important accountability mechanism. Nevertheless, few people read them in places like airports where perusing detailed explanations of a socio-technical system is out of the question. In addition to the agreements, logging onto the hotspot could provide an abstracted, socially translucent representation of the activity of other users who are currently connected to the same system.
Reading clues and traces requires less effort than reading a text or decoding a symbolic language. This is the rationale behind ambient displays residing at the periphery of attention (Wisneski et al. 1998; Offenhuber 2008). The LEDs on an Internet router are symbolic representations of technical states that are unknown and meaningless to most users. They do not explain the function of the device or, more generally, the TCP/IP network. They nevertheless convey a sense of activity that most people are able to understand. If the LEDs remain relatively calm for hours, only to burst into frantic activity in the middle of the night, one might get curious about whether someone is trying to break into the network or the computer is simply performing a regular update.
In part III of this book, I contrasted the rich and messy appearance of “dirty visualizations” created by collaborators using whatever software and tools were at hand, with the polished, minimalist designs of professional information designers. Accountability-oriented design communicates on multiple levels, addresses different contexts and situations, and therefore necessarily involves redundancies.
Making things legible by sharing and publishing information is no universal remedy. Accountability-oriented design also requires a realistic and critical reflection on what can and should be addressed through design and data collection. Often, vulnerable groups have to withhold information to avoid having more powerful groups take advantage of them. Many things have to remain hidden. A whistleblower platform depends on the trust that the whistleblower’s identity remains protected. Transparency platforms call attention to failures rather than successes, which can be instrumentalized by political opponents. At the same time, radical transparency is an effective way of obfuscating relevant signals within a torrent of noise.
In many cases, the responsible solution is not to collect data at all. An accountability-oriented design approach is therefore not only concerned with showing what happens, but also with what does not happen, as secrets that should not be disclosed should be safeguarded. As the case study of Brazilian recycling cooperatives described in part II has shown, attempts at making informal practices legible through technology sometimes arrive at the conclusion that these practices should remain illegible.
In this book, I have described infrastructure legibility as structure and process, presence and social practice, governance and the civic self. I have used legibility in the context of data collection, experience and awareness, visual communication, and narratives for guidance. Legibility involves reading infrastructure through material and digital interfaces as well as through human practices and performances in physical and informational spaces. The aspects of infrastructure legibility that I have described are not conclusive categories; they are heuristics that make no claim to completeness or universality.
In an environment of increasingly mediated infrastructure, urban planners can learn from design disciplines that deal from the outset with the experience of infrastructure and its implications. The confusing and contradictory recycling ordinances and bins with a variety of shapes and colors are two obvious indications that the interfaces of waste systems suffer from a lack of attention to design. But designers are also frequently oblivious to the political nature of their artifacts and can benefit from lessons offered by social sciences and the humanities.
Waste systems are fitting exemplars of heterogeneous, illegible, and contested infrastructures. The archaic, physical nature of waste systems forces us to think closely about the process of observing a particular aspect, encoding it into data sets, and constructing evidence to inform policy decisions or enforce environmental laws. The implications of this study are not limited to waste infrastructures. To prevent the contents of open data portals and transparency initiatives from becoming informational waste that clutters the arena of public discourse, we must pay attention to the experience of information, which is not simply there and ready-to-hand, but something that needs to be made.