Pachube was an open Internet of Things (IoT) platform that allowed participants to “share real time environmental data from objects, devices and spaces around the world” (Pachube 2008). The shared sensor streams collected by amateurs and institutions included live data ranging from radiation and air quality measurements to the status of countless coffee machines in labs around the world.
Founded by the artist Usman Haque, the platform allowed individuals to connect networked sensors to the Internet and make their data accessible to others via its own application programming interface (API). Unlike earlier, constrained efforts such as personal automated weather stations connected by enthusiasts to websites such as the Weather Underground network,1 Pachube was open and configurable for any kind of data stream. By 2011, the project had become popular enough to potentially become a Wikipedia for live sensor data, and the Pachube team was dreaming about a global sensor network maintained by hackers, artists, and urban activists. In an online conversation on the Pachube blog with the urban interaction designer Adam Greenfield, Pachube executive Ed Borden sketched a vision in which a volunteer collective shaped the future of urban data generation, in his words, “establishing their own standards and questioning the standards of others” (Borden and Greenfield 2011):
BigGov has become irrelevant in the public sector, eclipsed by someone with a supercomputer in their pocket, open source hardware and software at their fingertips, and a global community of like-minded geniuses at their beck and call: YOU. YOU are the Smart City.
… Adam, we need better verbiage here. What do we call this “citizen of the Smart City” and how do we make sure there are a whole lot more of them?
—Ed Borden
Greenfield responded with a different take on infrastructure (ibid.):
We call them “citizens,” Ed.
… There are some things that can only be accomplished at scale—I think, particularly, of the kind of heavy infrastructural investments that underwrite robust, equal, society-wide access to connectivity. And for better or worse, governments are among the few actors capable of operating at the necessary scale to accomplish things like that; they’re certainly the only ones that are, even in principle, fully democratically accountable.
—Adam Greenfield
Greenfield turned out to be right. Less than two months later, the Pachube platform was sold, and its new owners renamed it and turned it into a closed commercial service—a success for its original creators, but arguably a loss for the community of contributors. The fate of Pachube, with its groundbreaking idea, growing community, infectious enthusiasm, and ultimately brief life as an open community, illustrates a question that frequently arises when participatory online communities are compared with urban systems: can infrastructures be crowdsourced? The final part of this book examines the tensions between the acts of building and maintaining systems, and between voluntary contributions and standardized maintenance protocols, as well as the role that mediating interfaces and technologies play in both.
The notion that distributed, user-driven platforms such as Pachube can make traditional urban service provision obsolete is a central narrative of networked urbanism and appears in many forms. It claims that the decentralized, information-centric, perpetually adapting, and voluntary nature of online platforms offers a replicable model for the organization of urban services that is superior to traditional centrally managed systems. Critic Evgeny Morozov describes this position as Internet-centrism, which, in his view, is based on the mistaken assumption that the Internet is a coherent object with its own essential logic, rather than a heterogeneous assemblage of wires, protocols, and human practices that involve both centralized and decentralized aspects (Morozov 2014).
Two examples of user-driven infrastructures, embodying the “logic of the Internet,” are frequently mentioned in optimistic accounts of possible technological futures: the world of open source software (OSS) development in general and the online encyclopedia Wikipedia in particular. They are often characterized as complex systems that seemingly work without central coordination. It is often overlooked that open source projects fail far more often than they succeed, and that both examples involve complex governance structures with many centralized aspects.
Nevertheless, open source projects, such as the Linux kernel developed under its “Benevolent Dictator for Life” Linus Torvalds, broke with the long-held assumption in software development that too many cooks spoil the broth. Despite relying on the work of amateurs, the software has the reputation of being stable and reliable. Software developer Eric Raymond attributes the success of the Linux kernel to what he describes as the “bazaar model”: instead of releasing only clean and stable versions of the code base authored by a small team of experts, the project succeeded by frequently releasing imperfect versions, relying on the community of developers to find and fix problems (Raymond 1999). Raymond characterizes bazaar-style development as a perpetually unfinished process of small increments that involves a constant rewriting, reusing, and discarding of code. Users are treated as co-developers whose bug reports and suggestions are an integral part of the development process.
Considering that open source software projects involve not only creative problem solving but also many tedious and repetitive tasks, it may seem only a small step to apply similar principles to questions of infrastructure governance and maintenance. The communities of open source software users need to be cultivated, but the constituents already live in the neighborhoods where they are affected by infrastructure deficiencies and could supply the local knowledge needed to resolve them. It might seem practical to address these issues through horizontal coordination and voluntary cooperation.
Such a position is abetted by technology advocate Tim O’Reilly, who offers OSS development principles as an alternative to bureaucratic decision making: “Open source software projects like Linux and open systems like the Internet work not because there’s a central board of approval making sure that all the pieces fit together but because the original designers of the system laid down clear rules for cooperation and interoperability” (O’Reilly 2011).
The alleged advantage of clear and transparent rules of code over messy politics is a recurrent theme in tech literature. This characterization, however, ignores the variety of governance models hidden under the broad umbrella of open source: its vastly different approaches to motivating contributors, resolving conflicts, and planning future directions. Open source projects are not always exemplars of democratic and decentralized governance, but can fall anywhere on a spectrum from dictatorship to anarchy. Projects such as the Linux distribution2 Debian have a constitution and elected leaders, while others have no governance structures at all. Many large projects such as the Linux kernel, Wikipedia, or the content management system Drupal have “Benevolent Dictators for Life” (BDFL). Within this group, some projects such as Wikipedia have detailed policies for decentralized conflict resolution, while others, such as the Linux kernel, are strongly centralized with the BDFL, usually the initiator of the software project, making strategic decisions that settle all disputes (Fay 2012).
The application of open source models to urban governance can mean, quite literally, using and developing OSS for cities. The nonprofit organization Code for America’s mission statement declares, “We build open source technology and organize a network of people dedicated to making government services simple, effective, and easy to use” (Code for America 2016). Former president Barack Obama’s Memorandum on Open Government calls for applying new technologies for managing and distributing information for purposes of government transparency (Obama 2009). Five years earlier, the city of Munich started the migration of all its software systems to open source software (Casson and Ryan 2006).
More often than not, however, “open source” is used metaphorically to describe distinct values and principles. For sociologist Saskia Sassen, open source embraces incompleteness and thus resonates with her understanding of cities, which are constantly remade, not only by powerful actors, but also by citizens, who may resist through their practices (Sassen 2011). In a similar vein, anthropologist Alberto Corsín Jiménez frames urban infrastructure not as a finished system, but as “a prototype, whose main quality is its permanent ‘beta’ condition” (Jimenez et al. 2014). By laying open the affordances and deficiencies of infrastructure systems to public scrutiny, open source acquires a justice dimension, becoming a means toward what he calls the “right to infrastructure” (ibid.).
Incompleteness, however, can only be seen as a positive value in combination with a capacity for continuous improvements. In the words of Beth Noveck, lawyer and head of the White House Open Government Initiative: “Whenever we confront a problem, we have to ask ourselves: How do I parse and distribute the problem? How might we build feedback loops that incorporate more people” (Lathrop and Ruma 2010, 49)? To facilitate distribution and asynchronous collaboration, several tools used for managing open source projects have been reimagined for urban governance. This includes version control systems (VCS), platforms that document every change to the codebase and facilitate communication and deliberation grounded in the activity of coding (Fuller and Haque 2008). Through VCS, bug trackers, and other web platforms, deliberation can happen asynchronously, and every change to the codebase can be reversed.
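The properties that make version control attractive for deliberation, namely a complete, legible history and reversible changes, can be sketched in a toy model. The `Repository` and `Commit` classes below are purely illustrative and not an actual VCS implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Commit:
    author: str
    message: str
    content: str  # snapshot of the document after the change

@dataclass
class Repository:
    """A toy append-only history: every change is recorded and reversible."""
    history: list = field(default_factory=list)

    def commit(self, author: str, message: str, content: str) -> None:
        self.history.append(Commit(author, message, content))

    def head(self) -> str:
        return self.history[-1].content if self.history else ""

    def revert(self, author: str) -> None:
        # A revert does not erase history; it records a new change that
        # restores the previous state, so the dispute itself stays legible.
        previous = self.history[-2].content if len(self.history) > 1 else ""
        self.commit(author, "revert last change", previous)

repo = Repository()
repo.commit("alice", "initial draft", "version 1")
repo.commit("bob", "contested rewrite", "version 2")
repo.revert("alice")
assert repo.head() == "version 1"   # the earlier state is restored
assert len(repo.history) == 3       # yet nothing is removed from the record
```

The point of the sketch is the last two assertions: a contested change can be undone without destroying the record of the disagreement, which is precisely what grounds deliberation in the activity of coding.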
This space of asynchronous collaboration, where every action within the platform is legible in its historical context, enables a specific culture of participation. As anthropologist Chris Kelty describes, a substantial part of the work of developer communities is dedicated to building and refining not only the software to be developed, but also the very tools and modes of communication necessary for coordinating the community—in his words, a “recursive public” that constantly rebuilds itself (Kelty 2005). Within this recursive public, individuals do not assume exchangeable roles as equal but generic citizens, but become active in their own areas of expertise—a phenomenon that human–computer interaction researchers Stacey Kuznetsov and Eric Paulos describe as the “rise of the expert amateur” (Kuznetsov and Paulos 2010).
The qualification “expert” is relevant in this context. The widespread assumption that the impressive products of online collaboration such as the Linux kernel are the result of hobbyists with too much time on their hands is contradicted by empirical studies that reveal that many participants are self-selected professionals and experts (Brabham 2012). According to a report by the Linux Foundation, only about 15–20 percent of the Linux kernel is written by independent developers; the rest is contributed by companies such as Intel or IBM (Kroah-Hartman, Corbet, and McPherson 2008).
The lesson drawn by initiatives such as Code for America is that not everyone needs to get involved—after all, only a tiny fraction of Wikipedia users also edit or add content. Instead, the goal of these initiatives is to reach those potential expert amateurs who may contribute to platforms such as Pachube, can build systems for measuring radiation, or have an intimate knowledge of vacant lots in their neighborhood.
The rhetoric celebrating the advantages of decentralization and co-production does, however, raise a question: is centrally managed, efficient infrastructure really such a bad thing? More precisely, when did it start to be seen as such?
Decentralized models for building infrastructures certainly predate the Linux kernel. Historian of technology Thomas Hughes illustrates the transition from modernist, top-down infrastructure development with its central hierarchies to the postmodern modes of planning in heterogeneous networks through the example of Boston’s Central Artery/Tunnel (CA/T) Project. The highway tunnel project in central Boston, completed in 2007, replaced an elevated inner-city highway built during the 1950s and took more than twenty-five years to complete. Unlike fifty years earlier, the main challenges in the CA/T Project were no longer technical and logistical, but social and organizational. Without a single central planning authority, the interests of many different actors had to be negotiated—federal laws were enacted to secure funding, firms founded in joint ventures, environmental assessments conducted, and especially important in the last phase of the project, pressure from media and interest groups negotiated (Hughes 1998, 197ff). In contemporary infrastructure planning and governance, urban systems are rarely operated by a single entity, but typically by a hybrid network of actors with different relationships and dependencies: various governmental agencies, utility companies, financial institutions, and other interest groups.
By the late 1960s, most cities in developed countries were connected to the four essential public services—water, transportation, sanitation, and electricity—which were built, owned, and operated by the public. Geographers Steve Graham and Simon Marvin term this form of uniform, standardized, and universal service provision the “integrated infrastructural ideal” (Steve Graham and Marvin 2001, 73). Centralized infrastructure development has many advantages. The high initial costs create natural monopolies and make it difficult for private actors to compete. Economies of scale increase efficiency with the size of the system, and the consensus that these services constitute public goods makes governments their natural custodians.
As chronicled by Graham and Marvin under the label of “splintering urbanism,” the urban infrastructure landscape became fragmented during the second half of the twentieth century as the public hand largely withdrew from service provision and privatized central utilities. They describe a number of causes responsible for its demise. The economic crises of the 1970s and the departure from the Keynesian welfare state left many public projects underfunded, diminishing the quality of service. The changing political economies of globalization further weakened the role of the state as the main provider of infrastructure. Finally, the departure from the modernist command-and-control planning paradigm made infrastructure projects increasingly complex and expensive, a development accelerated by social critiques and fierce civic opposition to federal infrastructure projects conducted under the urban renewal program in the United States, which involved large-scale eminent domain, demolition, and resettlement (Steve Graham and Marvin 2001, 91ff).
Graham and Marvin diagnosed increased inequalities along spatial, economic, and social dimensions due to the fragmentation of infrastructural networks. In the process of privatization, the large services became unbundled and marketed in different locations as separate services with different prices for different user groups. This fragmentation, reinforced by the economic logic of private service provision, creates winners and losers. Economically disadvantaged groups or areas often end up without service, or they have to pay a higher price for it (ibid., 284).
The described processes of infrastructure decentralization affect how users perceive and engage with a system. The deteriorating condition of urban infrastructure, power blackouts, and inequalities of service provision draw attention to infrastructure and make these systems more visible. The same conditions also make infrastructure more participatory—though, alas, unaccompanied by the fanfare of engagement and empowerment. Brittle and unreliable systems, not just in the Global South, require the increased involvement of their users, improvised solutions, and informal coping strategies (Graham 2009, 144). Paradoxically, the abundance of competing services can have a similar effect. Consumers become more actively involved with an infrastructure by having to choose among services whose relevant differences are not immediately obvious (Steve Graham and Marvin 2001, 148). Many services demand a higher level of infrastructure literacy from the user than they did in the past. In the example of waste management, residents of Seattle need a basic understanding of recycling processes to decide whether a greasy pizza box or a plastic bag belongs in the recycling bin or the trash can.
The line between service provider and consumer is becoming blurred as users increasingly get involved in service provision. What started with subcultures such as the home-power movement has become commonplace (Tatum 1992). In many countries, households operating photovoltaic panels receive incentives or compensation to feed extra electricity back into the grid. Even on a more mundane level, more and more tasks that were previously in the domain of the service provider are shifted to users. This can happen visibly, as in the case of users having to manage bank accounts, or invisibly, as in regard to smart meters, which shift some of the grid’s operational logic to users and collect load data without them being necessarily aware.
The same processes of decentralization also change the perspective of the provider. For private utilities, the profitable operation of an urban service requires making service consumption legible. In a market of competing infrastructure services, it is difficult to cover investment costs through regular service rates, since these tend to move toward the marginal cost of delivering the service (Frischmann 2012). A solution to this dilemma is to measure service consumption at a granular level and divide the rates and conditions of service delivery into different segments. For example, to offer competitive rates for cell phone service that would otherwise not cover the costs of installing and maintaining base stations, a provider can charge more for text messages or prepaid phone services, which are used by demographics that are of less interest to the provider. Information technologies and sensor networks supply the fine-grained measurements necessary to bill service consumption at this level and to feed user consumption data into dynamic pricing models. By introducing multi-tiered service costs that depend on local conditions, providers can take advantage of unbundling and service fragmentation to recoup their investment costs. As Graham and Marvin argue, multi-tiered service provision and unbundling introduce inequalities to service delivery across user groups, services, and geographies: by charging users of prepaid services a higher rate than subscribers, introducing different rates for equivalent services, or offering a service only in profitable areas (Steve Graham and Marvin 2001).
An electricity network in which every household is connected through a smart meter that measures and submits real-time usage data offers more possibilities to the provider than just cleverly segmenting services for profit. By dynamically adjusting service rates to individual consumption relative to the overall system load, the provider can directly influence the user’s behavior in order to balance the load on the system. As a result, less power is needed during peak times, and the system becomes more efficient—the grid becomes a dynamic feedback system that can be optimized. Furthermore, based on usage patterns in collected data as well as interactions with other networks and external events, future states of the system can be anticipated. This is, concisely, the promise of the “smart city,” a term that appeared as early as 1992, and was further discussed by Bill Mitchell and others in the following decade (Gibson, Kozmetsky, and Smilor 1992; Mitchell 1995, 41; Stephen Graham and Marvin 1996). The idea originated in earlier work on urban cybernetics, which conceptualizes the city as a dynamic system in which all actors, including its planners, constantly adapt to each other’s actions, never reaching a static equilibrium (Forrester 1970; Goodspeed 2015). To accomplish this level of granular legibility, a smart city involves geographically distributed sensors and information infrastructures to measure the state of water, sanitation, electricity, transportation, healthcare, or policing.
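The feedback loop just described can be sketched in a few lines. The numbers, the linear demand response, and the proportional price adjustment below are invented for illustration; real tariff models and grid controls are far more elaborate:

```python
def demand(base_load: float, price: float, elasticity: float = 0.5) -> float:
    """Consumers curb usage as the rate rises (a crude linear response)."""
    return base_load * max(0.0, 1.0 - elasticity * (price - 1.0))

def balance(base_load: float, capacity: float, steps: int = 100):
    """Nudge the rate until system load settles near capacity."""
    price = 1.0
    for _ in range(steps):
        load = demand(base_load, price)
        # Proportional feedback: raise the rate when the grid is overloaded,
        # lower it when capacity sits idle.
        price += 0.1 * (load - capacity) / capacity
    return price, demand(base_load, price)

# Peak-hour scenario: demand at the flat rate (120) exceeds capacity (100).
price, load = balance(base_load=120, capacity=100)
assert abs(load - 100) < 1   # the loop settles near capacity
assert price > 1.0           # by charging a premium during the peak
```

Even this toy loop exhibits the defining feature of the smart grid: the provider no longer merely records consumption but continuously steers it, turning the network into an optimizable feedback system.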
Smart cities occupy an ambiguous place in the context of the decentralization, participation, and legibility of infrastructure. Smart cities are both agents and products of infrastructure privatization and fragmentation by involving IT companies such as IBM, Cisco, and Siemens in the management of public infrastructure. At the same time, their advocates strive for convergence instead of fragmentation, promoting the integration of diverse services in a unified model operating behind the scenes. The synoptic vision of urban processes, however, remains reserved for the administrator. The resident of a smart city is blissfully unaware of disasters mitigated, traffic jams averted, energy saved, and crimes prevented. Outspoken critic Adam Greenfield characterizes the smart city as a clinical, reductive, and generic model situated in generic space and time and predicated on notions of optimization and objectivity that are inappropriate for dealing with urban complexity. Greenfield argues that the idea of the smart city is based on a paradigm of seamlessness that is ultimately unachievable: “When systems designed to hide their inherent complexity from the end user fail, they fail all at once and completely, in a way that makes recovery from the failure difficult” (Greenfield 2013).
Indeed, IBM’s white paper outlining the company’s “Vision of Smarter Cities” did not consider citizen participation; instead, it offered a technocratic perspective that refers to citizens only as the recipients of services that enable them to enjoy a high quality of life (Dirks and Keeling 2009). Urbanist Anthony Townsend observes: “The technology giants building smart cities are mostly paying attention to technology, not people, mostly focused on cost effectiveness and efficiency, mostly ignoring the creative process of harnessing technology at the grass roots” (Townsend 2013, 118).
While the smart city model frames citizens primarily as consumers and passive sources of information, the contrary model of civic technologies emphasizes the agency of the individual, co-production, and creative appropriation. As critics like Townsend and Greenfield argue, instead of instrumenting the city with soon-to-be-obsolete sensors, technology can be used in more inclusive and participatory ways, leveraging local knowledge and ubiquitous technologies like smartphones. While smart city projects aspire to provide comprehensive solutions to general problems, civic technologies present themselves as more nimble and incremental, focusing on the nuts and bolts of local issues. However, just like the smart city, civic technologies are based on a data-centric ideology, grounded in the belief that improving coordination and access to relevant information is the key to solving urban problems. Nevertheless, urban problems such as segregation and gentrification cannot be solved solely through information—on the contrary, they seem to be exacerbated by information-saturated real-estate markets. I will address a number of broader critiques of civic technologies in the epilogue to part III.
Over the following sections, I will examine and compare different instances of a prototypical civic tech application that embodies the proclaimed values of openness, participation, and engagement: citizen feedback applications, which mediate citizen-government interactions and facilitate information exchange, feeding local knowledge back into the governance process. The following case study will show a more nuanced picture of civic technologies in action. The central concern is not the fact that something is made legible; rather, the issue is the effect that different approaches to establishing legibility have on public discourse about infrastructure governance.
The recent history of 311 systems in the United States illustrates the evolution of a feedback mechanism from a simple method of logging complaints and nonemergency incidents to an ambitious tool for civic engagement using telephone helplines, websites, and smartphone applications. In the context of this investigation, 311 systems are an interesting exemplar because they establish infrastructure legibility in two directions. First, they afford governments a detailed reading of the situation on the ground and the attitudes of constituents. Second, they make the activity of the local administration legible to constituents. By focusing on actual incidents, they offer a window into the material reality of infrastructure maintenance. Legibility in both directions is mediated by communication technology, which can act as a filter, an amplifier, a resonator, and a switch. Precisely how the interactions between citizens and governments are shaped through these interfaces will be the subject of the following case study.
The history of 311 citizen feedback systems in the United States is a story of growing ambition, provisional prototypes, and incremental improvements. Within a decade, what started as an attempt to relieve the load on emergency call centers and provide better access to services has become a primary means of data collection about the condition of urban systems, a tool for public accountability and citizen engagement, and a conduit for cooperation between government and citizens in infrastructure maintenance.
By the late 1980s, the police, fire, and medical emergency number 911 had become so popular for nonemergency requests that the call volume became a headache. Public management scholar Malcolm Sparrow and his coauthors quote a police executive who declared in 1985, “We have created a monster” (Sparrow, Moore, and Kennedy 1992, 105). The exploding number of cell phones only aggravated the issue.
To address this situation, in 1997 the U.S. Federal Communications Commission (FCC) designated the short code 311 for requesting nonemergency public services (Flynn 2001; FCC 1997). Some cities, including Buffalo and Baltimore, kept the nonemergency calls within the purview of policing. Other cities, among them Dallas and Chicago, integrated 311 call centers into local government (Mazerolle et al. 2002). Chicago launched its 311 community response system in January 1999 because of the urgent need to replace a non-Y2K-compliant mainframe system (City of Chicago 2013).
In 2002, then-Mayor Michael Bloomberg announced a 311 system for New York City as his first major policy initiative. At that time, twelve call centers served more than forty city agencies, often with significant overlap in competencies. Set up as part of the Office of Operations, the NYC311 call center was initially staffed by 300 operators who entered requests into a service management system used for scheduling department tasks. During the start-up phase, analysts and engineers continuously revised the service category assignments, protocols, and database structures used to parse and route the incoming requests. In 2009, NYC311 offered a web interface for submitting and tracking reports. By 2011, it was handling twenty-two million calls annually, more than the combined total of the next largest twenty-six cities with 311 call centers (New York City 2013).
After the launch of NYC311, the emphasis shifted away from the initial goal of load reduction. Although early experiences in Baltimore—where the police remained in charge of the nonemergency number—had shown a decrease in the volume of emergency calls (Mazerolle et al. 2002), a reduction did not occur when 311 calls were handled by the city (California, Department of General Services 2000). In New York, the goal was recast as simplifying access to city services for a multilingual constituency while simultaneously evaluating performance and increasing accountability (Cardwell 2002).
The focus on accountability included not only the city’s responsibility toward its constituencies, but also the horizontal and vertical relationships between government entities. An NYC311 technician noted in an interview with the author that former Mayor Bloomberg was a frequent 311 caller; he wanted to observe how requests were handled by different departments. Unlike earlier systems, NYC311 assigned a unique ticket number that allowed each issue to be tracked from request to resolution. This approach, referred to as “constituent relationship management,” was modeled on customer relationship management (CRM) systems used by large companies to track customer requests and schedule response tasks.
The data held in call-center CRM systems were useful in different respects. Complaints offer feedback on urban problems through the eyes of citizens. With their urban issues digitized, georeferenced, and categorized, city managers started to view 311 call data as valuable resources for measuring the quality of services. Because citizen calls represent self-reported data rather than random samples, CRM data are biased in many different ways and present a challenge to scientific analysis. Although the large data volume allows controlling for suspected biases through statistical modeling, the relationship between self-selected participants and the general population is poorly understood and limits the generalizability of results. This is, of course, a problem that is not restricted to citizen reports, but affects all large-scale data sets that are assembled from user-contributed sources (Mayer-Schönberger and Cukier 2013, 39; Hargittai 2015).
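One common correction for such self-selection is post-stratification: weighting each report by how under- or over-represented its source area is relative to the census. The neighborhood names and figures below are invented for illustration and are not drawn from any study cited here:

```python
from collections import Counter

# Hypothetical pothole reports per neighborhood (self-selected reporters).
reports = Counter({"Downtown": 400, "Northside": 100, "Southside": 20})

# Assumed census population shares for the same neighborhoods.
population_share = {"Downtown": 0.2, "Northside": 0.3, "Southside": 0.5}

total = sum(reports.values())
weights = {}
for hood, count in reports.items():
    sample_share = count / total
    # A report from an underrepresented area counts for more; one from an
    # overrepresented area counts for less.
    weights[hood] = population_share[hood] / sample_share

assert weights["Southside"] > weights["Downtown"]  # 13.0 versus 0.26
```

Weighting makes the reweighted sample shares match the census shares, but it cannot repair the deeper problem noted above: residents who never report contribute nothing to reweight.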
Nevertheless, citizen calls captured issues that would otherwise have gone unreported. For instance, the city of Chicago used citizen reports for combating bed bug infestations (Gabler 2010), and New York’s 311 data were instrumental in identifying an episode of air pollution by locating the source of a mysterious smell reported by residents in a particular neighborhood (Johnson 2010). Reported issues were also used to create econometric models that tracked the perceptions of neighborhood characteristics over time, including empirical tests of the infamous “broken windows” theory (O’Brien, Sampson, and Winship 2015).
While 311 call centers recorded incidents through questions posed by operators, online submissions through visual interfaces represented the next stage of citizen feedback systems. In the first “web map mashup,” launched in January 2005, amateurs reverse-engineered some code of the Google Maps service: housingmaps.com georeferenced rental listings, making them searchable by location (Singel 2005). In May of that year, ChicagoCrime.org offered georeferenced crime reports scraped from police logs (Holovaty 2005). Do-it-yourself cartography officially emerged that summer when Google released an API for its map service, allowing anyone to create online maps using his or her own data (O’Connell 2005).
In July 2005, public advocate Andrew Rasiej launched WeFixNYC.com,3 which let users upload photos of potholes to a photo-sharing website and georeference them in Google Maps (Shulman 2005). Unlike the earlier mashups, WeFixNYC invited users to create their own data. Driven by the emergence of smartphones with cameras and location sensing, similar services followed. In February 2007, the first mature system for submitting urban incidents, the UK platform FixMyStreet, started operation, followed in 2008 by the U.S. platform SeeClickFix.
By 2009, when the mobile apps for SeeClickFix and FixMyStreet were released, users could easily enter a location and a description in a smartphone app, turning the job of reporting into little more than taking a picture and assigning it to an incident category. In a novelty for local government, the city of Boston’s Office of New Urban Mechanics, in collaboration with the mobile startup ConnectedBits, released the reporting app Citizens Connect. The apps sponsored by municipalities sought to improve the city’s understanding of how citizens use services. Analysis of 911 calls had revealed that a remarkably large proportion of calls came from only a few addresses (Sparrow, Moore, and Kennedy 1992, 105). As Chris Osgood and Nigel Jacob from New Urban Mechanics noted in a discussion at Northeastern University, “We do not need to receive a report from everyone; we want to find the people who submit a lot of reports.”4
By this point, many local governments had started to embrace civic technologies, developing online tools and mobile apps or licensing platforms such as SeeClickFix. The narrow focus on data generation began to shift to a more ambitious goal of involving citizens in infrastructure services, engaging them as stewards of their environment. In one of the first papers to analyze a data set of citizen-submitted incident reports, researchers Stephen F. King and Paul Brown lay out a roadmap as follows:
In the first stage, local government deploys ICT to improve information provision to citizens and to enable transactions with citizens to be conducted electronically (“the Responsive council”). In the second stage the data generated by these interactions is analysed by local government to generate insight into service use and future demand (“the Insightful council”). In the final stage, citizens take the lead and, through sharing information with each other and with local government, become active participants in service design and delivery (“the Insightful citizen” stage). (King and Brown 2007)
Mobile citizen feedback apps fall within the field of volunteered geographic information (VGI) systems, which include participatory mapping projects like OpenStreetMap as well as disaster relief and accountability platforms like Ushahidi (Goodchild 2007). Despite the large number of reporting apps and platforms, most share similar functionality: take a picture, verify the location, select an incident category, and enter a text description. But with many cities and developers building similar tools, issues of interoperability and standardization have emerged.
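The functionality shared by most reporting apps implies a common underlying data model. A minimal sketch in Python (the class and field names are illustrative, not taken from any particular platform):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class IncidentReport:
    """One citizen-submitted report, mirroring the typical app workflow:
    take a picture, verify the location, pick a category, describe."""
    lat: float                       # verified device location
    lon: float
    category: str                    # e.g. "pothole", "graffiti"
    description: str                 # free-text field
    photo_url: Optional[str] = None  # uploaded media, if any
    created_at: datetime = field(default_factory=datetime.utcnow)
    status: str = "open"             # later updated by the municipality

report = IncidentReport(42.3601, -71.0589, "pothole",
                        "Deep pothole near the crosswalk")
```

Interoperability, in this light, is the question of whether such records can be exchanged between the many tools that all capture essentially these same fields.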
Open standards are an important factor in the widespread adoption of e-government tools. Open standards make it possible to use a broad range of clients, platforms, and interfaces while generating machine-readable data that can be incorporated into other applications. The Apps for Democracy contest held in Washington, DC, in 2009 introduced an open standard for incident reporting, Open311, which aimed to put cities in a position to share data and quickly implement feedback systems based on interfaces that outside developers could improve.
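Open311’s GeoReport v2 specification defines such a feedback system as a small HTTP interface: clients fetch a city’s service catalog and then POST new requests. A sketch of assembling the form parameters for a submission (the field names follow the published GeoReport v2 specification; the service code, coordinates, and API key here are hypothetical):

```python
import json

def build_service_request(service_code, lat, lon, description, api_key):
    """Form parameters for an Open311 GeoReport v2 POST /requests.json call."""
    return {
        "api_key": api_key,            # issued by the city's Open311 endpoint
        "service_code": service_code,  # category id from GET /services.json
        "lat": f"{lat:.6f}",
        "long": f"{lon:.6f}",          # the spec uses 'long', not 'lon'
        "description": description,
    }

payload = build_service_request("POTHOLE", 42.3601, -71.0589,
                                "Deep pothole near the crosswalk",
                                api_key="hypothetical-key")
print(json.dumps(payload, indent=2))
```

Because the same parameters work against any conforming endpoint, a single client app can, in principle, file reports with every city that implements the standard.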
Another requirement for civic technologies in the digital age is the open data principle: the public provision of government data in a structured, machine-readable format for unrestricted use (Lathrop and Ruma 2010). The U.S. Freedom of Information Act (FOIA) of 1966, extended by the Government in the Sunshine Act of 1976, granted citizens access to government information unless restricted for privacy or national security reasons. However, FOIA requests can take several months to process, and the photocopied or scanned pages released in response are not machine readable and therefore of limited use for computational analysis, requiring meticulous labor to render the data in a digital format.
Open data improves data exchange, allowing developers to build applications that freely utilize government information. Commercial dining guides, for instance, can benefit from restaurant inspection reports; traffic guides may incorporate public traffic and weather information. The opening of accurate GPS data for civilian use in May 2000 has nurtured an industry offering a range of location-aware services for portable devices. For governments, however, implementing open data is a slow process of coordinating heterogeneous entities. As New York’s Chief Data Analytics Officer commented during the Open Data Conference 2015 in Ottawa, “Convincing agencies to share their data is like pulling teeth.”
Beyond a means for sending service requests, citizen feedback platforms are accountability mechanisms that allow users to follow up on issues directly with city workers. This two-way connection turns infrastructure governance into an interactive process, a conversation. In their passionate case for civic technologies, Goldsmith and Crawford reference citizens who say that calling the 311 hotline makes them feel like they are complaining, whereas reporting apps make them feel that they are helping (Goldsmith and Crawford 2014). The authors argue that such tools also improve local governments, since they introduce a new form of accountability that focuses on results rather than processes. Since all interactions with citizens and their outcomes are reflected in public data, civil servants are judged by the public based on these results rather than on compliance with internal guidelines. In their diagnosis, process-centric accountability is responsible for what they describe as a current crisis in local government: a general ossification.
This argument, however, has a longer history and echoes central notions of the New Public Management (NPM) doctrine, which similarly called for a redefinition of “accountability,” from processes to results. Rooted in the conservative reorganizations of the public sector in 1980s Britain, NPM promotes a business-oriented model of governance that involves replacing bureaucratic accountability mechanisms with what public administration consultant Richard Boyle calls “post-bureaucratic control mechanisms,” which involve contracts and partnerships with private firms, continuous performance monitoring, and private sector management techniques (Boyle 1995).
Shortly after the collapse of the Soviet Union, management consultants David Osborne and Ted Gaebler called for an “American Perestroika” of public management (Osborne and Gaebler 1992). Characterizing public service provision as slow, inefficient, and fundamentally outdated, the authors postulate ten principles to improve public management: catalytic government that steers rather than rows; community-owned government that empowers rather than serves; competitive government; mission-driven rather than rule-driven government; results-oriented government that funds outcomes rather than inputs; customer-driven government; enterprising government that earns rather than spends; anticipatory government that prevents rather than cures; decentralized government; and market-oriented government.
Despite the ostensible emphasis on empowerment and participation, the benefits and success of NPM remain highly controversial. Public management theorist Kulachet Mongkol summarizes the various critiques of NPM in three broad points (2011). He calls the first the “paradox of centralization through decentralization”: the introduction of managerial and market-oriented principles under the banner of decentralization has led to a centralization of decision making by concentrating authority in the hands of a few public managers. This concentration is problematic since, as he argues in his second point, private sector managerial approaches are not directly applicable to the public sector: they tend to emphasize simple solutions for simple problems and fail to account for the most basic requirements of democratic governance (Drechsler 2009). Finally, the emphasis on measuring performance over process necessitates a new bureaucratic apparatus for conducting assessments that can introduce its own problematic ethical standards and incentives (Mongkol 2011).
While traditional public accountability instruments are designed to prevent waste and corruption, NPM shifts the emphasis toward measuring service quality. To paraphrase political scientist Christopher Hood, NPM constitutes a shift from public accountability to accounting (Hood 1995). The claimed benefits of key performance indicators (KPIs) for the quality of service provision remain controversial. Particularly in the domain of law enforcement, performance metrics that reward officers based on their number of arrests have come under criticism (Mazerolle et al. 2002). Even in the less contentious area of infrastructure maintenance, KPIs can limit the discretion of public officials, and it is not always clear whether metrics such as the number of filled potholes are an accurate proxy for service quality. Introduced to facilitate decentralization, KPIs paradoxically introduce centralization by requiring comprehensive information infrastructures for measuring service quality.
Many of Osborne and Gaebler’s principles resonate in contemporary visions of a participatory, user-driven infrastructure that emphasizes the roles of open source, public engagement, and empowerment: both share a diagnosis of public sector failure and a faith in the capacity of digital technology to resolve these failures by empowering constituents to take matters into their own hands. Yet citizen feedback systems and open data initiatives are not in all respects aligned with the goals of NPM. By creating new public services and platforms, current digital initiatives depart from the NPM imperatives of cost efficiency and the devolution of public utilities. Their information infrastructures are intended not only to measure performance but also to collect and integrate local knowledge. Citizen feedback systems are not instruments for limiting the role of government, since managing requests is a considerable additional burden. Nor do they typically reduce costs for a municipality, breaking with the central NPM paradigm of economic efficiency. As Nigel Jacob commented in a discussion:
Citizens Connect is extra work for everyone. It does not save money, and nobody has checked this, because money is not the metric. Like in policing, the appropriate metric is not the number of arrests, but the subjective feeling of safety; quantitative metrics can lead to perverse incentives. Citizens Connect is really about engagement. Inclusive language and the perception of value are important. Our conceptual model is different from that of the Smart City, where efficiency is central.5
Traditionally, accountability is a vertical relationship between citizens and elected officials or between a principal and a subordinate. Horizontal relationships of accountability also exist, for example, between agencies, in scientific peer review, or in professional evaluations and appraisals.
Accountability mechanisms for urban service provision can be implemented either by the “short route,” which directly connects citizens to service providers to resolve issues, or the “long route,” which uses a public authority as an intermediary. Studies suggest that the long route, allowing municipalities to enforce the contractual compliance of the utility, is more effective in getting citizens’ complaints resolved (Fox 2015).
Citizen feedback systems often involve more complex relationships of accountability: horizontal and vertical, formal and informal, and always involving a large number of stakeholders. Many citizen feedback systems can be described as social accountability initiatives, which aim to establish community-driven approaches for holding power holders accountable (Joshi and Houtzager 2012). While these initiatives often start with the community itself, they can also be spearheaded by an institution. International lenders like the World Bank promote social accountability as a way to combat corruption and to monitor how their funds are used in infrastructure projects.
Mechanisms of social accountability can be formal or informal, operating through the judiciary or through public pressure. When formal mechanisms such as elections or court systems fail or are unavailable, accountability initiatives resort to informal channels such as media campaigns and protests. In the absence of enforcement and formal sanctions, constituents might turn to tactics of naming and shaming, what sociologist Naomi Hossain describes as “rude accountability” (Hossain 2010).
Social accountability is increasingly employed in development projects to improve urban services. Service providers can be held to account more effectively by international lending institutions if the beneficiaries of services are directly involved in monitoring (Cavill and Sohail 2004, 155). In this view, social accountability could help prevent the misspending of public funds and make services more equitable for those who otherwise have no voice. By bringing infrastructure governance into the foreground, the approach can improve services and strengthen the perception of urban services as a public good.
Digital information technologies can play many different roles in helping social accountability projects make sense of infrastructure and public services (Offenhuber and Schechtner 2013). Sociologists Eric Gordon and Paul Mihailidis describe “civic media” as “the mediated practices of designing, building, implementing, or using digital tools to intervene in or participate in civic life” (Gordon and Mihailidis 2016). When supported by local governments, these practices reintegrate functions into the governmental sphere that were neglected under NPM. This model is sometimes described as Neo-Weberian, since it reaffirms the central role of the public sector in solving urban problems (Pollitt and Bouckaert 2004; Dunleavy et al. 2006).
Digital platforms have been used to document corruption or monitor elections. An initiative to map violent incidents after Kenya’s disputed presidential election of 2007 led to the development of the popular crowd-sourced mapping platform Ushahidi (Okolloh 2009). In cases such as these, civic technologies depend on the support of a dedicated community for development and to protect them from destructive forces and cooption (Zittrain 2008).
Because social accountability initiatives require a system of governance that respects the role of the community, they must interact with formal mechanisms of enforcement; they reach their limits when their scope does not extend to questions of procurement and contract negotiation. Service providers are not accountable to constituents in the way that public officials are; ultimately, they are responsible only for complying with the conditions specified in their contracts.
Smart city visions and civic media practices are often described in the dichotomy of top-down versus bottom-up: the urban manager versus the citizen-activist, central control versus decentralized organization, or the private versus the public good.
A pure version of a smart city might resemble the authoritarian dystopia caricatured in Jean-Luc Godard’s 1965 film Alphaville.6 At the same time, attempts to create a “responsive city” risk turning a city into something that merely reacts to requests from those who are the most vocal. Addressing citizen requests can tie up scarce resources, and short-term fixes can replace long-term strategic planning. In both cases, growing amounts of public data heighten the temptation to read these data sets as the reality of urban infrastructure problems.
The distinction between top-down and bottom-up is an evocative metaphor but not always a useful category for understanding infrastructure, because it tends to obscure the multifaceted nature of large socio-technical systems. A closer look at the technologies reveals that these dichotomies are not as clear cut as they might seem. Similarly, even smart city solutions are not as monolithic as both their supporters and critics frequently present them.
The engagement of the citizen has also changed. As sociologist Michael Schudson explains, the ideal of democratic decisions made by fully informed citizens is no longer attainable, if it ever was. Instead, Schudson sees the rise of the “monitorial citizen,” who concerns himself or herself with selected issues, possesses an unfocused awareness of what is relevant to his or her interests, “scans (rather than reads) the informational environment,” and is ready to mobilize when alerted (Schudson 1998). As the next chapter shows, civic tech tools in the form of feedback platforms have emerged from both top-down initiatives and activist projects. They handle the same issues and share much functionality, but they represent governments and citizens differently, their interface designs shaping the perception of what is and what is not a problem.